Experimenting with the unknown broadens your perspective of what you’re comfortable with. Learning highlights what you may not be aware of and what you take for granted. A prime example is moving from a dependence on relational, SQL-based storage systems into the land of NoSQL. When you swap out something as fundamental as how information is stored, you’ll naturally run into challenges. And they’re good challenges to face. They’ll reshape how you approach storage in general, for the better.
For example, if you move to a document database like MongoDB, you’ll be forced to consider how information is partitioned within your system. Then, when you return to relational databases, you’ll bring some of that partitioning back with you, which can have significant benefits even in a relational model of storage.
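To make the partitioning idea concrete, here’s a small sketch (the schema and figures are hypothetical, not from any real system): an order stored as a single self-contained document, the way a document database like MongoDB encourages, rather than rows spread across joined tables.

```python
# A hypothetical "order" aggregate stored as a single document. Everything
# the application needs to work with an order lives inside one partition
# boundary, instead of being normalized across orders, customers and
# order_lines tables joined back together at query time.
order_document = {
    "_id": 1001,
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "lines": [
        {"sku": "WIDGET", "quantity": 2, "price": 9.99},
        {"sku": "GADGET", "quantity": 1, "price": 24.99},
    ],
}

def order_total(doc):
    """Total an order without any joins; the aggregate is self-contained."""
    return sum(line["quantity"] * line["price"] for line in doc["lines"])

print(round(order_total(order_document), 2))  # 44.97
```

The partition boundary is the document itself: whatever you’d want to load and save together lives together, which is exactly the kind of grouping that also pays off in a relational schema.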
Here are some additional benefits you will experience:
- Faster software development.
- Less ceremony when describing new data structures or extending existing ones.
- Less friction between application models and storage formats.
- Increased software longevity.
- Better design, because you no longer have to invest in costly translation layers to mediate between application models and storage formats. Historically, that translation has led to suboptimal application models, which often cause issues down the road.
- Distinct partitioning of your system, which lends itself well to scalability.
- Simpler application models, thanks to that partitioning.
Recently, I gave a talk about the benefits of partitioning and automating development machines, which is a separate topic. But we used a MongoDB development machine as the example, and toward the end we discussed some of the reasons why alternative storage solutions like MongoDB are a valuable tool to add to an organization’s repertoire. Part 2 of the recording focuses on the benefits of leveraging MongoDB.
In Approaching Automation, I outlined a series of steps to follow to make worthwhile investments in automation. In the following example, I’ll show how to apply these steps to a software development process.
Inspecting code can provide valuable insight into improving the design of a system. Inspection tools are most valuable when they’re integrated with the environments developers use to create software, where they can provide instant feedback so developers can improve on the fly.
Inspection is also valuable as an after-the-fact analysis tool. One barrier to analysis is the time it takes to set up and inspect a code base. This process is ripe for automation, so time can be spent analyzing results, not gathering them.
But, blindly automating anything is as reckless as avoiding automation altogether.
Often, when discussing inspections, I find individuals wanting to inspect code bases every time a change is made to the system.
I’ll ask: how often do you perform this analysis today, and what do you do with the information? Often I find there’s no methodical approach, and sometimes inspection isn’t even part of an existing process. It’s just something that someone said was a good idea.
Whatever the case, by stepping back and discussing how often the information is used and what it’s used for, we can begin to understand the value of automating inspections.
By challenging ourselves to understand why the information is valuable, we can determine the appropriate level of automation.
Most teams are busy enough that they’ll be lucky to look at inspection results once a week. And if they have to generate those results manually, it’s much less likely they’ll even get around to it.
But if inspection reports are automatically available on a weekly basis, teams can invest that time in analyzing the results and, in turn, in acting on them.
NDepend is a tool that inspects a .NET code base and provides actionable metrics for improving it.
TeamCity is a platform for automating development processes, gathering results, and acting upon them.
Let’s walk through the basis for automating inspections with NDepend and TeamCity.
First, we should outline the process.
NDepend is used to inspect a code base. This requires checking out the code, compiling it, and then analyzing the compiled assemblies with NDepend.
Then, teams analyze the results for ways to improve.
And over time, they apply this insight to incrementally improve.
Eliminate the unnecessary
After outlining the process, it’s important to eliminate vestigial components. In the case of inspections, this means making a conscious decision about which inspections are meaningful and which aren’t. Don’t just take everything out of the box; spend some time with NDepend to craft your own custom inspections.
Next, it’s important to establish objectives. NDepend comes with the concept of rules, and reducing rule violations can improve the quality of a code base. For example, NDepend ships with rules that help detect breaking changes to software interfaces, as well as rules to detect dead code, which can hamper the longevity of software. Deciding on a set of rules to enforce may serve as a worthwhile objective.
Let’s say we want to reduce dead code in a system. Every system contains some amount of dead code. Some of it can be detected automatically. Measuring the current level of dead code in a system and setting goals to reduce it serves as a progress indicator.
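At its core, static dead-code detection of the kind NDepend performs can be pictured as a reachability problem over a call graph. The graph and entry points below are invented for illustration; real tools derive them by analyzing compiled assemblies.

```python
# Minimal sketch of dead-code detection as call-graph reachability.
# The call graph and entry points are hypothetical examples.
call_graph = {
    "Main": ["ProcessOrder", "Log"],
    "ProcessOrder": ["Validate", "Log"],
    "Validate": [],
    "Log": [],
    "LegacyImport": ["Validate"],   # never called from an entry point
    "OldHelper": [],                # never called at all
}
entry_points = {"Main"}

def find_dead_code(graph, entries):
    """Return methods unreachable from any entry point."""
    reachable, stack = set(), list(entries)
    while stack:
        method = stack.pop()
        if method not in reachable:
            reachable.add(method)
            stack.extend(graph.get(method, []))
    return sorted(set(graph) - reachable)

dead = find_dead_code(call_graph, entry_points)
print(dead)                                  # ['LegacyImport', 'OldHelper']
print(round(len(dead) / len(call_graph), 2)) # 0.33 -- the progress indicator
```

The ratio at the end is exactly the kind of measurement you can track over time as a progress indicator.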
What if ten percent of your system is dead code? What would it be worth to get that to five percent? What about one percent?
Make a decision
Everything above becomes the basis on which you decide whether or not to automate inspections of dead code, or of any other aspect of a system.
I always recommend a margin of two or three times the potential cost. That way you have room to absorb the unknown.
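The decision itself reduces to simple arithmetic. All of the figures below are invented for illustration; a sketch, assuming you can estimate the time spent today and the cost to automate:

```python
# Hypothetical cost-benefit check for automating inspections.
# Every figure here is made up for illustration.
hours_saved_per_week = 1.5    # time spent generating reports manually today
weeks_per_year = 48
hourly_rate = 100             # fully loaded cost per hour
automation_cost_hours = 20    # estimated effort to build the automation
safety_margin = 3             # the 2-3x margin to absorb the unknown

annual_savings = hours_saved_per_week * weeks_per_year * hourly_rate
worst_case_cost = automation_cost_hours * hourly_rate * safety_margin

print(annual_savings)                    # 7200.0
print(worst_case_cost)                   # 6000
print(annual_savings > worst_case_cost)  # True: worth automating
```

Even with a 3x margin on the estimated cost, the hypothetical automation pays for itself within the first year; if it didn’t, that would be a signal not to automate yet.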
There’s no better way to describe automating it than to show you:
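The walkthrough itself was recorded, but as a rough sketch, a TeamCity command-line build step can invoke NDepend’s console runner against a saved NDepend project file after checkout and build. The paths and project name below are hypothetical:

```shell
REM Hypothetical TeamCity command-line build step: after checkout and
REM compilation, run NDepend's console analysis. Paths are illustrative;
REM adjust them to your installation and project layout.
"C:\Tools\NDepend\NDepend.Console.exe" "C:\Projects\MyApp\MyApp.ndproj"
```

TeamCity then captures the generated report as a build artifact, giving the team the automatically available weekly results discussed above.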
Over time, use the information you capture in TeamCity from the output of NDepend to see if efforts prove worthwhile.
Not everything is so easily quantifiable. Nonetheless, you can start to see how a methodical process focused on value lets you scientifically improve your development process.
Most developers support a plethora of projects. Setting up development machines with the tools and components necessary to work on all of them is problematic, especially when development environments aren’t even close to the production environments applications are hosted in.
Partitioning the tools for each project into virtual machines and automatically creating virtual environments can give organizations a significant edge.
Gone are the days of:
- Wondering what is necessary to work on a project.
- Trial and error to set up development machines.
- Interrupting others to help you out.
- Relying on outdated instructions to set things up.
- Conflicting tools and bogged-down development machine resources.
Instead, you can count on:
- Knowing exactly what software a project requires, including versions and configuration.
- A history of why, when, who, and how that software was incorporated, by versioning the definition of environments.
- Fewer surprises, because you develop in an environment much more like production.
- Safely creating and testing the scripts that automate environment setup.
- Reduced time to get a project up and running.
- Happier customers, due to fewer problems in production.
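A minimal Vagrantfile along these lines shows how such an environment becomes a versioned definition. The box name, package, and provisioning details are illustrative assumptions, not taken from the talk:

```ruby
# Illustrative Vagrantfile for a MongoDB development environment.
# Box name and provisioning steps are assumptions; pin versions in practice.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"

  # Expose MongoDB's default port so an ASP.NET MVC app on the host
  # can connect to the database running inside the VM.
  config.vm.network "forwarded_port", guest: 27017, host: 27017

  # Provision MongoDB inside the VM; this script is versioned alongside
  # the Vagrantfile, giving a history of why, when, and how it changed.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y mongodb
  SHELL
end
```

Because the whole definition lives in source control, `vagrant up` recreates the environment on demand, and `vagrant destroy` throws it away without touching the host machine’s tools.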
Last week, I spoke about this at the Omaha .NET User’s Group. In the first half of the talk, I enumerated many reasons why this is valuable, then walked through an example of building a MongoDB development environment with Vagrant for use in an ASP.NET MVC application.
Yesterday, I hosted a webinar with JetBrains about moving beyond continuous integration with TeamCity. By building on the principles of fast feedback, team cohesion, confidence, and always having a known working state to start from, we can extend the ideas of continuous integration to further improve the development and delivery of software, creating even more value for customers.