Month: September 2014

If you can’t estimate, who can?

Hourly billing runs rampant in many professions. The justification is often rooted in the idea that those doing the work don’t know everything that will be necessary. To make ends meet, so-called experts charge by the hour. But if an expert doesn’t know what it will take, how in the world is their customer supposed to know?…

What if we said investment instead

Successful projects rely upon worthwhile investments. Worthwhile investments require agreement about objectives, about how to measure success so we know we’re on the right path, and about what the outcome should be worth. Unfortunately, many organizations use projects as a means to produce intermediate results, results that don’t have an understood correlation to any particular outcome. Often the result of a…

“While we’re at it”

Occasionally, an opportunity will arise to make an investment with the potential for an astronomical return. Sometimes these investments will literally fall into your lap; usually, it takes work to carve out a worthwhile investment. However you arrive at a potential investment, so long as the potential return is significant (10 to 1), avoid the temptation to posit “while we’re…

Obsessing over details is a sign

When approaching software development with a focus on outcomes and results, instead of efforts and billable minutes, I’ve noticed an early indicator. When conversations digress into the nuances of what will be accomplished, it’s usually a sign of a lack of value. Although the outcome appears lofty, in reality it isn’t. When investments are extremely valuable, people tend not to obsess…

Don’t chase quantity

Many custom software development firms provide boutique services to customers. They’re often smaller firms, and that’s a good thing: a small group of highly talented individuals working to produce valuable results for customers. Unfortunately, many boutique firms aspire to grow their business by scaling personnel. This is like trying to move bigger boulders by hiring more people to push…

Leveraging remote expertise

If you employ remote workers, or you want to extend the opportunity for employees to work remotely, no doubt you have reservations. There are many benefits to employing part-time and full-time remote workers. You have expanded access to expertise and talent that local markets often cannot provide. You can expand your reach to serve customers in local markets of…

Investing in seven figure software, tip #1

Opportunities abound to better serve your customers and improve your organization with investments in custom software. There are also countless ways to burn money in the process on marginally valuable software. This will be the first in a series of tips to help you avoid pouring your money down the drain and instead invest in software with an exponential return…

Broadening storage perspectives

Experimenting with the unknown broadens your perspective of what you’re comfortable with. Learning highlights what you may not be aware of, what you take for granted. A prime example is moving from a dependence on relational, SQL-based storage systems into the land of NoSQL. When you swap out something as fundamental as how information is stored, you’ll naturally run into challenges. And they’re good challenges to face: they’ll reshape how you approach storage in general, for the better.

For example, if you move to a document database like MongoDB, you will be forced to consider how information is partitioned within your system. Consequently, when you return to relational databases, you’ll bring some of this partitioning with you, which can have significant benefits even in a relational model of storage.
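To illustrate the contrast, here’s a minimal Python sketch with hypothetical order data: the relational shape scatters an aggregate across normalized tables, while the document shape makes the partition boundary explicit.

```python
# Relational storage: an order is scattered across normalized tables
# and reassembled with joins at read time.
orders_table = [{"order_id": 1, "customer": "Acme"}]
order_items_table = [
    {"order_id": 1, "sku": "A-100", "qty": 2},
    {"order_id": 1, "sku": "B-200", "qty": 1},
]

# Document storage: the whole aggregate lives in one document,
# so the partition boundary (the order) is explicit in the data model.
order_document = {
    "_id": 1,
    "customer": "Acme",
    "items": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1},
    ],
}

# Reading the aggregate requires no join; everything needed is together.
assert len(order_document["items"]) == 2
```

Deciding what belongs inside one document is exactly the partitioning exercise described above, and the same boundaries pay off when carried back to a relational schema.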

Here are some additional benefits you will experience:

  • Develop software faster
    • Eliminate the ceremonial nature of describing new data structures and additions to existing ones.
    • Reduce friction between application models and storage formats.
  • Increased software longevity
    • Better design, because you no longer have to invest in costly translation layers to mediate between application models and storage formats. This translation has historically led to suboptimal application models, which often lead to issues down the road.
    • Distinct partitioning of your system, which lends itself well to scalability.
    • Partitioning simplifies your application models too.
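The first benefit can be sketched in a few lines of Python with hypothetical documents: a field added later simply appears on new documents, with no migration ceremony, and application code supplies a default for older ones.

```python
# Older documents were written before "status" existed; newer ones include it.
docs = [
    {"_id": 1, "name": "first"},                       # old shape
    {"_id": 2, "name": "second", "status": "active"},  # new shape
]

# No ALTER TABLE ceremony: a default covers documents
# that predate the field.
def status_of(doc):
    return doc.get("status", "unknown")

assert [status_of(d) for d in docs] == ["unknown", "active"]
```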

Recently, I gave a talk about automating development machines, which is a separate topic. But we used a MongoDB development machine as an example and, toward the end, discussed some of the reasons why alternative storage solutions like MongoDB are a valuable tool to add to an organization’s repertoire. Part 2 of the recording focuses on the benefits of leveraging MongoDB.

Applied automation: TeamCity and NDepend

In Approaching Automation, I outlined a series of steps to follow to make worthwhile investments in automation. In the following example, I’ll show how to apply these steps to a software development process.

Inspecting code can provide valuable insight into improving the design of a system. Inspection tools are most valuable when they’re integrated with the environments developers use to create software, where they can provide instant feedback to improve on the fly.

Inspection is also valuable as an analysis tool after the fact. One barrier to analysis is the time it takes to set up and inspect a code base. This process is ripe for automation so time can be spent analyzing results, not gathering them.

But, blindly automating anything is as reckless as avoiding automation altogether.

Often, when discussing inspections, I find individuals wanting to inspect code bases every time a change is made to the system.

I’ll inquire: how often do you perform this analysis currently, and what do you do with the information? Often, I find there’s no methodical approach, and sometimes inspection isn’t even part of an existing process. It’s just something someone said was a good idea.

Whatever the case, by stepping back and discussing how often the information is used and what it’s used for, we can begin to understand the value of automating inspections.

By challenging ourselves to understand why the information is valuable, we can determine the appropriate level of automation.

Most teams are busy enough that they’ll be lucky to look at inspection results once a week. And if they have to generate the results manually, it’s much less likely they’ll even get around to it.

But if inspection reports are automatically available on a weekly basis, teams can invest more time in analyzing the results and, in turn, in acting on them.

NDepend is a tool that inspects a .NET code base and provides actionable metrics for improving it.

TeamCity is a platform for automating development processes, gathering results, and acting upon them.

Let’s walk through the basis for automating inspections with NDepend and TeamCity.


Outline the process

NDepend is used to inspect a code base. This requires checking out the code, compiling it, and then analyzing the compiled assemblies with NDepend.

Then, teams analyze the results for ways to improve.

And over time, they apply this insight to incrementally improve.
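The mechanical steps above can be sketched as a small driver. The tool invocations (git, msbuild, and NDepend’s console runner) and the file names are illustrative assumptions, not a prescribed setup:

```python
import subprocess

def inspection_steps(repo_url, workdir, ndproj):
    """The three mechanical steps: check out, compile, analyze."""
    return [
        ["git", "clone", repo_url, workdir],             # check out the code
        ["msbuild", f"{workdir}/Solution.sln"],          # compile it
        ["NDepend.Console.exe", f"{workdir}/{ndproj}"],  # analyze with NDepend
    ]

def run_inspection(repo_url, workdir, ndproj):
    # Each TeamCity build step would run one of these commands in order.
    for step in inspection_steps(repo_url, workdir, ndproj):
        subprocess.run(step, check=True)
```

In TeamCity, each of these commands maps naturally to a build step, with the NDepend output archived as a build artifact for the team to review.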

Eliminate the unnecessary

After outlining the process, it’s important to eliminate vestigial components. In the case of inspections, this means making a conscientious decision about which inspections are meaningful and which aren’t. Don’t just take everything out of the box; spend some time with NDepend crafting your own custom inspections.

Establish objectives

Next, it’s important to establish objectives. NDepend comes with the concept of rules, and reducing rule violations can improve the quality of a code base. For example, NDepend ships with rules that help detect breaking changes to software interfaces. It also provides rules to detect dead code, which can hamper the longevity of software. Deciding on a set of rules to enforce may serve as a worthwhile objective.

Establish measures

Let’s say we want to reduce dead code in a system. Every system contains some amount of dead code. Some of it can be detected automatically. Measuring the current level of dead code in a system and setting goals to reduce it serves as a progress indicator.
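The measure itself is simple arithmetic; the line counts below are hypothetical:

```python
def dead_code_ratio(dead_loc, total_loc):
    """Share of the code base flagged as dead (potentially unused)."""
    return dead_loc / total_loc

# Hypothetical snapshot: 5,000 of 50,000 lines flagged as dead.
current = dead_code_ratio(5_000, 50_000)  # 0.10
goal = 0.05

assert current > goal  # still work to do against the goal
```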

Establish value

What if ten percent of your system is dead code? What would it be worth to get that down to five percent? What about one percent?

Make a decision

Everything above becomes the basis on which to decide whether or not to automate inspections of dead code, or of any other aspect of a system.

I always recommend a margin of two or three times the potential cost. That way you have room to absorb the unknown.
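As a sketch of that decision rule, with hypothetical dollar figures:

```python
def worth_automating(expected_value, estimated_cost, margin=3.0):
    """Automate only if the expected value covers the estimated cost
    with a two-to-three-times margin to absorb the unknown."""
    return expected_value >= margin * estimated_cost

# Hypothetical numbers: inspections expected to be worth $30k;
# automating them estimated at $8k to build and maintain.
assert worth_automating(30_000, 8_000) is True   # 30k >= 3 x 8k
assert worth_automating(20_000, 8_000) is False  # 20k < 24k
```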

Automate it

There’s no better way to describe automating it than to show you:


Over time, use the information you capture in TeamCity from the output of NDepend to see if efforts prove worthwhile.
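Even a simple trend over the captured results is enough to tell whether things are moving in the right direction; the weekly rule-violation counts here are hypothetical:

```python
# Hypothetical weekly rule-violation counts captured from NDepend runs
# in TeamCity. A downward trend suggests the effort is paying off.
weekly_violations = [412, 398, 371, 350]

def improving(series):
    # True when no week is worse than the one before it.
    return all(later <= earlier for earlier, later in zip(series, series[1:]))

assert improving(weekly_violations)
```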

Not everything is so easily quantifiable. Nonetheless, you can begin to see how a methodical process focused on value can scientifically improve your development process.