Why We Stopped Estimating Ongoing Development
Estimates are useful. They help business owners predict and control their budget, scope, and timelines. Right?
Well, not always.
Chances are, you’ve been on a project that was more expensive, took more time, or delivered less value than you initially thought. Study after study shows that software projects are notorious for cost and schedule overruns. It’s not fun being on either side when a project goes awry: clients become understandably frustrated, and development teams burn out. So when we looked at the landscape and asked how we could deliver a customer experience congruent with our core values, taking a hard look at our estimating process was critical to our success.
Scott and I had worked for consulting firms in the past, so when we first launched, we modeled our business processes on what we’d seen. We spent a lot of time and money producing estimates that were, in the best-case scenario, an educated guess at the time, effort, and complexity needed to complete a project.
Over the years, we’ve learned that there are some projects where estimates work great and others where they’re a disaster. So when should you estimate, and when should you do something different?
Complicated vs. Complex Projects
The easiest way to determine if your project is a good candidate for estimating is to think about whether you’re working in a complicated or complex ecosystem.
In his book, Team of Teams: New Rules of Engagement for a Complex World, General Stanley McChrystal describes how complicated projects “have many parts, but those parts are joined, one to the next, in relatively simple ways: one cog turns, causing the next one to turn as well, and so on. The workings of a complicated device like an internal combustion engine might be confusing, but they ultimately can be broken down into a series of neat and tidy deterministic relationships; by the end, you will be able to predict with relative certainty what will happen when one part of the device is activated or altered.”
For example, when I worked as a copywriter, I could accurately predict how long it would take me to put together the content for a website. There were a number of different steps, but they were sequential and linear. I started by gathering inputs, such as research and interviews. Then I wrote a draft, gathered internal feedback, revised the draft, showed it to the client, incorporated their feedback, showed them the revisions, proofread, delivered a final draft, and got the client’s approval. It was a sequence: X inputs yielded Y outputs. As a result, I could provide confident estimates and fixed pricing to my clients.
But let’s contrast that with a complex ecosystem, one that has a large number of interdependencies. How does that change our ability to estimate? In short, it makes estimating impossible. Complex systems, such as the weather, stock markets, and wildlife populations, are “fickle and volatile,” according to McChrystal, because they are subject to the butterfly effect, an idea from chaos theory that shows how small changes in a complex system can have massive impacts that are impossible to predict. Therefore, the more interdependencies in a project, the less likely it is that you’ll produce an accurate estimate.
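To make the butterfly effect concrete, here’s a minimal sketch in Python (ours, not McChrystal’s) using the logistic map, a textbook chaotic system. Two inputs that differ by one part in ten billion produce wildly different trajectories within a few dozen iterations, which is exactly why long-range prediction in a complex system fails.

```python
# A minimal sketch of the butterfly effect using the logistic map,
# a textbook chaotic system: x -> r * x * (1 - x), chaotic when r = 4.

def logistic_map(x, r=4.0):
    """One step of the logistic map; chaotic when r = 4."""
    return r * x * (1.0 - x)

a, b = 0.2, 0.2 + 1e-10  # nearly identical initial conditions
for step in range(1, 51):
    a, b = logistic_map(a), logistic_map(b)
    if step % 10 == 0:
        print(f"step {step}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.1e}")
```

By around step 40, the gap between the two trajectories is as large as the values themselves, even though the starting points were indistinguishable for all practical purposes.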
What if all of your work is complex?
Since we work exclusively on existing projects, our clients have a number of interdependencies in their codebases. Technologies such as APIs, cloud computing, and reusable libraries are amplifying how interconnected the development landscape is, and it’s incredibly rare to find a software project that doesn’t interact with some sort of database or third-party system. The more interconnected codebases become, the less useful estimates are in our planning process.
So what is useful in the planning process if estimates don’t help us prepare for these scenarios? An estimate’s role in the planning process is ultimately to aid with answering a question, such as: “Can we deploy this new feature by July 1st of this year?” or “Is it possible to complete this project within our budget of $5,000?”
Instead of spending lots of time spinning our wheels putting together an estimate that ultimately won’t work, we’ve been experimenting with something different: confidence values. We give a forecast for how likely something is to happen, sort of like how a meteorologist forecasts whether or not it will rain on Tuesday (weather is another complex system). Here’s how our confidence value conversations go:
Step 1: Start With the Right Question
When we started this process, we were inspired by a quote attributed to Albert Einstein: “If I had an hour to solve a problem I’d spend 55 minutes thinking about the problem and 5 minutes thinking about solutions.” When we thought about why estimates weren’t working, we found that most of the time they weren’t addressing the real question our clients had. So we started digging deeper, making open-ended questions more and more specific until we ultimately found their primary concerns. Here are some examples of question frameworks we see frequently:
Questions about Budget
- How much will this cost me? (Too open-ended)
- Can this be completed within my budget? (Now we’re getting somewhere!)
Questions about Time
- How long will this take you? (Too open-ended)
- Can you deliver this set of features for our demo on March 1st? (Much better!)
Step 2: Instead of Saying Yes, Give a Confidence Value
Once we have the right question, we start talking about how confident we are that we can say yes. How do we determine this? Most often, it’s expert intuition. When you’ve been working with legacy codebases as long as we have, you develop a sense for code smells, which are spotted by human intuition more often than by precise analysis. But the point is to kick off a conversation where we’re actually helping our clients achieve their goals. Just saying yes because that’s what someone wants to hear is a disservice. Instead, we deliver value by being realistic and honest. This helps our clients make better strategic decisions and is a much better way to work together.
Step 3: Revise the Scope to Increase Confidence
The best part of this strategy is that it becomes an ongoing conversation instead of a one-time, all-hands, stop-everything event. If our honest confidence is 50%, we can start talking about options. Sometimes this means using a different solution than was originally proposed, streamlining functionality, or taking some time to prioritize features. So a conversation might go like this:
Client: We have 14 items we’d like to incorporate at our next investor demo. How likely is that to happen?
Corgibytes: Probably 50%. Item number six would be really time-consuming the way it’s proposed.
Client: Oh, really? Actually, that’s not a very important feature at all. What if we deleted that one?
Corgibytes: More like 80%. Let me dive in for an hour or so and I’ll be able to confirm that. Sound good?
Client: Awesome!
Something can always go wrong
So the goal is to get to 100%, right? Not really. We don’t give confidence values above 90%, because we always keep a contingency for “unknown unknowns.” We’ve learned over the years that the unexpected can always happen.
For example, a client may ask: “Can we upgrade this dependency to mitigate a security vulnerability for less than $300?” Since the work involved is likely just asking a package manager to perform the upgrade, how hard could it be? Yet this is exactly the kind of thing we’ve seen go wrong many times, for a myriad of reasons. In complex systems, situations you didn’t anticipate come up frequently. The goal isn’t to predict them; it’s to be realistic, communicate regularly, and devise a solution that fits each client’s specific situation. For some, that means continuing until the work is done. For others, it means working until a budget ceiling is hit and then stopping. And still others may ask us to think of alternative solutions that would get the project to the same end state. Being flexible and responsive in the face of complexity helps us deliver value.
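To see how a “simple” upgrade can surface an unknown unknown, here’s a hedged sketch in Python; the package names are invented, but the packaging library is real. The patched release satisfies our own version pin, while a transitive dependency’s pin rejects it, turning a one-line bump into a much bigger job.

```python
# A hypothetical sketch: why "just upgrade the package" can blow the budget.
# Package names are invented; the `packaging` library is real
# (pip install packaging).
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Version constraints on "webframework" already in the hypothetical codebase.
constraints = {
    "our-app":       SpecifierSet(">=2.0,<3.0"),  # our own pin
    "legacy-plugin": SpecifierSet("<2.5"),        # a transitive dependency's pin
}

candidate = Version("2.7")  # the patched release that fixes the vulnerability

for owner, spec in constraints.items():
    print(f"{owner} allows webframework {candidate}? {candidate in spec}")

# Output:
#   our-app allows webframework 2.7? True
#   legacy-plugin allows webframework 2.7? False
# The upgrade now also means replacing or patching legacy-plugin:
# an "unknown unknown" surfacing after the $300 budget was set.
```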
So how about you? What are your thoughts about estimates? Let’s keep the conversation going in the comments.
Contributing Authors: M. Scott Ford, Nickie McCabe, Jeff Blum, Stephanie Mack, and Jocelyne Morin-Nurse