On Uncertainty, Prediction, and Planning

Key Takeaways

  • The software industry has a dismal track record when it comes to predicting and planning in the face of uncertainty.
  • There are a significant number of biases that prevent us from learning, including cognitive biases and compensation structures.
  • Statistical approaches to predictions can be successful if we expend the effort to create learning-based models such as Monte Carlo simulations.
  • Highly uncertain environments are best exploited using the iterative learning models inherent to Agile methods.
  • Extremely uncertain, non-deterministic environments are best exploited by the incremental learning model of hypothesis testing (Hypothesis-Driven Development) and learning to embrace the discomfort associated with uncertainty.

The Best Laid Plans ...

"Prediction is very difficult, especially about the future."

This quote is often attributed to physicist Niels Bohr. It is also variously attributed to one or all of the following: Mark Twain, Yogi Berra, Samuel Goldwyn, politician Karl Kristian Steincke, or simply an old Danish proverb. That’s a healthy warning that things are rarely as simple as they seem, and they usually get more complicated the deeper we go into them. If we can’t even determine who actually cautioned us about the difficulty of predictions, then perhaps it’s an indicator that predictions themselves are even trickier.

The software industry operates on predictions at nearly every turn. Yet our track record of success with them is dismal. Depending on which metrics we choose to quote, between 60% and 70% of software projects are delivered over budget, behind schedule, or canceled outright 1, often after large amounts of money have been spent.

What causes this and what can be done to increase the odds of success? Before we get into that, let’s review the reasons for making software predictions.

Businesses need to budget and plan. Capital, the lifeblood of a business, must be allocated sensibly toward efforts that will provide the best return. We need to answer questions such as: How much should we spend? Which projects get approval to proceed? Over what time horizon are we allocating? Is it the two weeks of the next sprint or the twelve months of the next fiscal year?

Traditionally, allocation questions were answered by first creating some kind of estimate regarding long-range project cost and scope, formulating a schedule and plan around that, and then deciding whether the plan was worthy of capital. For many reasons, some of which we discuss below, a plan often started unraveling even before its ink was dry.

With the advent of Agile methods such as Scrum, the planning cycle has been reduced to as little as two weeks. Even so, questions remain about which stories in our backlog are worthwhile. But even shortened release cycles still result in disappointment at least 60% of the time 1. So what is wrong? Why aren’t we improving things? Why can’t we seem to learn how to make things better?

Why We Don’t Learn

Let’s examine what mechanisms drive us to continue with strategies that often fail and leave us stuck in a non-learning cycle. If we can understand the motivations for our actions then it might make it easier to change them and learn something along the way.

We Get Paid Not To

In much of the corporate world, job security and incentives are tied to making accurate predictions, formulating a plan to achieve them, allocating scarce capital, and then delivering on the plan. Additionally, there are often financial incentives to deliver at less than the predicted cost and completion date. As long as pay incentives are tied to predicting and planning the future, predictions and plans will be the bread and butter of business activity, whether they produce the desired outcome or not. In fact, convoluted conclusions will likely be derived to prove their validity, regardless of whether such validity exists.

Unfortunately, the solution to failed predictions is often taken to be the alluringly plausible "we’ll do better next time." It’s natural to wonder: how many attempts does it take to achieve mastery? At some point, we should realize that a strategy isn't working and something else needs to be tried. What slows the learning is that compensation is tied to not understanding it. Upton Sinclair captured the essence of this when he wrote (in gender-specific terms):

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

If we want to improve our outcomes then we need to change our compensation structures into something that rewards us for learning and move away from structures that reward us for not understanding things.

An anecdote: Once, when describing to an executive how uncertain completion dates are in non-deterministic systems, he turned to me in exasperation and, holding his thumb and forefinger a fraction of an inch apart, said, "You’re telling me that you don’t know when you will finish until you are this close to being done? That’s nonsense." It's hard to say who was more disappointed in the conversation. The executive, because to him I seemed to lack a basic understanding of how business works, or me, because the executive seemed to lack a basic understanding of the mathematics of his business. In reality, we were both right, at least from our respective viewpoints. The real problem lay in the architecture of our compensation system that forced us into incompatible beliefs.

The Allure Of Simplicity

No matter how we practice it, software engineering is a difficult business undertaking. It was always thus. Fred Brooks, writing decades ago, concluded that there was "No Silver Bullet" 2 that would eliminate the inherent complexity and difficulty of developing software. And yet here we are, so many years later, still seeking a solution to the complexity, something that will make it simple. This desire for simplicity drives us to create oversimplified plans that discount the likelihood of the unknowns that will derail our project when they suddenly appear, often late in the plan after considerable sums have been spent.

When it comes to predictions, it is alluring to believe that there is a simple, one-size-fits-all solution that will work for everyone, and that all that is required is rigid adherence to its practices. Yet history shows us that a never-ending parade of simple solutions comes and goes, falling into and out of fashion with regularity. What this suggests is that the software industry has complex needs and there is no simple, "Silver Bullet" solution that addresses them. From that wondrous wit, H.L. Mencken, we have this admonition to warn us about the allure of simplicity:

"... there is always a well-known solution to every human problem — neat, plausible, and wrong."

The Sunk Cost Fallacy

Once we have invested time and money to create a prediction, the sunk cost fallacy rears its head. The sunk cost fallacy boils down to this: money already spent cannot be recovered, yet we are often unable to accept that and spend additional money seeking a return on our initial outlay. We are prone to this because our job security usually requires us to prove that we are making wise financial decisions that turn out to be profitable. Worse, the more we spend the more we feel the need to justify our investment, putting us on a spiraling path of ever greater cost. All of this means that we will too often spend money to defend a failed prediction long after it would be better abandoned and the money reallocated to a more sensible idea.

There is an instructive example in the natural world, which has no sunk costs. If something doesn't work, if it's maladaptive to its environment, it is dealt a swift and pitiless blow that ends its reign and a new idea quickly replaces it. It’s an example worth remembering next time we find ourselves believing our prediction will come true if we just invest a bit more into it.

The Dogma Trap

Any successful business has, deliberately or accidentally, discovered a set of practices that allow it to exploit a niche in its environment. These practices become codified into a system of rules and organizational hierarchies that are intended to preserve and perpetuate the success of the business. The longer the success, the easier it is for profitable practices to become encoded as dogma. If any of these successful practices involve predictions, then belief in the efficacy of predictions may become dogma as well.

Of course, the cardinal problem with dogma is that, by definition, it is unchanging, thereby blinding us to a changing environment, which has no such definition. And when the environment changes but the dogma doesn't then it’s a short step to extinction. Avoiding this requires us to reject the formulation of dogma.

But rejecting dogma is often an unwelcome practice in an organization. Those who question it are often labeled as someone who "isn't a team player" or someone who needs to "get with the program." Sidelining or removing such people is the typical response. After all, dogma exists in a business because it codifies a strategy that led to success and protecting that success is a principal mission of the organization. When it comes to predictions, however, a reasoned approach would suggest that thoughtfulness, not dogma, should guide decision making.

The Cruelty of Randomness

Prediction typically has two troubling beliefs inherent to it. One, that the future will proceed much like the past and, two, that all past events were expected. In reality, at the moment they were occurring, events that reshaped the future were often entirely unexpected. By forecasting the future we are often assuming that there will be no unknowable and, therefore, unexpected future events. The cognitive trap is that new endeavors seem to be similar to those in the past, making us believe that we have advance knowledge of events that would otherwise surprise us. The problem is that each new endeavor unfolds according to its own internal, and usually unknowable, set of unique characteristics.

If we know what we don't know then we can simply apply an appropriate fudge factor to account for it and proceed on our way, satisfied that our plan accounts for unknowns. Unfortunately, we are too often unaware of our own ignorance, much less how to plan for it. Additionally, we are incentivized to find reasons that it "failed last time because of X but we have accounted for that this time." While we may have accounted for X in the latest prediction it's never X that surprises us the next time. It's Y or Z or any other unknown. While there are a finite number of alphabetic characters such that we can eventually account for all of them, there is no such upper limit in the possible range of real-world unknowns.

But what if we get lucky and are rewarded with a random success for one of our predictions? If we don’t realize that it’s random, it will inevitably reduce our inclination to try a new strategy because of our natural belief that success was due to our skill instead of chance. That makes it more likely that random successes will be elevated to the status of perfect execution and repeated failures will be rationalized as poor execution. It’s the very randomness of the reward from the lucky prediction that drives us to try ever harder to reproduce it. Studies show that the humble pigeon will quickly learn the pattern required to peck a lever to release food 3. If no food ever arrives, it will quickly give up. But if the reward is random, if there is no discernible pattern to when pecking the lever releases food, then the pigeons are soon driven into a superstitious frenzy of pecking, trying to determine the pattern. This behavior doesn’t seem terribly dissimilar from repeated attempts to make our predictions come true.

The Charismatic

Add in another human bias: we like confident and charismatic people. Confident, certain, and assertive people are promoted quickly and rise to positions where they influence a company. From there, they orchestrate activities to show that they can predict the future, formulate a plan, and execute on it. When faced with an unknown, they have a certain answer and a plan of action at the ready, even if the plan represents a mismatch between their confidence and their competence. So we marshal resources under their leadership and move ahead full of certitude. Contrast that with those who are uncertain, who, when asked about an unknown, shrug their shoulders and reply, "I don’t know. Let’s do an experiment and see if we can figure it out." We turn to the charismatic individuals instead of the cautious ones.

An overconfidence bias also comes into play. Charismatic and confident people are likely to be imbued with a sense of superior predictive ability over their compatriots. Rationally, we might look at the 70% failure rate of predictions and decide that we are better off avoiding them because we stand only a 30% chance of success. Highly confident people are instead likely to take a different view, discount years of statistics from many projects, and believe that their efforts will somehow succeed where others failed.

An anecdote: Many years ago, at the tail end of the dot-com bubble, I worked in a startup as a software developer. We were led by a young, energetic, and charismatic CEO who exuded confidence and mastery. At the time, we leased office space in a building that had been shedding software tenants one after the other as each one failed like so many dominoes. There were only a few of us left and the nearly-empty building and parking lot had the eerie feel of a post-apocalyptic setting. It was in this environment that our CEO called an all-hands meeting to reassure the anxious staff that our future was promising. I recall him making an eloquent and impassioned case that filled the room with the belief that we might make it.

In the end, of course, we were no different than any other of the innumerable dot-coms that failed in the wake of the bubble’s bursting. Indeed, our denouement arrived shortly after our CEO’s rousing speech when we failed to receive another round of financing and joined the exodus of the building’s tenants.

Blinkered by confidence and faith in a charismatic leader, many in the company were unable to see what was obvious: we could not survive if we were not profitable. This was clear in hindsight but beforehand it seemed reasonable to believe that we were different and would succeed where so many others recently had failed. It was an instructive lesson in maintaining a reserve of skepticism around charisma.

Being Mistaken, Usually

"Well, we won't make that mistake again. We even fired some people to make sure it never recurs." That's probably true. We won't make the same mistake because we are prepared for it on the next attempt. The problem is that the first mistake was unknowable before it occurred and the same thing will happen again but this time with a different set of mistakes. The set of new mistakes, to which we will fall victim, is an inexhaustible supply because they are always unknowable in advance. Winston Churchill perfectly described this scenario while addressing Parliament at the dawn of World War II. Members were concerned about repeating the mistakes of World War I and wanted assurance that they would be avoided. Churchill replied:

"I am sure that the mistakes of that time will not be repeated; we should probably make another set of mistakes."

We are often mistaken and simply don't yet know it. And being wrong and not knowing it feels just like being right 4. Actually, being right and being wrong are indistinguishable until the moment we are proven wrong. That should sound a note of caution about the inevitability of mistakes.

There is an expression often heard in management meetings and boardrooms: "failure is not an option." While this is usually intended to discourage half-hearted efforts, it excludes learning and discovery because failure is a necessary ingredient in learning. It also suggests that to admit a mistake means to admit incompetence and possibly lose one’s job. Once this belief system is in place and cemented by financial incentives, it can lead to the idea that failure indicates a historically successful practice is going through a temporary rough patch and we simply need to redouble our efforts so it will once again be successful, even if the real lesson is that we need to change course. Under these conditions, admitting an error and changing course is a difficult thing to do because we are irreversibly invested in our belief system. History is filled with examples of businesses that failed to learn and continued to feed ever greater amounts of precious capital into failed strategies even as those strategies drove them right off a cliff. A moment’s reflection will disabuse us of the notion that we are somehow immune to such folly.

Strategies That Use Learning

So that’s a rundown of some of the reasons why we are often unable to learn and continue with strategies that fail us. But what if we can avoid these pitfalls? Are there strategies that focus on learning? As it happens, there are.

A Deterministic Approach

Historically, software projects used a Waterfall model of development. Requirements were gathered, estimates were made from the requirements, and schedules were created from the estimates. This approach rests on a deterministic view of software projects: with enough upfront data and analysis, we can make accurate predictions about cost and delivery dates. These projects often began failing early, usually due to inadequate requirements and inaccurate estimates. In the latter case, estimates were often faulty because they were not based on statistically rigorous methods but instead gathered from methods that were little more than guessing.

It turns out, though, that a deterministic view can succeed by using statistical models calibrated with a company’s historical software project data. One common statistical method is a Monte Carlo analysis 5 6. The underlying mathematics are rather complicated, but it boils down to this: we gather a set of historical data that typically includes parameters like effort and duration. We then run scenarios thousands of times, randomly varying the input parameters to produce a probability distribution that a given amount of work will be completed in a given amount of time. For example, we might derive a distribution that indicates a certain amount of staff effort has a 25% probability of being completed within a month, a 50% probability within two months, and a 90% probability within five months. The key point is that we use historical data, unique to our organization, to calibrate our model and produce probability ranges for outcomes instead of single-point values. Notice how much more rigorous this approach is than someone’s unsubstantiated claim that "I can do that in a week."
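
To make the mechanics concrete, here is a minimal sketch of such a simulation in Python with numpy. It is only an illustration: the historical effort figures, the task count, and the reported percentiles are assumptions, not data from any real project.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical calibration data: person-days actually spent on comparable past tasks.
    historical_task_days = np.array([3, 5, 8, 2, 13, 6, 4, 9, 7, 11])

    remaining_tasks = 20      # assumed size of the new work, in comparable tasks
    simulations = 10_000

    # Each trial assumes the new tasks behave like a random resample of the old ones.
    totals = np.array([
        rng.choice(historical_task_days, size=remaining_tasks, replace=True).sum()
        for _ in range(simulations)
    ])

    for p in (25, 50, 90):
        print(f"{p}% chance of finishing within {np.percentile(totals, p):.0f} person-days")

The output is the person-day budget we would need in order to be 25%, 50%, or 90% confident of finishing: a probability range rather than a single-point value, as described above.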

With this approach, we are also choosing to learn. We gather data over time and use it iteratively to teach us about our organization’s capabilities and the cost and time required to perform work. Of course, our model is only as good as the data we use to calibrate it. Vague requirements specifications, poor record-keeping for completed work, and other such shortcomings will yield disappointing results.

A Pseudo-Deterministic Approach

A fully deterministic approach as described above works well if requirements can be specified in advance and are not subject to frequent revision, but this type of project is rarely seen. What if we are working on more typical projects with unclear goals, uncertain specifications, and unknown market needs? Deterministic predictions under those conditions are unlikely to yield satisfactory results.

Enter Agile methods.  

Agile methods take a pseudo-deterministic approach to software delivery. Born out of the frustration with repeated failures in traditional Waterfall projects, Agile methods abandon the belief in long-term predictions and planning and instead focus on short-term delivery of working software and adapting to change as it occurs. By using Agile methods, we adopt the philosophy that requirements cannot be determined far in advance but must instead emerge over time.

One of the more popular Agile methods is Scrum 7. Its two-week sprint minimizes project tracking error and drift by shortening timeframes for releases. We reprioritize with every sprint and in so doing effectively reset our project clock, giving us the flexibility to adapt to change.

We can still use Monte Carlo-type methods to predict the volume of stories we can produce 6, but we surrender our belief in one aspect of determinism: that we can generate long-term plans that determine project schedules. Instead, we once again focus on learning by iteratively discovering what we need to deliver.
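
The same resampling idea can be applied to sprint throughput. The sketch below, again in Python with numpy and with invented numbers, estimates how many sprints a 60-story backlog might take given a hypothetical history of stories completed per sprint.

    import numpy as np

    rng = np.random.default_rng(7)

    # Assumed history: stories completed in each of the last eight sprints.
    throughput_per_sprint = np.array([4, 6, 3, 7, 5, 5, 2, 6])
    backlog = 60
    simulations = 10_000

    def sprints_to_finish(backlog_size):
        # Draw a historical throughput value for each simulated sprint until the backlog is empty.
        done, sprints = 0, 0
        while done < backlog_size:
            done += rng.choice(throughput_per_sprint)
            sprints += 1
        return sprints

    results = np.array([sprints_to_finish(backlog) for _ in range(simulations)])
    for p in (50, 85, 95):
        print(f"{p}th percentile: {np.percentile(results, p):.0f} sprints")

Note that the output is a range of delivery forecasts for the current backlog, not a long-term schedule; the plan is still reprioritized every sprint.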

But have we actually solved the problem of predictions and plans or have we just minimized the impact of being wrong about them? It seems we might still carry with us the same problem but at a smaller scale.

An Evolutionary Approach

We have progressed from the long-term release cycles of traditional methods to the much shorter cycles of Agile methods. We also abandoned the belief in long-term, fixed requirements and chose instead to focus on smaller stories. Both of these changes help us iteratively discover requirements and produce better results. This leads to an obvious question: if a little discovery is a good thing, is more discovery an even better thing?

Enter hypothesis testing.

Hypothesis testing (also called Hypothesis-Driven Development) takes its cues from the greatest experimental laboratory ever devised: evolution. Evolution makes no pretense at being able to predict what the future holds. It simply responds to change by constant experimentation. An experiment that produces a better outcome is rewarded with longevity. A worse outcome is quickly subjected to an ignominious end. If we are willing to surrender our predictive beliefs then evolution has a lot to teach us.

With hypothesis testing, we take a slightly more deliberate approach than the pure randomness of evolution. We proceed as scientists do when faced with the unknown: formulate a hypothesis and subject it to measurement and failure in the real world. If it is falsifiable and can't be proven false, at least not yet, then it has merit.

There are many ways to implement hypothesis testing 8 9 10 but here is a simple example. We formulate a hypothesis such as "We believe that our customers want a left-handed widget feature on our data portal. We declare our hypothesis to be true if traffic to our portal increases by 5% in one week." If our hypothesis is correct, then we should see at least a 5% bump in traffic within a week. If not, we were wrong; we reject our hypothesis and possibly remove the feature. We then reformulate our hypothesis or move on to another one. It’s beyond the scope of this article to provide a detailed how-to of hypothesis testing, but the references provide links to articles with instructive examples and best practices.
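
As a minimal sketch of how that evaluation might be scripted, the Python fragment below compares one week of traffic before and after the feature ships against the 5% threshold. The traffic counts are invented for illustration; a real evaluation would also need to account for normal week-to-week variation before declaring victory.

    # Hypothetical weekly visit counts; in practice these would come from analytics data.
    baseline_weekly_visits = 48_200   # the week before the widget shipped
    post_release_visits = 49_100      # the week after the widget shipped
    required_lift = 0.05              # the 5% threshold declared in the hypothesis

    observed_lift = (post_release_visits - baseline_weekly_visits) / baseline_weekly_visits

    if observed_lift >= required_lift:
        print(f"Hypothesis supported: traffic up {observed_lift:.1%}; keep the feature.")
    else:
        print(f"Hypothesis rejected: traffic up only {observed_lift:.1%}; rework or remove the feature.")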

With hypothesis testing, we surrender our predictive beliefs that envision how the future will unfold. Instead, we build from the bottom up, testing each small piece as we go, minimizing the risk to capital and cutting losses early. In effect, we make ourselves intellectually humble and admit we have little knowledge of the future. We accept that we don’t know what we don’t know and are unlikely to ever really know much in advance. We can only discover it through experimentation.

Most importantly, hypothesis testing minimizes the biases described above that slow our learning. With it, we actually get paid to learn and use objective data to validate or falsify our ideas. We minimize sunk costs thereby making it less likely to cling to a failed idea. We use randomness to help us learn instead of fooling us into seeking a reward where none is to be found. Charismatic personalities have less sway when objective data is the measuring tool. And finally, being wrong is accepted as the normal state and part of the experiment. In effect, we are using an evidence-based decision system over one based on omnipotence and superstition.

We can further inoculate ourselves against bias by placing strict, consistent limits on the amount of capital allocated to hypotheses and requiring short timeframes for proving them true. Otherwise, we are right back to endeavors that need "just a little more time" or "just a little more money." Evolution allows no such exemptions. Ultimately, we need to decide if we want to be "right" or make money. We sometimes seek the former while claiming to seek the latter.

Admittedly, this approach doesn’t yield a particularly motivating rallying cry like the predictive approach’s "Full speed ahead!" By contrast, "Let’s run an experiment" is hardly as energizing. But it has the potential to be more profitable which, perhaps, carries its own motivation.

A Common and Misguided Strategy

"The fault, dear Brutus, is not in our stars,
But in ourselves…"

Julius Caesar (Act 1, Scene 2)

Perhaps we have a biased sample set in our industry and hear only the stories of predictive planning nightmares and not the successes, making us believe that the nightmare scenario is the common one. But given so many stories from so many people over so many years, it seems that the scenario is probably representative for many work environments. It contains the worst possible choices and almost always leads to failed outcomes.

Here’s how it occurs: We have a generic, somewhat vague goal like "increase revenue from our website by ten percent next year." Or maybe it’s more specific like "add a left-handed widget to our data portal because customers will buy it."  Whatever it is, it typically isn’t well-specified and the assumptions underlying the premise are just that: assumptions. And hidden details will surely appear as we begin work. We have done similar features in the past but, crucially, we have never done exactly the same thing before. But that should be "good enough" for the basis of our prediction. We then have one, perhaps two, developers provide a prediction that is little more than an off-the-cuff guess. And then we are off to the races. It often goes like this in predictive environments:

Manager: "How long will it take to write the Widget feature?"
Programmer: "I don’t know, maybe a month."
Manager: "What? That’s ridiculous! There’s no way it will take that long!"
Programmer: "Well, OK, I can probably do it in a week."
Manager: "That’s more like it. I’ll put it in the schedule. Do it next week."

In an Agile environment it might look like this:

Manager: "How many story points are you estimating for the Widget story?"
Programmer: "I don’t know, maybe it's a thirteen."
Manager: "What? That’s ridiculous! There’s no way it’s that big!"
Programmer: "Well, OK, it’s probably a three."
Manager: "That’s more like it. I’ll update the backlog. Do it in the next sprint."

This is little more than random guessing under extreme duress and creates the worst possible conditions: vague specifications, no rigorous collection of historical data upon which to draw for a careful statistical analysis, off-the-cuff predictions from one or two programmers, and turning the guess into a commitment to deliver according to a schedule. To this mix, add incentives for managers to "hold developers accountable" for failing to deliver what they never realized was a promise instead of a guess, and the understandable fear of punishment for being wrong about a guess once it becomes a commitment. Is it any wonder that failure is an inevitable outcome? The only way such work is delivered is by cutting features, heroic overtime, and sacrificing quality. And yet, the lesson is rarely "this isn’t working so we need to try something else." Instead, it’s often "we need to get better at predictions."

We get what we pay for. If we are required to use predictions to derive plans, then we must invest the time and money to do it right. If we use Agile methods, then the delivery of working software must take precedence over predictions. To do otherwise is to wish for something for nothing. As the Second Law of Thermodynamics makes clear, "There’s no free lunch."

Know Thine Environment

It is imperative to know the environment in which our businesses are operating. If we work on large, contract-driven projects where timelines are extended and the specifications are well-defined in advance, then quantitative prediction is usually a required skill to survive. On the other hand, if we operate in the more common environment where specifications are vague or non-existent, the market needs are unclear or unknowable, timelines are short and urgent, and competition for market share is fierce, then we should consider a hypothesis-driven approach.

A key problem is that we often misunderstand the mathematical underpinnings of our environment. We often believe that we operate in a deterministic world where more effort will reward us with a successful result. In fact, we are often operating in a non-deterministic, highly empirical world with an unstable foundation that changes with time. Statisticians call this non-stationarity: the mathematical foundation is not stable, so there is no fixed base upon which to anchor our assumptions. Under these conditions, fixed, deterministic methods will not succeed outside of sheer, random luck. Given all of the biases listed above, it’s nearly irresistible to believe that we can predict and plan even when we can’t.
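
A toy simulation, with invented numbers, illustrates why calibration fails on a non-stationary base: here the underlying delivery rate drifts downward partway through, so a forecast calibrated on the early history is systematically optimistic.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed throughput history: the environment shifts halfway through the project.
    early = rng.poisson(lam=6, size=10)   # stories per sprint while the domain is familiar
    late = rng.poisson(lam=3, size=10)    # stories per sprint after the environment changes

    calibrated_rate = early.mean()        # what our historical model "knows"
    actual_rate = late.mean()             # what the project actually delivers

    backlog = 60
    print(f"Forecast: {backlog / calibrated_rate:.1f} sprints, actual: {backlog / actual_rate:.1f} sprints")

No amount of extra effort spent on the early estimate closes that gap; only fresh data about the changed environment does.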

Unfortunately, if we are not operating under stable conditions then greater effort put into a prediction has a higher chance of increasing our confidence in its accuracy than it does in improving the accuracy itself. And so we become more certain of our wisdom than we do of our doubt. We are then prone to commit ever more capital to prove that we are right instead of cautiously guarding our resources and applying them when the data tell us we are on the right path.

Knowing the environment in which we operate means that pay incentives are aligned with methods that produce successful outcomes for that environment. We are incentivized to learn, in whatever form it may take for our environment.

Final Thoughts

One of the key difficulties with predictions lies in our natural human reluctance to accept uncertainty. A state of uncertainty and doubt is an extremely uncomfortable place to be. So we are much more inclined to embrace those who are full of confidence than those who shrug and prefer to run an experiment to verify a hypothesis.

The external reality is that the business environment is often governed by uncertainty, unknowable unknowns, and darkness that we must navigate with only the faintest of lights. Our challenge is to accept the disquieting nature of that environment instead of clinging to the comfort of a belief system that provides us with a reassuring but misleading picture.

The road to knowledge is paved with the embrace of uncertainty. If we can learn to live with its discomfort then we open the path to learning. To paraphrase a famous saying: The price of learning is eternal unease.

Footnotes

1 Standish Group 2015 Chaos Report.

2 "No Silver Bullet — Essence and Accident in Software Engineering", by Fred Brooks.
Proceedings of the IFIP Tenth World Computing Conference: 1069–1076.

3 "'Superstition' in the Pigeon", B.F. Skinner, Journal of Experimental Psychology, 38, 168-172.

4 "On Being Wrong" by Kathryn Schulz.

5 "A Gentle Introduction to Monte Carlo Simulation for Project Managers" by Kailash Awati.

6 "Web-Based Monte Carlo Simulation for Agile Estimation" by Thomas Betts.

7 "The Scrum Primer" by Pete Deemer and Gabrielle Benefield.

8 "How to Implement Hypothesis-Driven Development" by Barry O’Reilly.

9 "Why hypothesis-driven development is key to DevOps" by Brent Aaron Reed and Willy-Peter Schaub.

10 "Hypothesis-driven development" by Adrian Cho.

About the Author

J. Meadows is a technologist with decades of experience in software development, management, and numerical analysis. The author is an occasional contributor to InfoQ.
