Becoming Agile: Evidence Based Management

March 8, 2019

"My advice would always be to ignore the perceived wisdom and look for the most reliable evidence on the ground" - D.T. Puttnam

Hearing a senior executive announce "We're committed to becoming agile!" is not the bombshell moment it used to be. It no longer indicates a personal revelation or board-room epiphany. In fact, if you were to read some of the interviews with managers in business magazines, or the guest articles and puff-pieces on agility in their companies, you'd think that their hearts and minds had been won decades ago. You'd have no reason to doubt that executives were anything other than firmly on-side with Scrum Masters, coaches, and change agents. Agile transformation, you would surmise, is a done deal with the higher-ups.

Now according to the 2018 State of Agile report, the top reason survey respondents gave for undergoing agile transformation was to accelerate a production calendar. You can see why people would think that this is important. The imperative everyone faces is to complete projects "faster and cheaper". Time, after all, is money. Hence there's no great mystery why executives would appear to be pulling in the direction of agile change.


Moreover, there's no denying that senior managers are generally right about the need for increased agility. If their organizations fail to improve, then they could well be disrupted out of business. That much is becoming increasingly clear. Yet it's also the sticking point. You see, agile transformation isn't about "accelerating a production calendar", as is often assumed. It's really about becoming a learning organization...one that's innovative enough to survive.

People, I suppose, can be right about certain things for the wrong reasons. That's what management support for "going agile" often boils down to. I remember the gasp of incredulity when I first explained to a senior executive that the agile transformation he talked about wasn't geared towards completing projects "faster and cheaper", as he claimed. "In truth," I said, "agile transformation is about learning to build the right thing at the right time". The mouth gaping back at me was momentarily staggered into silence. The face that framed it was a picture. It is still in the gallery of my mind's eye: "Study of a suit in shock". An unforeseen vista had rent the air and stretched out before him. He now saw a different perspective: value might not be what it appeared to be. These are the new bombshell moments which add pep to an agile coach's day.

It's important to give folk an anchor in a major paradigm shift, such as the adoption of agile practice. They need something they can hold on to as a point of reference, a true north. Here's something to grab: agile transformation is not done for its own sake. Value, however it is measured, ought to be improved as a result of change. If value is measured well, then by means of frequent release, inspection, and adaptation, it can be maximized. If it is measured poorly, then an organization may just become better at doing the wrong thing. Eventually, more switched-on disruptors will move in for the kill.

Some companies misalign their success criteria. For example, they may gauge success in terms of a certain capability delivered by a certain date for a certain cost. Their project controls and measurements are geared towards an objective where evidence of success, or failure, will arrive late. These companies would be far better served by an ability to respond to change in an uncertain world.

Generally speaking, the most common dysfunction is to try to measure value in terms of outputs, rather than outcomes. A classic example, which is often encountered in agile transformation, is known as "point productivity". If a technical team estimates each piece of work before doing it, and gives it a notional size in points, then team members will roughly know how much they're taking on when planning to meet a goal. They are within their rights to use this approach. However, the numbers are not then too hard for managers to get hold of, and it is tempting for the higher-ups to conflate the estimated size of the work with its value. On the basis of this extraordinarily circumstantial evidence, the more "points" a team supposedly "delivers", the more productive it is held to be. Why fake this equivalence? Because the genuine value of product increments to stakeholders, if any are indeed delivered at all, can be a much harder thing to discern. In fact there could be little appreciation of a product in the first place. There might simply be a project with a mandate, a bolus of requirements, and a fixed amount of money in the pot.
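The distinction can be sketched in a few lines of code. This is a hypothetical illustration, not a real data set: the sprint figures and the "feature adoption" outcome measure are invented, and a real organization would choose its own outcome metric.

```python
# Hypothetical sketch: "points delivered" is an output, not an outcome.
# All numbers and field names here are invented for illustration.

sprints = [
    {"points_delivered": 30, "feature_adoption_pct": 12},
    {"points_delivered": 45, "feature_adoption_pct": 11},
    {"points_delivered": 60, "feature_adoption_pct": 9},
]

# An output metric: the total estimated size of work completed.
total_points = sum(s["points_delivered"] for s in sprints)

# An outcome metric: did anyone actually use what was built?
adoption_trend = (
    sprints[-1]["feature_adoption_pct"] - sprints[0]["feature_adoption_pct"]
)

print(f"Points 'delivered': {total_points}")   # rising output...
print(f"Adoption trend: {adoption_trend:+d}%")  # ...falling outcome
```

A team like this looks ever more "productive" by the output measure, while the outcome that stakeholders actually care about is quietly getting worse.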

The irony of the situation is palpable. We often read about how merciless companies can be in the quest for profit, and of how keen they are to maximize shareholder value. Yet when it actually gets down to counting the beans in a timely way, and figuring out what sort of investment to make in the next Sprint, based on hard empirical evidence, they can't set themselves to the task. They can't do it. They fiddle the books with wooden dollars instead, and declare that the balance of story points being "delivered" perhaps ought to rise. Non-profit organizations, such as charities, are not necessarily any cleaner. The performance indicators used to gauge impact on the social good, for example, can be tenuous and trailing. Rather than dealing with this challenge head-on, and identifying the outcomes which they can empirically measure, non-profits might also resort to assessing outputs instead. They too can fall prey to the siren call of measuring "points delivered", for example. They too might obscure true stakeholder value with circumstantial data that, when taken out of context, becomes fake. The comparative ease with which data like this can be obtained, and used to proxy for value, is often imagined to be a solution and its misappropriation a virtue.

It's critically important to understand that measurement is strategic in nature. Senior executives are accountable for the value an organization provides and for corporate reputation. If the measurement of value is poor, then the outlook is grim. However, if the understanding of value is challenged and curated in an empirical way, with a timely focus on quality outcomes rather than circumstantial outputs, then it becomes possible to survive and thrive. Continuous improvement is enabled. Management, in other words, has to be evidence-based. This is of essential concern where an agile organization with an innovation capability is to be cultivated.

What actually should executives measure then? Evidence Based Management, or EBM, proposes four key value areas which organizations can focus on, irrespective of business context. These are Current Value, Unrealized Value, Time to Market, and Ability to Innovate. Let's take a look at each of these areas, and at the nature of the measurements which lie beneath them.

  1. Current Value measures the value delivered to customers or users today. How much is an initiative worth to stakeholders? How satisfied are customers, or employees for that matter? How many people actually use a product or service? From a financial perspective, what is the revenue per employee, or the product-cost ratio?
  2. Unrealized Value measures the value that could be realized by meeting all of the potential needs of customers or users. What does the organization need to do to maximize that value? What is the relative market share of a product or service, and what is the gap between the current experience of customers or users, and an experience that might be said to "delight" them?
  3. Ability to Innovate measures the ability to deliver a new capability or feature which better serves customer or user needs. Do we have data on which functions are used most heavily, and those which are used least often? What is the organization's innovation rate, which is to say, the relative investment in developing new capabilities versus maintenance of the old? How much technical debt has to be paid off, relative to new functionality? What is the defect rate for the product or service, and how many major incidents occur over a given period? How many different versions are being supported? What percentage of a team's time is spent on the product, and how focused on that product are they allowed to be? How much time is spent merging code and context-switching?
  4. Time to Market measures the ability to quickly deliver a new capability, service, or product. How frequently do builds happen, how often is work integrated, and what is the frequency of release? Once a release is made, is additional time and effort needed for the product to stabilize, and if so how much? What is the average time it takes to effect a repair? What are the cycle and lead times to production, and how long does it take to close the validated learning loop?
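Two of the measures named above, the innovation rate and lead time to production, can be expressed as simple calculations. The function names and figures below are assumptions made for illustration; in practice the inputs would come from finance, ticketing, and delivery pipelines.

```python
# Hedged sketch of two EBM-style measures. Figures are invented;
# real inputs would be drawn from across the enterprise.

def innovation_rate(new_capability_spend: float, total_spend: float) -> float:
    """Relative investment in new capabilities versus everything else,
    including maintenance of the old."""
    return new_capability_spend / total_spend

def mean_lead_time(lead_times_days: list) -> float:
    """Average time from starting an item of work to reaching production."""
    return sum(lead_times_days) / len(lead_times_days)

rate = innovation_rate(new_capability_spend=200_000, total_spend=800_000)
lead = mean_lead_time([12, 9, 15, 10])

print(f"Innovation rate: {rate:.0%}")
print(f"Mean lead time: {lead} days")
```

The point is not the arithmetic, which is trivial, but that each number is only meaningful when inspected over time and challenged against an expected outcome.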

That's a lot of things to potentially measure, and the elicitation of the associated data is something that has to be sponsored by top management. Nobody, no employee, nor any consultant brought in for the job, can pull these numbers out of a hat. Getting hold of key value measures isn't the sort of thing that can be delegated away. This is a comprehensive set of data points drawn from a range of functions across the enterprise. No-one below chief officer level is likely to have the organizational reach, or the political sway, to pull in the numbers. Moreover, departmental and mid-tier managers should be keen to engage with this strategic initiative, so that the data obtained will be timely and of good quality. This can require direct and pro-active involvement from the CEO, even if he or she has someone else to collate the measurements and to curate the results. Don't forget that the first step in organizational transformation is to communicate a sense of urgency for change...for which executives have to constantly stay on the ball.

These key value measures, once elicited, provide a kind of transparency over enterprise agility. The data can be visualized in the form of a spider diagram, for example, and a picture then shaped of how "agile" an organization might be. Of course, this picture will not in itself cause an improvement. No report ever can. Simply being in receipt of numbers and charts does not cause people to inspect and adapt. For any change to happen, empirical process control must be established. A change leadership team will need to consider these measurements, challenge those which appear to be weak, and frame an hypothesis about how the situation might be improved.
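Before measures drawn from different scales can share one spider diagram, they need to be normalized onto a common range. A minimal sketch, assuming each area has been scored out of 100 (the scores and scale here are invented):

```python
# Sketch: normalizing four key value area scores to a 0-1 scale so
# that they can share the axes of one spider/radar diagram.
# Scores are invented, and the 0-100 scale is an assumption.

raw_scores = {
    "Current Value": 62,        # e.g. a customer satisfaction index
    "Unrealized Value": 30,     # e.g. a market-gap score
    "Ability to Innovate": 18,  # e.g. innovation rate, as a percentage
    "Time to Market": 45,       # e.g. a release-frequency score
}

# Each axis of the diagram gets the normalized value for its area.
normalized = {area: score / 100 for area, score in raw_scores.items()}

for area, value in normalized.items():
    print(f"{area}: {value:.2f}")
```

A charting library could then plot `normalized` as a radar chart; the shape of the resulting polygon gives the at-a-glance picture of enterprise agility described above.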

We believe that [doing X] for [people Y] will achieve [outcome Z]. We will know that this is true when we see that [measurement M] has changed.

For example, if the innovation rate appears to be poor, they might decide to furnish a product ownership community with a dedicated team for the incubation of new opportunities. Then they can see the effect on the innovation rate, and on the overall picture of enterprise agile health.

We believe that supporting the product ownership community with a dedicated team will achieve an improved innovation rate. We will know that this is true when we see that the number of value propositions being evaluated each month has increased relative to the number of support and maintenance tickets which have been started for existing products.
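Evaluating such a hypothesis is a comparison of one ratio before and after the change. The figures below are invented for illustration; only the shape of the check matters.

```python
# Illustrative check of a hypothesis of this kind: has the ratio of
# value propositions evaluated to maintenance tickets started improved?
# All numbers are invented.

def proposition_ratio(propositions_evaluated: int,
                      maintenance_tickets: int) -> float:
    """Value propositions evaluated per support/maintenance ticket started."""
    return propositions_evaluated / maintenance_tickets

before = proposition_ratio(propositions_evaluated=4, maintenance_tickets=40)
after = proposition_ratio(propositions_evaluated=9, maintenance_tickets=36)

hypothesis_supported = after > before
print(f"Before: {before:.2f}, after: {after:.2f}, "
      f"supported: {hypothesis_supported}")
```

If the hypothesis is not supported, that is still a result: the change leadership team inspects, frames a new hypothesis, and adapts.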

Well? Did the innovation rate of the company improve? What other measures changed when the data was next collated...and what hypothesis ought to be framed and tested next?

These are the bare bones of EBM, or Evidence Based Management. If it looks like something which is impossibly or impractically hard to do, bear this in mind: in a crisis, executives suddenly become interested in empirical evidence. They ditch the pro-forma reports, filters, and stage-gated practices which normally define the organization's way-of-working. They get close to the outcomes instead. They try to exert control over them until the situation stabilizes and the immediate danger recedes. Then they go back to their old, non-empirical practices which have just been found wanting. Can you feel the irony yet?

Perhaps executives should adopt Evidence Based Management as a new standard, if they genuinely want to have a more agile organization. The alternative is bleak. If they don't do so, then the illusion of control -- which comes from managing circumstantial outputs -- will persist until reality bites them. But will the outcome of such ignorance be survivable at that point? Or will the more agile, and the more innovative, have moved in for the kill?
