Insight

Four things we do that prevent value delivery (part one)

Background

On November 22nd 2019 I gave the closing keynote at Scrum Deutschland, a talk called ‘The Four Things You Do To Prevent Value Delivery’. In this keynote, I discussed the trends I found during my research at multiple organisations.

For background, I’m often called into organisations to investigate their ability to deliver value. Such a study is holistic: it involves looking at the organisational design, technical capabilities, culture and knowledge, and the type of control and metrics used to define success. I usually get these kinds of requests for one of three reasons:

  1. Something went wrong, and the organisation wishes to know what happened, where, and when.
  2. Work (or sometimes even a transformation) is ongoing, and the organisation wants a second opinion on risk or improvement factors.
  3. The organisation expects an inquiry or audit into its delivery capability and wishes to be prepared.

As expected, repeated studies have allowed me to observe some patterns. Some of these have to do with ‘the system’: the whole of hierarchy, setups, technology and so on that makes up an organisation. What surprised me, though, was the human element. Not its presence, but how much it affected everything. I didn’t find any malicious behaviour at any point. What I found were people trying their best, making an effort to be helpful, and yet this often resulted in issues with the delivery capability. It is these observations that I shared in my (happy to say, well-received) keynote. And now in these four articles.

Example

Let’s start with an example.

I was called in to assess an organisation that was having difficulties. They were several months overdue on delivering a core system. Estimates were wildly unreliable, the backlog had only grown, morale was low, and yet nothing seemed to be happening. On paper, the organisation used a combination of Scrum and project management. Upon closer study of where the work for each team (and team member) actually came from, what I found was a still-running Lean improvement programme, several projects both inside and outside the scope of two programmes, and Scrum, via a set of Product Owners. With no clear, single source of truth for priorities and work, team members would switch scope depending on who had spoken to them last, and work kept being added to an already overburdened system.

When I discussed this with the people in charge, it was not new information to them. The approach had been deemed to work when it was devised, and so it should work now; clearly the issues lay somewhere else. When I pressed them to reconsider the organisational setup using the experience gathered thus far, I was met with the following quote:

‘We will not be discussing the approach again.’

Elaboration

In retrospect, it was a predictable response. Most people, myself included, would probably point to internal politics as the culprit here. It’s easy to launch initiatives, but very costly to cancel them. It is politically expedient to be seen as successful all the time. That is a correct, well-fitting explanation for the quote.

And yet, it nagged at me. Sure, internal politics are awful and cause a lot of damage to organisations, and those playing them are walking a tightrope of damage control. But was that the whole answer? There was a stubbornness in their response; it was more than refusing to admit failure to prevent loss of face. They were more defensive than that. It puzzled me.

A few months later, I started reading Annie Duke’s book ‘Thinking in Bets’, and suddenly things started coming together. What if the politics came second? What if I was dealing with people who had so thoroughly based their identity on their ability to solve problems and manage complex things that the idea that they ‘got it wrong’ was inconceivable to them on a fundamental level? But then, that raised the question: why was being wrong about something such a big deal?

So, down the rabbit hole I went.

Explanation (Why this is wrong)

It turned out the rabbit hole wasn’t that deep; the answer wasn’t that difficult. People can’t handle complexity. Cognitively, I mean. We have a bunch of built-in patterns that reduce the big, scary, chaotic world down into manageable chunks. Incredibly useful back when we climbed out of the trees and started living in small groups in a dangerous world full of things trying to kill us. Got bitten by a snake in the grass once? From now on, the grass is dangerous.

Much of the research into this collection of biases and fallacies was done by Daniel Kahneman and Amos Tversky, resulting in a Nobel Prize and a book that should be mandatory reading for everyone with a functioning brain over 16: Thinking, Fast and Slow. While parts of it have since been critiqued as part of the larger replication crisis in science, it offers an invaluable start in understanding the human condition. This field, behavioural economics, has been expanding ever since and has a lot to offer those of us trying to enact change.

So, back to dealing with complexity. It turns out our brains make a considerable effort to reduce the complexity of the world for us through the use of these biases.

It wasn’t that those people didn’t want to admit fault to others (though they really didn’t want that either). Much more, it was that admitting the result was different from what they had predicted implied there were things outside their control or knowledge, meaning they were far less in control of their fate than they wanted to be. And that simply could not be. If they could blame anything but complexity, or at least shift the conversation away from it, things would make sense again. And so the discussion was shut down.

‘We will not be discussing the approach again.’

Solution

This is not a solution, of course. It is just sticking one’s head in the sand. And yet it is what we do all too often, either by ignoring things (consciously or unconsciously) or by explaining them away. Until reality catches up, as it always does.

So what then? A useful way of looking at it is the difference between playing chess and playing poker. Chess is a game of complete information: the better moves win. Poker is a game of incomplete information: you can play a hand well and still lose, because luck is part of the game. In all honesty, that comparison didn’t come from me. I read it in a book called ‘Thinking in Bets’, by Annie Duke. I recommend it; there’s a lot of valuable advice in there. But the core element is one I’d like to share here: start thinking in terms of bets. What’s the chance of being right? What’s the risk of being wrong? What happens if I’m wrong? What should I see if I’m right? And because you’re playing a game of imperfect information, a single bad outcome doesn’t necessarily mean your decision was wrong. It could just be bad luck. So maybe try some things a few times, if the cost isn’t too high, because a betting strategy can only be evaluated after multiple bets. Do small experiments, multiple times. And if you find something that works today, don’t assume it’ll work tomorrow. Think in bets.
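If you like seeing that point made concrete, here is a minimal sketch in Python (the numbers are made up for illustration; this is my own toy example, not anything from Duke’s book). A decision that is right 60% of the time can easily look terrible on any single trial, and only reveals its quality over many repetitions:

```python
import random

def small_bet(p_right=0.6, payoff=3, cost=-2):
    # One small experiment: pays off with probability p_right,
    # otherwise you eat the cost. Illustrative numbers only.
    return payoff if random.random() < p_right else cost

# A single trial tells you almost nothing: a good bet can simply lose.
print("single trial:", small_bet())

# The strategy only shows its true quality over many small bets.
outcomes = [small_bet() for _ in range(1000)]
print("average over 1000 bets:", sum(outcomes) / len(outcomes))
# Expected value per bet: 0.6 * 3 + 0.4 * -2 = 1.0
```

The point of the sketch is the same as the advice above: judge the decision process over repeated, cheap experiments, not any single outcome.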

And try to prevent yourself from being certain of things, because that’s when you lose your connection with reality. As always, the only certainties you have in life are that one day it’s going to end, and that before (and after) that time, the government will take half your stuff. The rest you should be exploring.

Don’t play chess when you should be playing poker.

Sources

Daniel Kahneman - Thinking, Fast and Slow
Annie Duke - Thinking in Bets
Robert Burton - On Being Certain
Barry O’Reilly - Unlearn
Geoff Tuff & Steven Goldbach - Detonate