Developing a Thinking Organization. Part I
With no end in sight to ‘austerity’ and only meagre expectations for growth in consumer demand, the need for organizations to be different and to perform better is acute. To borrow loosely from H.G. Wells’s atmospheric introduction to The War of the Worlds(1):
No one would have believed that in the early years of this decade, your organization’s affairs would be being scrutinised from afar as someone with a microscope studies creatures that swarm and multiply in a drop of water. Few even considered the possibility of intelligent life amongst your competitors and yet, in meeting rooms and boardrooms around the world, intellects vast, cool and unsympathetic were regarding your organization with envious eyes and slowly and surely they were drawing up their plans against you…
Perhaps a rather uncomfortable truth conveyed here is that today’s leaders will be bending their minds to the business of winning with a greater intensity than we’ve seen for a generation. As you square up for this battle of minds, you may become concerned about the advantage they’ll get from the terabytes of ‘Big Data’ they have at their disposal or their army of analysts poised to deliver unique competitor intelligence. But the simple truth is that whatever resources they might have at their disposal, the only factor that will allow their deliberations to be more meaningful and insightful than those of your leadership team is the speed and accuracy of their problem solving and decision making — in short, their ability to think.
In this paper I want to explore how improving the brainpower of your organization might become the key to your competitive success. I’ll start by looking at why the thinking patterns we, as individuals, use to run the majority of our lives cannot, without modification, be relied upon to shape the thinking within our organizations. I’ll explore how these thinking patterns can be modified to produce superior cerebral performance for both individuals and teams and, finally, I’ll go on to outline how some of these ideas can be introduced in such a way as to create a truly ‘Thinking Organization.’
What’s wrong with the way we think?
We might expect that the ability to think clearly and effectively is the essential component of leadership and management. After all, how did these good people secure their elevated positions without the ability to appraise complex situations, solve problems, make decisions and manage risks and opportunities? The evidence to support this view, however, is not encouraging:
- Risky trading and high-level mismanagement led to Lehman Brothers’ $600 billion demise. The financial crisis triggered by this collapse is estimated to have indirectly caused the loss of two million jobs.
- Hoover in the UK offered two free flights for every purchase over $150. The flights cost more than the products, losing the company a cool $74 million.
- Famously, Coca-Cola, armed with millions of dollars of marketing studies, launched “New Coke.” After an astonishingly negative public reaction, it was withdrawn after three months and Coca-Cola returned to the old formula.
- IBM hired Microsoft to develop an operating system for its new PC — but let Microsoft keep the rights to the software.
- Or one of my particular favourites, Gerald Ratner, at one time the largest jewellery retailer in the UK, announced some of his products were “total crap.” The comment wiped $750 million off the value of his business.
A list of examples of any length doesn’t prove that poor quality thinking is endemic across all organizations. What it does show is that one piece of poor thinking can make or break a career and perhaps even an entire organization. As long ago as 1995, the Yankelovich/Kepner-Tregoe study “Problem Solving and Decision Making”(2) found that senior executives took a dim view of the decision-making prowess of large organizations. Of the executives surveyed, 80% felt that executives frequently failed to achieve their objectives when making decisions. When it came to problem solving, only half felt confident in their company’s ability to ask the right questions to find the root cause of a problem. More recently, in April 2012, a OnePoll(3) study of 500 senior managers conducted for Kepner-Tregoe concluded that in over 50% of cases, improvement initiatives failed because of poorly defined objectives and an inability to think through how to make them work.
The evidence that there’s room for improvement in the way organizations think is pretty compelling and if we are to understand the pathology of this condition, we probably need to start by looking at how we as individuals think — after all, the “thinking organization” is simply an aggregation of all the individual thinking that’s going on within it. To begin this exploration, I turn first to Daniel Kahneman’s seminal work, Thinking, Fast and Slow(4).
In his book, Kahneman explains that our minds use two modes of thought: the automatic, instant, intuitive, involuntary responses provided by what he terms our “System 1” thinking; and then the more controlled, effortful, analytical and considered thoughts supplied by our “System 2” minds. To illustrate the difference between these two thinking modes, Kahneman simply asks us to consider the following two mathematical problems:
2 + 2
17 x 24
As you looked at the first problem, the number four sprang to mind with (hopefully) no effort. As you went on to consider the second, no immediate answer came to mind. You probably knew you could solve it, but without spending time on it you would not be certain that the answer is 408. These two problems beautifully illustrate the difference between our System 1 and System 2 thinking and also allow us to sense that where possible, we will favour System 1 thinking, as it requires no effort, rather than System 2, which is taxing, difficult and in the case of this example, brings back unpleasant memories of school.
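If you did engage System 2 on the second problem, the effortful computation it performs can be made explicit. One common mental route is to decompose the multiplication:

```latex
17 \times 24 = 17 \times (20 + 4) = 340 + 68 = 408
```

It is precisely this chain of deliberate intermediate steps, absent from the instant “four” of the first problem, that marks System 2 at work.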
Following this line of reasoning, it becomes plausible to imagine that most leaders and managers within organizations will use ‘System 1’ thinking wherever possible to minimise effort for, as we know, ‘effort’ is hard work and typically in short supply. This would be of little concern if we could trust our System 1 to conduct consistently and reliably the thinking required for high quality problem solving and decision making but as Kahneman argues, such trust would be misplaced. He explains that the really scary part is that we cannot tell when our intuitive System 1 responses are based on sound judgement or when our System 1 is simply making things up. Could this be the first clue in our search for the cause of poor quality organizational thinking?
To understand why we have to be careful of the intuitive judgements delivered by System 1, we need to recognise that our intuition is simply and only recognition. This appreciation can help us understand when we can trust System 1 and when it might be a mistake to do so. Faced with a situation that we recognise, our System 1 judgements will have value and in very familiar situations, we might even be said to have developed expert intuition. A production manager may have a rich experience of the foibles of a particular manufacturing system, or a marketer may have a deep understanding of the dynamics of a niche market. In both these situations, the way in which their “expert” System 1 minds instantly resolve related issues can probably be relied upon because they recognise the specific issue and can access pertinent ideas from directly relevant experience.
The challenge comes when the issue that requires thought goes beyond our direct experience.
Psychologists tell us that ideas can be thought of as nodes in a vast network called associative memory in which each idea is linked to many others. When an idea forms, it does not simply trigger one other idea, it instantly activates many ideas which in turn activate many more in an exponential explosion of thought. As only a very few of these activated ideas register in our conscious minds, we cannot be certain of how germane the set of ideas is that System 1 uses to shape the insights we draw or the conclusions we make. Never wanting to be short of something to say, we humans have intuitive opinions and feelings about almost everything that comes our way and these opinions and feelings are influenced by the unconscious ideas activated within our associative memory, whether directly relevant or not. In the immortal words of Arnold H. Glasow, “The fewer the facts, the stronger the opinions.”
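The mechanics described above can be sketched as a toy model. Everything below is invented for illustration (the network, the decay rate, the awareness threshold); it is a caricature of associative memory, not a claim about how Kahneman models it. Activation spreads outward from a seed idea, fading with each hop, and only the few ideas whose activation clears a threshold ever “register” consciously:

```python
# Toy model of spreading activation in an associative memory.
# The network, decay rate and threshold are illustrative assumptions.
from collections import defaultdict

def spread_activation(links, seed, decay=0.5, threshold=0.2):
    """Activate `seed`, then propagate a decaying signal along links.

    Returns the set of ideas whose activation exceeds `threshold`,
    i.e. the small subset that would register consciously; everything
    else stays activated but below awareness."""
    activation = defaultdict(float)
    frontier = {seed: 1.0}
    while frontier:
        next_frontier = {}
        for idea, strength in frontier.items():
            if strength <= activation[idea]:
                continue  # already activated at least this strongly
            activation[idea] = strength
            # Each activated idea passes a weaker signal to its neighbours.
            for linked in links.get(idea, []):
                next_frontier[linked] = max(next_frontier.get(linked, 0.0),
                                            strength * decay)
        frontier = next_frontier
    return {idea for idea, a in activation.items() if a >= threshold}

# A tiny associative network: each idea links to several others.
links = {
    "new factory": ["country A", "costs"],
    "country A": ["radio story", "labour market"],
    "radio story": ["political party"],
}
conscious = spread_activation(links, "new factory")
```

In this sketch, thinking about the “new factory” consciously surfaces “country A”, “costs”, even the “radio story”, while “political party” is activated too weakly to register: influential, but invisible to the thinker.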
To highlight the impact that unconsciously activated ideas can have on the seemingly rational output of our System 1 minds, allow me to use a few of the examples from Kahneman’s book which I personally find illuminating and insightful.
Why can our thinking be unconsciously influenced?
Psychologists suggest that our System 1 can be influenced by what is termed the priming effect: recent stimuli may have activated ideas in our associative memory that remain below the radar of our conscious minds. A perfect demonstration of this priming effect was conducted in an office kitchen at a British university(5). The office used an ‘honesty box’ for people to pay for tea and coffee, with a list of suggested prices clearly displayed. One day a picture appeared just above the price list, and it changed each week, showing either flowers or a pair of eyes that appeared to look directly at the observer. In the first week, eyes stared at the coffee and tea drinkers and the average contribution was 70p; in the second week, the picture changed to flowers and contributions fell to 15p!
Figure 1. Pounds paid per litre of milk consumed, as a function of week and image type (eyes vs. flowers)
Now let’s imagine that the first item on your agenda one morning is to agree on the location of your new manufacturing facility. As you drove into work, you were vaguely aware of a radio story reporting a minor electoral success for a political party you disapprove of, in one of the countries being considered for the new facility. You found the story irritating, but it should have no bearing on the morning’s deliberations. Your associative machine, however, might easily have made a connection, and later in the meeting, when the merits of that country are being discussed, your System 1 has been primed by the news story and its outputs have become biased. Without the checks and balances provided by System 2, System 1 might nudge you towards the wrong choice for the wrong reasons!
Why are we wired to jump to conclusions?
Kahneman argues that as soon as we are faced with a problem, our intuitive System 1 searches associative memory for something it recognises, in search of a possible cause. To minimise effort, System 1 builds the most logical and appealing solution it can from your store of experience. Again, you will not be aware of the experiences your System 1 references in its search for this coherent solution, nor that any ambiguity has been suppressed as it pushes the answer into your mind.
Consider the classic example of two boxes: the left-hand box shows the characters A, B, C and the right-hand box the numbers 12, 13, 14, with the middle character in each drawn identically, as an ambiguous shape halfway between a B and a 13. You almost certainly read the left box as ABC and the right as 12 13 14, yet the middle item in both boxes is the same. So why did you arrive at this solution? Your System 1 referenced a learned pattern, in this case your “ABCs” from school, and used that pattern to provide the answer; for the left-hand box, the reading A 13 C is of course equally correct. The shape is ambiguous, but you jumped to a conclusion about its identity and did not become aware that System 1 had effortlessly, and in this case arbitrarily, resolved the ambiguity for you. The most important point is that you made a clear choice, but you didn’t know it.
As consultants in the field of problem solving, we often see examples of people ‘jumping to cause’ because of the influence of learned patterns woven into their System 1 minds. We work with a bank that has a substantial branch network. The operating results for all branches are reviewed monthly by the executive committee. Over several months one particular branch showed a steady decline in the volume of transactions, so the executive committee began an investigation. One Vice President pointed out that the decline had started at about the same time as the appointment of a new manager. “It’s always the same pattern,” he said, “these people take too long to get to know the local needs and customers.” The unfortunate manager was moved on, a decision that ignored the fact that two major defence contractors near this branch were laying off employees as a key contract came to an end. This important and available data didn’t fit the simple, powerful and appealing solution constructed by the Vice President’s System 1.
Why is what we see all there is?
It’s understood that an essential design feature of our associative machine is that it represents only activated ideas: information that is not retrieved might as well not exist. System 1 excels at constructing the best possible story from the currently activated ideas. The amount and quality of data on which the story is based is largely irrelevant, provided the solution feels logical and appealing. Consider the following:
“Will Peter make a good Director of Marketing? He is intelligent and creative.”
Your System 1 delivered the answer quickly, and it was probably yes. If so, you picked the best answer available from very limited information and used it to construct a coherent story. It takes your System 2 to ask, “What do I need to know before I can form an opinion about who should get the marketing job?” System 2 ‘analysis’ ensures that a more complete set of criteria is considered and may, for example, surface the fact that Peter has a track record of being lazy and unproductive. What would your answer be now?
Why do we wrestle with what ‘Good’ looks like?
It’s believed that our associative machine is continually drawing on our stored experience to make sense of the world, and we only really become surprised, triggering careful System 2 analysis, when something conflicts with our model of what is normal. Consider the question: how many animals of each kind did Moses take into the ark?
Our System 1 checks the question and passes it as legitimate: the idea of animals going into the ark sets up a biblical context, and Moses is to be expected in that context. The number of people whose System 1 points out that Moses didn’t take any animals into the ark — Noah did — is so small that psychologists have dubbed this the ‘Moses illusion’.
The challenge here is that we don’t know what experiences our System 1 has accessed to create the ‘norms’ it uses to test validity in different situations. I was recently talking to the CEO of a dairy company who was delighted that sales in a certain category had grown by 50%. Apparently his last Sales Director, who had been with the firm for many years, had resigned and a replacement had come in with a very different approach. “I thought we were doing a great job in that category” he explained, “I had no idea of the potential.” The CEO’s System 1 had constructed a norm or benchmark for sales in this category, which System 2 never felt the need to challenge until the advent of the new Sales Director.
Why can we be prone to answering the wrong question?
We are often confronted with difficult problems to solve or choices to make and, never wanting to be short of an answer, System 1 will always try to simplify the challenge by substituting a difficult question with an easier one.
When we are faced with tough questions such as, “How should our primary source of competitive advantage evolve over time?” our System 1 taps our associative memory for something related but a bit less challenging, such as, “What did that customer I was talking to last week say about why he liked us?” Such substitute questions may help but, of course, will sometimes lead to serious errors.
System 1 will generate quick answers to difficult questions without imposing much work on our lazy System 2. An easier question is likely to be evoked and very easily answered, providing an off-the-shelf solution to each difficult question.
This substitution is an automatic process for System 1. System 2 can, of course, scrutinise the answer System 1 provides, or choose to modify it. More often, though, lazy System 2 follows the path of least effort and endorses the substitute answer without much scrutiny: you will not be at a loss for an answer, you will not have to work very hard, and you may not even notice that you didn’t really answer the question you were asked.
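The substitution mechanism can be sketched as a toy lookup with a silent fallback. The questions, answers and mappings below are all invented for illustration; the point is only the shape of the process: when the hard question has no ready answer, an easier associated question is answered instead, and the caller never learns a swap took place:

```python
# Toy illustration of question substitution. All questions and
# answers here are invented for illustration.

# Easy questions System 1 can answer instantly from experience.
EASY_ANSWERS = {
    "what did that customer say he liked about us?": "our fast delivery",
}

# Maps a hard question to the easier question System 1 substitutes for it.
SUBSTITUTIONS = {
    "how should our competitive advantage evolve?":
        "what did that customer say he liked about us?",
}

def system1_answer(question):
    """Return a quick answer, silently swapping in an easier question
    when the original cannot be answered directly."""
    if question in EASY_ANSWERS:
        return EASY_ANSWERS[question]
    easier = SUBSTITUTIONS.get(question)
    if easier is not None:
        # The swap happens below conscious awareness: the caller
        # receives a confident answer to a question never asked.
        return EASY_ANSWERS[easier]
    return None  # a rare admission of ignorance

print(system1_answer("how should our competitive advantage evolve?"))
# -> our fast delivery
```

The strategic question receives a fluent, confident answer, but it is really the answer to last week’s customer conversation; nothing in the returned value signals that the substitution occurred, which is exactly why a lazy System 2 so readily endorses it.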
The five examples I have used to illustrate the inner workings of our System 1 minds give some sense of the power and complexity of our intuitive intelligence. Our System 1 minds have evolved to prioritise speed over accuracy when faced with problems and decisions. Making an almost instantaneous call on fight, flight or freeze, rather than taking the time to think through the real nature of the threat and calculate the precise probability of the likely outcome, has after all allowed our species to survive and thrive. But can intuitive System 1 be relied upon to secure the future success of the complex organizations for which we work, in the same way that it has secured that of our species?
If we accept that as individuals we will favour the ease and simplicity of System 1 thinking, it follows that without conscious effort to do things differently, our organizations will “Think” in a similarly flawed way. As we struggle to win in highly competitive markets, is it really okay for our strategic choices to be informed by largely irrelevant emotional ‘Priming’ or ‘what you see is all there is’ thinking? Should problems be addressed within the context of a ‘norm’ of poor performance, by ‘jumping to conclusions’ or by taking at face value the probable cause of vaguely similar deviations we have encountered in the past? Should we really allow some of the most difficult questions we face as organizations to be ‘substituted’ for questions that are easier to answer?