
Long Range Planning 38 (2005) 299–319    www.lrpjournal.com

Failing to Learn and Learning to Fail (Intelligently):
How Great Organizations Put Failure to Work to Innovate and Improve

Mark D. Cannon and Amy C. Edmondson

Organizations are widely encouraged to learn from their failures, but it is something most
find easier to espouse than to effect. This article synthesizes the authors’ wide-ranging research
in this field to offer a strategy for achieving the objective. Their framework relates technical
and social barriers to three key activities – identifying failure, analyzing failure and
deliberate experimentation – to develop six recommendations for action. They suggest
that these be implemented as an integrated set of practices by leaders who can ‘walk the
talk’ and work to shift the managerial mindset in a way that redefines failure away from its
discreditable associations, viewing it instead as a critical first step in a journey of discovery
and learning.
© 2005 Elsevier Ltd. All rights reserved.

Introduction
The idea that people and the organizations in which they work should learn from failure has
considerable popular support – and even seems obvious – yet organizations that systematically
learn from failure are rare. This article provides insight into what makes learning from failure so
difficult to put into practice – that is, we address the question of why organizations fail to learn
from failure.
We also note that very few organizations experiment effectively – an activity that necessarily
generates failures while trying to discover successes – to maximize the opportunity for learning
from failure and minimize its cost. In short, we argue that organizations should not only learn from

failure – they should learn to fail intelligently as a deliberate strategy to promote innovation and
improvement. In this article, we identify the barriers embedded in both technical and social systems
that make such intelligent use of failure rare in organizations, and we offer recommendations for
managers seeking to improve their organization’s ability to learn from failure.

Research foundations and core ideas


Over the past decade or so, our research has revealed impediments to organizational learning from
failure on multiple levels of analysis. The first author has investigated individuals’ psychological
responses to their own failures, demonstrating the aversive emotions people experience and how
that inhibits learning. The second author has identified group and organizational factors that limit
learning from failure in teams and organizations. We have worked together for a number of years to
conceptualize and develop recommendations for how to enable organizational learning from
failure, drawing from our own and others’ research. In this article, we hope to provoke reflection
and point to possibilities for managerial action by synthesizing diverse ideas and examples that
illuminate both the challenges and advantages of learning from failure.
We have three core aims. First, we aim to provide insights about what makes organizational
learning from failure difficult, paying particular attention to what we see as a lack of understanding
of the essential processes involved in learning from failure in a complex organizational system such
as a corporation, hospital, university, or government agency.
Second, drawing from our own field research conducted over the past decade as well as from
additional sources, we develop a model of three key processes through which organizations can
learn how to learn from failure. We argue that organizational learning from failure is feasible, but
that it involves skillful management of three distinct but interrelated processes: identifying failure,
analyzing failure, and deliberate experimentation. Managed skillfully, these processes help managers
take advantage of the lessons that failures offer, which otherwise tend to be ignored or suppressed in
most organizations.
Third, we argue that most managers underestimate the power of both technical and social barriers
to organizational learning from failure, leading to an overly simplistic criticism of organizations and
managers for not exploiting learning opportunities. Although numerous prior writings have
advocated learning from failure, there is little advice as to how to overcome barriers to making this
happen.
In summary, the intended contribution of this article is to explicate the challenges of
organizational learning from failure and to build on this explanation to design new strategies for
action. Our strategies are communicated through a framework that relates two types of barriers to
three key activities to develop six areas for action. Taken together, our framework, examples and
recommendations are intended to provide a starting point for concrete managerial action.

Organizational failure defined


Failure, in organizations and elsewhere, is deviation from expected and desired results. This
includes both avoidable errors and the unavoidable negative outcomes of experiments and risk
taking.1 We define failure broadly to include both large and small failures in domains ranging from
the technical (a flaw in the design of a new machine) to the interpersonal (such as a failure to give
feedback to an employee with a performance problem). Drawing from our own and others’
research, we suggest that an organization’s ability to learn from failure is best measured by how it
deals with a range of large and small outcomes that deviate from expected results rather than
focusing exclusively on how it handles major disasters. Deviations from expected results can be
positive or negative, and even positive deviations present opportunities for learning. However, we
focus on negative surprises because of the unique psychological and organizational challenges
associated with learning from them.




Small failures versus large failures


Large and well-publicized organizational failures – such as the Columbia and Challenger Shuttle
tragedies, the Colorado South Canyon Firefighter deaths, the fatal drug error that killed a Boston
Globe correspondent at Boston’s Dana-Farber Cancer Institute, and the Parmalat and Enron accounting
scandals – argue for the necessity of learning from failure. Recognizing the need to understand and
learn from consequential incidents such as these, executives and regulators often establish task forces
or investigative bodies to uncover and communicate the causes and lessons of highly visible failures. By
their nature, many such efforts will come too late for the goal of organizational learning from failure.
The multiple causes of large failures are usually deeply embedded in the organizations where the
failures occurred, have been ignored or taken for granted for years, and rarely are simple to correct.2
An important reason that most organizations do not learn from failure may be their lack of
attention to small, everyday organizational failures, especially as compared to the investigative
commissions or formal ‘after-action reviews’ triggered by large catastrophic failures. Small failures
are often the ‘early warning signs’ which, if detected and addressed, may be the key to avoiding
catastrophic failure in the future.3
Our research in organizational contexts ranging from the hospital operating room to the
corporate board room suggests that an intelligent process of organizational learning from failure
requires proactively identifying and learning from small failures. Small failures are often overlooked
because at the time they occur they appear to be insignificant minor mistakes or isolated anomalies,
and thus organizations fail to make timely use of these important learning opportunities. We find
that when small failures are not widely identified, discussed and analyzed, it is very difficult for
larger failures to be prevented.4

Barriers to organizational learning from failure


Learning from failure is a hallmark of innovative companies but, as noted above, is more common
in exhortation than in practice. Most organizations do a poor job of learning from failures, whether
large or small.5 In our research, we found that even companies that had invested significant money
and effort into becoming ‘learning organizations’ (with the ability to learn from failure) struggled
when it came to the day-to-day mindset and activities of learning from failure.6
Instead, organizations’ fundamental attributes usually conspire to make a rational process of
diagnosing and correcting causes of failures difficult to execute. A prominent tradition in
managerial research examines the importance of considering both social and technical attributes of
organizations as systems. Recognizing that organizations are simultaneously social systems and
technical systems, management researchers have long considered the need to examine how features
of tasks and technologies, together with social, psychological and structural factors, shape
organizational outcomes.7 We draw from this basic framework to categorize barriers to
organizational learning from failure into technical and social causes. We then describe three
specific learning processes through which these barriers can be overcome.

Barriers embedded in technical systems


Research on learning has shown that limitations in human intuition and ‘sense-making’ can lead
people to draw false conclusions that inhibit both individual and collective learning. Technical
barriers to learning from failure thus include a lack of the basic scientific ‘know how’ to be able to



draw inferences from experiences systematically, as well as the presence of complex systems or
technologies that are inherently difficult to understand.8 In sum, when diagnosing cause-effect
relationships is technically difficult, learning from failure will necessarily be challenging as well.
Task design can obscure failures. For example, excess work-in-process (WIP) inventory slows
the discovery of manufacturing process errors. By the time a large batch of defective inventory
reaches the next manufacturing step, the error has been repeated many times, leading to far more
rework than if the error had been caught immediately. A central insight of lean manufacturing,
therefore, is the redesign of tasks to make failures transparent by reducing WIP inventory through
smaller and smaller batch sizes.9
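
The arithmetic behind this insight is easy to sketch. The short simulation below is our illustration, not from the article; the defect rate and counts are invented. It estimates how many defective units pile up before a batch-level inspection at the next step catches a process error:

```python
import random

def average_rework(batch_size: int, defect_onset_rate: float = 0.001,
                   units: int = 10_000, trials: int = 200) -> float:
    """Average number of defective units produced before an error is
    caught, assuming the next step only sees (and inspects) complete
    batches. An illustrative model, not a lean-manufacturing standard."""
    total = 0
    for _ in range(trials):
        defective = 0
        out_of_spec = False
        for i in range(units):
            if not out_of_spec and random.random() < defect_onset_rate:
                out_of_spec = True            # process drifts out of spec
            if out_of_spec:
                defective += 1                # every unit is now defective
            if (i + 1) % batch_size == 0 and out_of_spec:
                break                         # batch reaches next step; error caught
        total += defective
    return total / trials

for b in (1000, 100, 10, 1):
    print(f"batch size {b:>4}: ~{average_rework(b):.0f} defective units per incident")
```

Because the error only becomes visible when a batch moves downstream, the expected rework scales with batch size, which is why shrinking batches makes failures transparent.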
Technical barriers to gleaning failure’s lessons include an inadequate understanding of the
scientific method and an inability to engage in the following aspects of rigorous analysis: problem
diagnosis, experimental design, systematic analysis of qualitative data, statistical process controls,
and statistical analysis.

Barriers embedded in social systems


Social barriers to learning from failure start with the strong psychological reactions that most
people have to the reality of failure. Being held in high regard by other people, especially those with
whom one interacts in an ongoing manner, is a strong fundamental human desire, and most people
tacitly believe that revealing failure will jeopardize this esteem. Even though people may learn from
and appreciate others’ disclosures of failure, positive impressions of the others in question may be
eroded in subtle ways through the disclosure process. Thus, most people have a natural aversion to
disclosing or even publicly acknowledging failure.

Even outside the presence of others, people have an instinctive tendency to deny, distort, ignore,
or disassociate themselves from their own failures, a tendency that appears to have deep
psychological roots.10 The fundamental human desire to maintain high self-esteem is accompanied
by a desire to believe that we have a reasonable amount of control over important personal and
organizational outcomes. Psychologists have argued that these desires give rise to ‘positive
illusions’ – unrealistically positive views of the self, accompanied by illusions of control – that in
fact enable people to be energetic and happy and to avoid depression. Some even argue that positive
illusions are a hallmark of mental health.11 However, the positive illusions that boost our self-
esteem and sense of control and efficacy may be incompatible with an honest acknowledgement of
failure, and thus, while promoting happiness, can inhibit learning.
Managers have an added incentive to disassociate themselves from failure because most
organizations reward success and penalize failure. Thus, holding an executive or leadership position
in an organization does not imply an ability to acknowledge one’s own failures.12 Sidney
Finkelstein’s in-depth investigation of major failures at over 50 companies suggested that the
opposite might be the case:

Ironically enough, the higher people are in the management hierarchy, the more they tend to
supplement their perfectionism with blanket excuses, with CEOs usually being the worst of all. For
example, in one organization we studied, the CEO spent the entire forty-five-minute interview
explaining all the reasons why others were to blame for the calamity that hit his company.
Regulators, customers, the government, and even other executives within the firm – all were
responsible. No mention was made, however, of personal culpability.13



Organizational structures, policies and procedures, along with senior management behavior, can
discourage people from identifying and analyzing failures and from experimenting.14 Many
organizational cultures have little tolerance for – and punish – failure. A natural consequence of
punishing failures is that employees learn not to identify them, let alone analyze them, or to
experiment if the outcome might be uncertain. Even in more tolerant organizations, most managers
do not reward behaviors which acknowledge failure by offering raises, promotions, or other privileges.
Next, when failures are identified, social factors inhibit the constructive discussion and analysis
through which shared learning occurs. Most managers do not have strong skills for handling the hot
emotions that often surface – in themselves or others – in such sessions. Thus, discussions that
attempt to unlock the potential learning from the failure can easily degenerate into opportunities
for scolding, finger-pointing or name-calling. Public embarrassment or private derision can leave
participants with ill feelings and strained relationships rather than learning. For this reason,
effective debriefing and learning from failure requires substantial interpersonal skill.
The unfortunate reality for most organizations is that the barriers to learning from failure
described above are all but hard wired into social systems, and greatly reduce the ability of most
organizations to learn from failure. Thus, not only do few organizations systematically capture
failure’s lessons, most managers lack a clear understanding of what a proactive process of learning
from failure looks like.
Without a clear model of what it takes to learn from failure, organizations are at a disadvantage
in facing these barriers. While fully acknowledging the magnitude of the challenge, we suggest that
breaking the learning process down into more tangible component activities greatly enhances the
likelihood of gleaning failures’ lessons. In the next section, we identify and explain three distinct
processes through which effective organizations can proactively learn from failure.

Three processes for organizational learning from failure


Learning from failure is as much a process as an outcome. Identifying the component activities
through which this process can occur is an initial step in making it happen. We therefore offer three
core organizational activities through which organizations learn from failure: (1) identifying failure,
(2) analyzing failure, and (3) deliberate experimentation. They are presented in order of increasing
challenge – both organizationally and in terms of the technical and interpersonal skills required.
Breaking this encompassing organizational learning process into narrower component parts
suggests a strategy for building new competencies that starts with the least challenging process of
identifying failure and builds up to the more challenging one of deliberate experimentation. In this
section, we further describe these activities and provide illustrations of organizations that have
successfully enacted them. These illustrations were deliberately collected from multiple industries
and organizations so as to inform practitioners and researchers across diverse contexts.

Identifying failure
Proactive and timely identification of failures is an essential first step in the process of learning from
them. One of the revolutions in manufacturing – the drive to reduce inventory to the lowest
possible levels – was stimulated as much by the desire to make problems and errors quickly visible as
by the desire to avoid other inventory-associated costs. As Hayes and his colleagues have noted,
surfacing errors before they are compounded, incorporated into larger systems, or made
irrevocable, is an essential step in achieving high quality.
Indeed, one of the tragedies in organizational learning is that catastrophic failures are often
preceded by smaller failures that were not identified as being worthy of examination and learning.
In fact, these small failures are often the key ‘early warning signs’ that can provide the wake up call
needed to avert disaster further down the road. Social system barriers are often the key driver of this
kind of problem. Rather than acknowledge and address a small failure, individuals have a tendency
to deny the failure, distort the reality of the failure, or cover it up, and groups and organizations
have the tendency to suppress awareness of failures.



The tendency to ignore failure can allow failures to be repeated, developing a smaller failure into
a bigger one. For example, Finkelstein presents Jill Barad at Mattel as an illustration of failing to
acknowledge and learn from mistakes in a timely manner. In Mattel’s ill-fated acquisition of the
Learning Company, Barad first overlooked the problems that the organization was having prior to
the acquisition. An opportunity to acknowledge the failure came when the third quarter 1999
earnings turned out to be a loss of $105 million, rather than a profit of $50 million as she expected.
However, rather than address the failure, she remained optimistic and predicted significant profits
for the next quarter; instead, there was a loss of $184 million. Once again, rather than acknowledge
the failure and learn from it, she repeated the same mistake for the next two quarters as well, thus
making the same mistake for a total of four quarters.
Similarly, an examination of the failed HIH Insurance Group in Australia reveals that
organizational collapse often unfolds in a set of phases, including an ‘early warning sign’ phase in
which leaders typically do not openly identify or respond constructively to failures. Rather than
acknowledge failure and respond appropriately, management acted to conceal failure from the
board of directors and others who might have assisted in addressing the problems more
effectively.15 Likewise, a study of the failure of T. Eaton Co. Ltd. (once Canada’s largest retailer and
the world’s largest privately held department store chain) concluded that the company’s inability to
identify a series of failures in a timely manner contributed to the company’s demise.16
By contrast, the CEO of a mechanical contractor recognized the value of exposing failure and
publicizing it in order to help employees learn from each other and not repeat the same mistake.
The CEO:

pulled a $450 ‘mistake’ out of the company’s dumpster, mounted it on a plaque, and named it the
‘no-nuts award’ – for the missing parts. A presentation ceremony followed at the company barbecue.
‘You can bet no one makes that mistake any more,’ the CEO says. ‘The winner, who was initially
embarrassed, now takes pride in the fact that his mistake has saved this company a lot of money.’17


Examples of systematically identifying failures


Overcoming the psychological barriers to identifying failure requires courage to face the unpleasant
truth. But the key organizational barrier to identifying failure has mostly to do with the
inaccessibility of the data necessary to identify failures. To overcome this barrier, organizational
leaders must take the initiative to develop systems and procedures that make available the data
necessary to identify and learn from failure. To illustrate, Dr. Kim Adcock of Kaiser Permanente
proactively collected and organized data to identify failure of physicians in reading mammograms.
Due to inherent difficulties in reading mammograms accurately, the medical profession has come
to expect a 10-15% error rate, even among expert readers. Consequently, discovering that a reader
has missed one or even several tumors is not necessarily indicative of that reader’s diagnostic ability
and may not provide much incentive for learning from failure. By contrast, when Dr. Adcock
became radiology chief at Kaiser Permanente Colorado, he utilized the longitudinal data available
in the HMO’s records to proactively identify failure and produce detailed, systematic feedback
including bar charts and graphs for each individual x-ray reader. For the first time, each reader
could learn whether he or she was falling near or outside of the acceptable range of errors. Dr.
Adcock also provided readers with the opportunity to return to the misread x-rays so they could
investigate why they missed a particular tumor and learn not to make the same mistake again.18
On a larger scale, Electricité de France, which operates 57 nuclear power plants, provides an
example of identifying and learning from potential failures. The organization tracks each plant for
anything even slightly out of the ordinary and has a policy of quickly investigating and publicly
reporting any anomalies throughout the entire system so that the whole system can learn.19



Feedback seeking is also an effective way of identifying many types of failures. Feedback from
customers, employees and other sources can expose failures, including communication breakdowns
as well as failure to meet goals or satisfy customer requirements. Proactively seeking feedback from
customers may be necessary in order for manufacturers and service providers to identify and
address failures in a timely manner.
For example, only five to ten percent of dissatisfied customers choose to complain following
service failure; instead, most simply switch providers. This is one of the reasons service companies
fail to learn from failures and therefore lose customers. Service management researchers Tax and
Brown cite General Electric and United Parcel Service as two organizations that proactively seek
data that will help them identify failures. General Electric (GE) places an 800 number directly on
each of its products and encourages customers to inform the company of any problems. GE has an
Answer Center that is open twenty-four hours a day, 365 days a year, receiving approximately 3
million customer calls a year.20
United Parcel Service (UPS) provides an example of how to seek feedback from within the
company. The company has built in a half hour per week to the schedule of each of its drivers for
receiving their feedback and answering questions. These simple techniques exemplify methods of
identifying failure in a timely way so that the organizations can learn, respond quickly, and retain
customers.
At the same time, these techniques are not easy to implement. Employees, consciously and not,
may actively avoid opportunities to expose and learn about their failures. Effective identification of
failure entails exposing failures as early as possible, to allow learning in an efficient and cost effective
way. This often requires a proactive effort on the part of managers to surface available data on
failures and use it in a way that promotes learning.

Failing to identify failures


A recent tragic example of the consequences of delayed and minimized identification of failure can
be found in the Columbia disaster. As discussed in the Columbia Accident Investigation Board’s
report, NASA managers spent 16 days downplaying the possibility that foam strikes on the left side
of the shuttle represented a serious problem – a true failure – and so did not view the events as
a trigger for conducting detailed analyses of the situation. Instead the strikes were deemed ordinary
events, within the boundaries of past experience, an interpretation that would later seem absurd
given the large size of the debris. The shared belief that there was little they could have done
contributed to a lack of proactive analysis and exploration of possible remedies. Sadly, post-event
analyses have suggested the possibility that fruitful actions could have been taken had the failure
been identified and explored early in this window of opportunity.21
Because psychological and organizational factors conspire to reduce failure identification,
a fundamental reorientation in which individuals and groups are motivated to engage in the
emotionally challenging task of seeking out failures is needed. Obviously, organizations that have
a habit of ‘shooting the messenger’ who identifies and reveals a failure will discourage this process.


Cultures that promote failure identification


Creating an environment in which people have an incentive – or at least do not have
a disincentive – to identify and reveal failures is the job of leadership.22 For example, the
Children’s Hospital in Minneapolis developed a ‘blameless reporting’ system to encourage
employees not only to reveal medical errors right away, but also to share additional information



that could be used in analyzing causes of the error.23 Similarly, the US Air Force specifically
motivates speaking up early by penalizing pilots for not reporting errors within 24 hours. Errors
reported immediately are not penalized; those not reported but discovered later are treated severely.
In sum, pervasive social barriers, psychological and organizational, discourage reporting failure,
just as technical barriers such as system complexity and causal ambiguity inhibit recognizing failure.

Analyzing failure
It hardly needs to be said that organizations cannot learn from failures if people do not discuss and
analyze them. Yet this remains an important insight. The learning that is potentially available may
not be realized unless thoughtful analysis and discussion of failure occurs. For example, for Kaiser’s
Dr. Adcock, it is not enough just to know that a particular physician is making more than the
acceptable number of errors. Unless deeper analysis of the nature of the radiologists’ errors is
conducted, it is difficult to learn what needs to be corrected. On a larger scale, the US Army is
known for conducting After Action Reviews that enable participants to analyze, discuss and learn
from both the successes and failures of a variety of military initiatives. Similarly, hospitals use
‘Morbidity and Mortality’ (M&M) conferences (in which physicians convene to discuss significant
mistakes or unexpected deaths) as a forum for identifying, discussing and learning from failures.
This analysis can only be effective if people speak up openly about what they know and if others
listen, enabling a new understanding of what happened to emerge in the assembled group. Many of
these vehicles for analysis only address substantial failures, however, rather than identifying and
learning from smaller ones.
An example of effective analysis of failure is found in the meticulous and painstaking analysis that
goes into understanding the crash of an airliner. Hundreds of hours may go into gathering and
analyzing data to sort out exactly what happened and what can be learned. Compare this kind of
analysis to what takes place in most organizations after a failure.
As noted above, social systems tend to discourage this kind of analysis. First, individuals
experience negative emotions when examining their own failures and this can chip away at self-
confidence and self-esteem. Most people prefer to put past mistakes behind them rather than revisit
and unpack them for greater understanding.
Second, conducting an analysis of a failure requires a spirit of inquiry and openness, patience and
a tolerance for ambiguity. However, most managers admire and are rewarded for decisiveness,
efficiency and action rather than for deep reflection and painstaking analysis.
Third, psychologists have spent decades documenting heuristics and psychological biases and
errors that reduce the accuracy of human perception, sense making, estimation, and attribution.24
These can hinder the human ability to analyze failure effectively.
People tend to be more comfortable attending to evidence that enables them to believe what they
want to believe, denying responsibility for failures, and attributing the problem to others or to ‘the
system’. We would prefer to move on to something more pleasant. Rigorous analysis of failure
requires that people, at least temporarily, put aside these tendencies to explore unpleasant truths
and take personal responsibility. Evidence of this problem is provided by a study of a large
European telecoms company, which revealed that very little learning occurred from a set of large
and small failures over a period of twenty years. Instead of realistic and thorough analysis, managers
tended to offer ready rationalizations for the failures. Specifically, managers attributed large failures
to uncontrollable events outside the organization (e.g., the economy) and to the intervention of
outsiders. Small failures were interpreted as flukes, the natural outcomes of experimentation, or as
illustrations of the folly of not adhering strictly to the company’s core beliefs.25
Similarly, we have observed failed consulting relationships in our field research in which the
consultants simply blamed the failure on the client, concluding that the client was not
really committed to change, or that the client was defensive or difficult. By contrast, a few highly
learning-oriented consultants were able to engage in discussion and analysis that involved raising
questions about how they themselves contributed to the problem. In these analytic sessions, the



consultants raised questions such as ‘Are there things I said or did that contributed to the defensiveness
of the client?’, ‘Was my presentation of ideas and arguments clear and persuasive?’ or ‘Did my analysis
fall short in some way that led the client to have legitimate doubts?’ Raising such questions increases
the chances of the consultants learning something useful from the failed relationship, but requires
profound personal curiosity to learn what the answers might be. Blaming the client is much
simpler, more comfortable and more common.26

Recent research in the hospital setting by Tucker and Edmondson shows that health care
organizations typically fail to analyze or make changes even when people are well aware of failures.
Whether medical errors or simply problems in the work process, few hospital organizations dig
deeply enough to understand and capture the potential learning from failures. Processes, resources,
and incentives to bring multiple perspectives and multiple minds together to carefully analyze what
went wrong and how to prevent the occurrence of similar failures in the future are lacking in most
organizations.
Thus formal processes or forums for discussing, analyzing and applying the lessons of failure
elsewhere in the organization are needed to ensure that effective analysis and learning from failure
occurs. Such groups are most effective when people have technical skills, expertise in analysis, and
diverse views, allowing them to brainstorm and explore different interpretations of a failure’s causes
and consequences. Because this usually involves the potential for conflict that can escalate, people
skilled in interpersonal or group process, or expert outside facilitators, can help keep the process
productive.
Next, skill in managing a group process of analyzing a failure with a spirit of inquiry, along with
sufficient understanding of the scientific method, is an essential input to learning from failure as an
organization. Without a structure of rigorous analysis and deep probing, individuals tend to leap
prematurely to unfounded conclusions and misunderstand complicated problems. Some
understanding of system dynamics, the ability to see patterns, statistical process controls, and
group dynamics can be very helpful.27 To illustrate how this works in real organizations, we review
a few case study examples below.

Examples of systematically analyzing failure


Edmondson et al. report how Julie Morath, the Chief Operating Officer at the Minneapolis
Children’s Hospital, implemented processes and forums for the effective analysis of failures, both
large and small. She bolstered her own technical knowledge of how to probe more deeply into the
causes of failure in hospitals by attending the Executive Sessions on Medical Errors and Patient
Safety at Harvard University, which emphasized that, rather than being the fault of a single
individual, medical errors tend to have multiple, systemic causes. In addition, she made structural
changes within the organization to create a context in which failure could be identified, analyzed
and learned from.
To create a forum for learning from failure, Morath developed a Patient Safety Steering
Committee (PSSC). Not only was the PSSC proactive in seeking to identify failures, it ensured that
all failures were subject to analysis so that learning could take place. For example, the PSSC
determined that ‘Focused Event Studies’ would be conducted not only after serious medical
accidents but even after much smaller scale errors or ‘near misses.’ These formal studies were
forums designed explicitly for the purpose of learning from mistakes by probing deeply into their
causes. In addition, cross-functional teams, known as ‘Safety Action Teams’ spontaneously formed



in certain clinical areas to understand better how failures occurred, thereby proactively improving
medical safety. One clinical group developed something they called a ‘Good Catch Log’ to record
information that might be useful in better understanding and reducing medical errors. Other teams
in the hospital quickly followed their example, finding the idea compelling and practical.
In the pharmaceutical industry, about 90 percent of newly developed drugs fail in the
experimental stage, and thus drug companies have plenty of opportunities to analyze failure. Firms
that are creative in analyzing failure benefit in two ways. First, analyzing a failed drug sometimes
reveals that the drug may have a viable alternate use. For example, Pfizer’s Viagra was originally
designed to be a treatment for angina, a painful heart condition. Similarly, Eli Lilly discovered that
a failed contraceptive drug could treat osteoporosis and thus developed their one-billion-dollar-a-year
drug, Evista, while Strattera, a failed antidepressant, was discovered to be an effective treatment
for hyperactivity/attention deficit disorder.
Second, a deep probing analysis can sometimes save an apparently failed drug for its original
purposes, as is seen in the case of Eli Lilly’s Alimta. After this experimental chemotherapy drug
failed clinical trials, the company was ready to give up. The doctor conducting the failed Alimta
trials, however, decided to dig more deeply into the failure – utilizing a mathematician whose job at
Lilly was explicitly to investigate failures. Together they discovered that the patients who suffered
negative effects from Alimta typically had a deficiency in folic acid. Further investigation
demonstrated that simply giving patients folic acid along with Alimta solved the problem, thereby
rescuing a drug that the organization was ready to discard.28
Failure analysis can reach beyond the company walls to include customers. Systematic analysis of
small failures in the form of customer breakdowns was instituted at Xerox using a network-based
system called Eureka. By capturing and sharing 30,000 repair tips, Xerox saves an estimated $100
million a year through service operations efficiencies. The Eureka analysis also provides important
information for new product design.29
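
The capture-and-share mechanics behind such a system can be sketched in a few lines. The toy version below is our own illustration; the data model and keyword matching are assumptions, not a description of how Eureka actually works:

```python
from collections import defaultdict

tips = defaultdict(list)  # symptom keyword -> repair tips filed under it

def add_tip(keywords, tip):
    """Capture a technician's tip under each symptom keyword."""
    for kw in keywords:
        tips[kw.lower()].append(tip)

def search(symptom_words):
    """Return every distinct tip filed under any of the given symptoms."""
    found = []
    for w in symptom_words:
        for tip in tips.get(w.lower(), []):
            if tip not in found:
                found.append(tip)
    return found

add_tip(["jam", "feeder"], "Worn feed roller on model X: replace part 12-B.")
add_tip(["streaks"], "Vertical streaks: clean the charge wire before swapping the drum.")

print(search(["feeder", "jam"]))  # one technician's fix, reusable by all
```

The value comes less from the retrieval logic than from the capture habit: each small repair failure, once logged, stops being a private lesson.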

Analyzing employee and customer defections to capture the lessons


To help build an organization’s ability to analyze its own failures, outside sources of technical
assistance in analyzing failure can be engaged. For example, Frederick Reichheld at Bain and
Company has demonstrated the value of a deep, probing analysis of failure in the areas of customer
and employee defections. In one instance, the fact that most customers who defected from
a particular bank gave ‘interest rates’ as the reason for switching banks seemed to suggest that their
original bank’s interest rates were not competitive. However, his additional investigation
demonstrated that there were no significant differences in interest rates across the banks. Careful
probing through interviews indicated that many customers defected because they were irritated by
the fact that they had been aggressively solicited for a bank-provided credit card, and then had their
applications turned down. A superficial analysis of customer defection would have led to the
conclusion that the bank’s interest rates were not competitive. A deeper analysis led to an alternate
conclusion: the bank’s marketing department needed to do a better job of screening in advance the
customers to whom it promoted such cards.
The importance of such analysis is also illustrated by employee turnover at another company, where
managers became concerned when they observed high turnover among salespeople and conducted an
investigation. Many of the employees gave ‘working too many hours’ as the reason for their
defection. Initially, it appeared that the turnover may not have been such a bad thing – after all,
who needs employees who are not committed to working hard? However, further data collection
revealed that many of the employees who quit were among their most successful salespeople, and
had subsequently found jobs that required, on average, 20 percent fewer hours. Once again, deeper
probing and analysis yielded a truer understanding of the situation.30

Benefits of analyzing failure


In addition to the technical aspects of systematic analysis, discussing failures has important social
and organizational benefits. First, discussion provides an opportunity for others who may not have



been directly involved in the failure to learn from it. Second, others may bring new perspectives and
insights that deepen the analysis and help to counteract self-serving biases that may color the
perceptions of those most directly involved in the failure. After experiencing failure, people typically
attribute too much blame to other people and to forces beyond their control. If this tendency goes
unchecked, it reduces an organization’s ability to mine the key learning that could come from the
experience.

Lastly, the value of the learning that might result from analyzing and discussing simple mistakes
is often overlooked. Many scientific discoveries have resulted from those who were attentive to
simple mistakes in the lab. For example, researchers in one of the early German polymer labs
occasionally made the mistake of leaving a Bunsen burner lit over the weekend. Upon discovering
this mistake on Monday mornings, the chemists simply discarded the overcooked results and went
on with their day. Ten years later, a chemist in a polymer lab at DuPont made the same mistake.
However, rather than simply discarding the mistake, the DuPont chemist gave the result some
analysis and discovered that the fibers had congealed. This discovery was the first step toward the
invention of nylon. Had the German lab paid similar attention to its minor failure, it might have
had a decade’s head start on nylon, potentially dominating the market for years.31
These first two sections have dealt with inadvertent failures. If a firm can identify and analyze such
failures, and then learn from them, it may be able to retrieve some value from what has otherwise been
a negative ‘result’. But failure need not always be considered from a ‘defensive’ viewpoint. Our third
section describes an ‘offensive’ approach to learning from failure – deliberate experimentation. The
three activities presented in this article – identifying failure, analyzing failure and deliberate
experimentation – are not intended to be viewed as a sequential three-step process, but rather as
(reasonably) independent competencies for learning from failure. They can be sensibly examined
alongside each other, since each is easily inhibited by social and technical factors.

Deliberate experimentation
The third active process used by organizations to learn from failure is the most provocative. A
handful of exceptional organizations not only seek to identify and analyze failures, they actively
increase their chances of experiencing failure by experimenting. They recognize failure as
a necessary by-product of true experimentation, that is, experiments carried out for the express
purpose of learning and innovating. By devoting some portion of their energy to trying new things,
to find out what might work and what will not, firms certainly run the risk of increasing the
frequency of failure. But they also open up the possibility of generating novel solutions to problems
and new ideas for products, services and innovations. In this way, new ideas are put to the test, but
in a controlled context.
Experiments are understood to have uncertain outcomes and to be designed for learning. Despite
the increased rate of failure that accompanies deliberate experimentation, organizations that
experiment effectively are likely to be more innovative, productive and successful than those that do
not take such risks.32 Similarly, other research has confirmed that those research and development
teams that experimented frequently performed better than other teams.33
Social systems can make deliberate experimentation difficult because most organizations reward
success, not failure. Purposefully setting out to experiment – thus generating and accepting some
failures alongside some successes – although reasonable, is difficult in a general business culture
where failures are stigmatized. Conducting experiments also involves acknowledging that the status
quo is imperfect and could benefit from change. A psychological bias known as the confirmation



trap – the tendency to seek to confirm one’s views rather than to learn why they might be
wrong – makes planning ventures that could produce learning, but that might very well fail,
particularly difficult.34 Deliberate experimentation requires that people not just assume their views
are correct, but actually put their ideas to the test and design (even small, very informal)
experiments in which their views could be disconfirmed.

Examples of effective experimentation


A good example of the ability to overcome these psychological barriers is provided by the
influential, award-winning design firm, IDEO. They communicate this perspective with slogans
such as ‘Fail often in order to succeed sooner’ and ‘Enlightened trial-and-error succeeds over the
planning of the lone genius.’35 These sayings are accompanied by frequent small experiments, and
much good humor about the associated failures.36
Similarly, PSS/World Medical encourages experimentation in a variety of ways and sometimes
even goes so far as to encourage employees to experiment with career moves. PSS/World Medical
has a ‘soft landing’ policy: if an employee tries out a new position, but does not succeed after a good
faith effort, the employee can have his or her former job back. This ‘soft landing’ policy is an
implicit recognition that experiments have uncertain outcomes and that people will be more willing
to experiment if the organization protects their interests.37
Technical skills are critical in implementing a deliberate experimentation process. First, because
analyzing failure is part of this process, key individuals need skills in analyzing the results of
experiments. Second, rigor is needed to design experiments that will effectively confirm
or disconfirm hypotheses and generate useful learning. Under some conditions, this can be
extremely challenging. For example, customer satisfaction at a large resort will be affected by
many interdependent aspects of the customer’s experience. If the resort experiments with
different possible innovations to enhance customer satisfaction, how do they determine their
impact?
Designing experiments in complex, interdependent systems is challenging even for research
experts. In addition to knowledge of experimental design and analysis, people need resources to run
experiments in different parts of the organization and to capture the learning.
The 3M Corporation has been unusually successful in providing incentives and policies that
encourage deliberate experimentation. The company has earned a reputation for successful product
innovation by encouraging deliberate experimentation and by cultivating a culture that is tolerant
and even rewarding of failures; failures at 3M are seen as a necessary step in a larger process of
developing successful, innovative products. Now-legendary stories, such as that of Arthur Fry and the
failed super-adhesive that spawned the Post-it industry, are spread far and wide, both within and outside
the company. Setting goals, such as having 25 percent of a division’s revenues come from products
introduced within the last five years, means that divisions must continuously experiment to develop
new products.
Bank of America provides an interesting example of experimentation in the service setting.
Seeking to become more innovative, senior management decided to go ahead with deliberate,
intelligent experimentation in the branches – experiments that would inevitably affect and often be
visible to customers. Wishing to become an industry leader in innovation, the bank established
a program to develop a process and culture of innovation in two dozen real-life ‘laboratories’: fully
operating banking branches in which new product and service concepts, such as virtual tellers, were
being tested by employees (and customers).
Senior executives addressed organizational barriers by funding and developing an ‘Innovation &
Development Team’ to manage this process. A successful program entailed hiring individuals with
the technical research skills to address a number of complicated questions, such as: how to gauge
success of a concept; how to prioritize which concepts would be tested; how to run several
experiments at once; and how to keep the novelty factor itself from altering the experimental
outcome. Successful experiments, determined on the basis of consumer satisfaction or revenue
growth, were then recommended for a national rollout.38



Senior management strongly supported innovation and experimentation at these branches. For
example, they recognized that trying out innovative ideas would necessarily produce failures along
the way, so they targeted a failure rate of 30% as one that would indicate sufficient attempts at truly
novel ideas were being made. However, employee rewards were primarily based on indices
measuring routine performance (such as opening new customer accounts), and their personal
compensation often suffered when they spent time experimenting with new ideas, or when their
experiments failed. As a result, employees were reluctant to try out radical experiments until
management made changes to align reward systems with the organization’s espoused value of
innovation.

Factors promoting deliberate experimentation


Experimental research in social psychology by Lee et al. confirms this point: espoused goals of
increasing innovation through experimentation are not as effective when rewards penalize failures
as when rewards and values are aligned with the goal of promoting experimentation. As both field
and laboratory examples show, although experimentation is an essential activity underlying
innovation, it is both technically and socially challenging to implement intelligently. One of the
advantages of most forms of experimentation is that failures can take place off-line – in dry runs,
simulations, and other kinds of practice situations in which the failures are not costly. However,
even in these situations, interpersonal fears can lead to reluctance to take risks, limiting the
effectiveness of the experiments.39 Moreover, some experiments must take place on-line, in real
settings, in which customers interact directly with the failures.

Putting failure to work to innovate and improve


Our basic premise is that, although the barriers to a systematic process of learning from failure in
organizations are deep-rooted and numerous, by breaking this process down into component
activities, organizations can slowly but surely improve their track record of learning from their own
failures. While catastrophic failures will always, and rightly, command attention, we suggest that
focusing on the learning opportunities of small failures can allow organizations and their managers
to minimize the inherently threatening nature of failure to gain experience and momentum in this
learning process.
The previous section analyzed the technical and social barriers to engaging in learning-from-
failure activities; this section builds on that analysis to develop a framework exploring what
organizations can do to overcome these barriers.
Table 1 summarizes this advice, relating the two types of barriers to the three critical activities for
learning from failure to suggest six actionable recommendations. The upper section lists the
technical system barriers to learning activities, and offers recommendations emphasizing training,
education, and the judicious use of technical expertise, while the lower section lists the social system
barriers, and presents recommendations for building psychological and organizational capabilities
for identifying failure, analyzing failure, and experimentation.

Recommendations on technical barriers


To overcome technical barriers, we first recommend helping employees to see that identifying
failure requires a proactive and skillful search, that human intuition is often insufficient to extract



the key learning from failure, and that intelligent experimental design is a critical tool for
innovation and learning. With this basic understanding, employees are better able to recognize
when they either need to receive more specialized training themselves or to engage the assistance of
someone else who has benefited from such training.

Table 1. A Framework for Enabling Organizational Learning from Failure

Key processes in organizational learning from failure: identifying failures, analyzing failures, and experimentation.

Barriers embedded in technical systems:
- Identifying failures: complex systems make many small failures ambiguous.
- Analyzing failures: a lack of skills and techniques to extract lessons from failures.
- Experimentation: lack of knowledge of experimental design.

Recommendations (technical):
- R1 (identifying): build information systems to capture and organize data, enabling detection of anomalies, and ensure availability of systems analysis expertise.
- R2 (analyzing): structure After Action Reviews or other formal sessions that follow specific guidelines for effective analysis of failures, and ensure availability of data analysis expertise.
- R3 (experimentation): identify key individuals for training in experimental design; use them as internal consultants to advise pilot projects and other line (operational) experiments.

Barriers embedded in social systems:
- Identifying failures: threats to self-esteem inhibit recognition of one’s own failures, and corporate cultures that ‘shoot the messenger’ limit reporting of failures.
- Analyzing failures: ineffective group process limits the effectiveness of failure analysis discussions; individuals lack efficacy for handling ‘hot’ issues.
- Experimentation: organizations may penalize failed experiments, inhibiting willingness to incur failure for the sake of learning.

Recommendations (social):
- R4 (identifying): reinforce psychological safety through organizational policies such as blameless reporting systems, through training first-line managers in coaching skills, and by publicizing failures as a means of learning.
- R5 (analyzing): ensure availability of experts in group dialogue and collaborative learning, and invest in developing these competencies in other employees.
- R6 (experimentation): pick key areas of operations in which to conduct an experiment, and publicize results, positive and negative, widely within the company (the Bank of America example). Set a target failure rate for experiments in service of innovation, and make sure reward systems do not contradict this goal.

Recommendation 1: Overcoming technical barriers to identifying failure


Organizations are complex systems, often making small (and sometimes even large) failures difficult
to detect. Failures, as noted above, are deviations from the expected and desired; if a system has
many complex parts and interactions, such deviations can be ambiguous. The Columbia shuttle’s
initial failure exemplifies this phenomenon; it was not clear to those involved until much later that
the foam strike should indeed be identified as a failure.
The Columbia example also highlights the erroneous level of confidence people have in their
initial interpretations that nothing is really wrong. Enhancing an individual’s ability to identify
(especially small) failures requires training. For example, training in statistical process control (SPC)
is useful for identifying failure on an assembly line. Without SPC, people are at a disadvantage



in discovering whether variation indicates that something is really wrong – a signal – or
whether it is just natural ‘noise’ in a process that is under control. Similarly, employees in
complicated and interdependent organizations will benefit from training in systems thinking and
scientific analysis. This enhances their ability to identify failure and pinpoint its source – and
especially to realize the critical role of small failures in creating large consequences in complex
systems.
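
To illustrate the signal-versus-noise distinction that SPC formalizes, here is a minimal sketch of a Shewhart-style 3-sigma control chart. The measurements are invented, and real SPC practice layers further run rules on top of this basic test:

```python
from statistics import mean, stdev

# Baseline samples from a period when the process was known to be in control.
history = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.7, 10.3, 10.0]
center, sigma = mean(history), stdev(history)
upper, lower = center + 3 * sigma, center - 3 * sigma  # 3-sigma control limits

# New measurements: anything outside the limits is a signal, not noise.
for i, x in enumerate([10.1, 9.9, 10.8, 10.0], start=1):
    verdict = "SIGNAL: investigate" if not (lower <= x <= upper) else "noise (in control)"
    print(f"sample {i}: {x:5.1f} -> {verdict}")
```

Only the third sample breaches the limits; the others are ordinary variation that an untrained eye might just as easily have flagged, or missed.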
Some failures can only be discerned when sufficient data are compiled and reviewed, as we
saw in the Kaiser mammogram example, in which higher-than-average error rates for an individual
physician constituted a failure, although single errors were not considered evidence of failure.
Thus failure identification may be enabled by effective information systems that facilitate collection
and analysis of otherwise dispersed experiences. Recall also that Electricité de France developed
a system to detect anomalies and feed the information back to operators. Likewise, GE’s 800
number and its prominent placement on products create the opportunity for data to be generated
by consumers and collected by GE, and reviewed when sufficient data exist for meaningful analysis.
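
A minimal sketch of this kind of aggregation follows. Reader names and counts are invented; only the 10-15% expected-error band echoes the figure quoted earlier:

```python
# reader -> (films read, confirmed missed tumors); all values illustrative
reads = {
    "reader_A": (1200, 130),
    "reader_B": (950, 110),
    "reader_C": (1100, 230),
}

EXPECTED_LOW, EXPECTED_HIGH = 0.10, 0.15  # profession-wide error band

for reader, (films, misses) in reads.items():
    rate = misses / films
    if rate > EXPECTED_HIGH:
        note = "outside expected range: pull the misread films for review"
    elif rate < EXPECTED_LOW:
        note = "better than expected"
    else:
        note = "within expected range"
    print(f"{reader}: {rate:.1%} ({misses}/{films}) - {note}")
```

No single miss identifies a failure here; only the compiled rate per reader does, which is precisely why the data had to be gathered longitudinally before the failure became visible.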

Recommendation 2: Overcoming technical barriers to analyzing failure


Most people tend not to recognize that they lack complete information or that their analysis is not
rigorous, and thus leap quickly to questionable conclusions while remaining confident that they are
correct. This tendency inhibits the extraction of the right lessons from failure. Figuring out which
aspects of a situation contributed to something that did not go as expected is
a complex undertaking. For this reason, organizations need individuals with the skills and techniques
for systematic analysis of complex organizational data. At the Minneapolis Children's Hospital, the
patient safety effort included considerable care to ensure that the appropriate technical skills were
in place to draw the right lessons from each mishap – whether large or small.
Overcoming technical barriers does not demand that each and every employee have the relevant
technical skills. The judicious use of a few well-placed technical experts and systems thinkers may be
enough to trigger more reliable identification of failure. At Children's Hospital, safety experts were
brought in to help the hospital identify latent failures, and a skilled facilitator ran every meeting
convened to analyze a given failure. The mathematician whom Eli Lilly hired to help
understand failures provides another example of how technical expertise can be built into the
organizational structure.
In contrast, following the Columbia launch, a simulation program designed by Boeing was used
to analyze the potential threat the foam strike posed to the mission. However, the technology had
several shortcomings for the task. The tool had not been calibrated for foam pieces greater than
three cubic inches in size – but the foam piece that struck the Columbia was 400 times bigger!
Further, the computer model simulated damage to a particular kind of tile – which was not the
type of tile on the area struck on the leading edge of the Shuttle's wing. Had the right technical
experts, with the right tool, been able to work on the analysis, the outcome of Columbia's final flight
might have been different.40

Recommendation 3: Overcoming technical barriers to effective experimentation


To produce valuable learning, experiments must be designed effectively. However, even PhD
laboratory researchers with years of experience can struggle to get an experimental design just
right. In addition, most organizational settings have only limited ability to isolate variables and
reduce ‘noise’, which makes designing experiments for organizational learning challenging. At its
most basic, designing experiments for learning requires careful thought as to what kinds of data
will be collected and how results of the experiment will be assessed. For example, Bank of
America examined financial and customer satisfaction metrics at its experimental bank
branches. The key is to consider possible outcomes in advance and know how they might be
interpreted.
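
As a concrete, and entirely hypothetical, sketch of this discipline, the snippet below fixes the metrics and decision rules before any data are examined, then evaluates experimental units against controls. It illustrates the general practice of deciding in advance how outcomes will be interpreted; it is not drawn from Bank of America's actual methodology, and all metric names and thresholds are assumptions.

    # Hypothetical sketch: specify metrics and interpretation rules *before*
    # the experiment runs, then evaluate experimental vs. control units.
    from statistics import mean

    # Decision rules fixed in advance: metric -> minimum lift over control
    # required for the result to support a wider rollout.
    decision_rules = {"customer_satisfaction": 0.05, "revenue_index": 0.02}

    # Invented observations from experimental and control branches.
    results = {
        "customer_satisfaction": {"experiment": [4.4, 4.6, 4.5], "control": [4.1, 4.2, 4.0]},
        "revenue_index": {"experiment": [1.01, 0.98, 1.00], "control": [1.00, 1.02, 0.99]},
    }

    for metric, min_lift in decision_rules.items():
        exp = mean(results[metric]["experiment"])
        ctl = mean(results[metric]["control"])
        lift = (exp - ctl) / ctl
        verdict = "supports rollout" if lift >= min_lift else "does not support rollout"
        print(f"{metric}: lift {lift:+.1%} vs. threshold {min_lift:+.0%} -> {verdict}")

Because the thresholds were set in advance, a mixed outcome like this one (satisfaction up, revenue flat) yields a clear interpretation rather than a post hoc argument about what the experiment 'really' showed.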



To produce valuable learning, experiments must be designed effectively.
The key is to consider in advance how all possible outcomes might be
interpreted
Again, it is not necessary to make all employees experts in experimental methodology; it is more
important to know when help is needed from (internal or external) experts with sophisticated skills.
Organizations can overcome this barrier by hiring and supporting a few well-placed experts and
making their availability known to others. Thus, Bank of America handled this problem smartly by
developing the Innovation and Development Team and staffing it with experts who understood the
vulnerabilities associated with conducting research experiments in a real-world setting and how to
work around them.

Recommendations on social barriers


In addition to implementing the above recommendations to manage technical barriers, managers
must also deal with barriers due to social systems that are more subtle, pervasive, and difficult to
address. Even without explicit incentives against failure, many organizations have norms and
practices that are unfriendly to experimentation as well as to identifying, analyzing and learning
from failure. The next three recommendations tackle these issues directly.

Recommendation 4: Overcoming social barriers to identifying failure


To promote timely identification of failure, organizations must avoid ‘shooting the messenger’ and
instead put in place constructive incentives for speaking up. People must feel able to talk about the
failures of which they are aware, whether clear or ambiguous. To do this, leaders need to
cultivate an atmosphere of psychological safety that mitigates the risks to self-esteem and to
others' impressions of them. Developing psychological safety begins with leaders modeling the desired behaviors,
visibly demonstrating how they wish subordinates and peers to behave.
Leader modeling serves two significant purposes. First, to communicate expected and
appropriate behavior, it is important for leaders to ‘walk the talk.’ Second, leader modeling can
help subordinates learn how to enact these processes. Because these behaviors may be unfamiliar in
many organizations, having a model to observe can be very helpful in facilitating subordinate
learning. Leaders can model effectively by generating new ideas, disclosing and analyzing failure,
inviting constructive criticism and alternative explanations, and capturing and then utilizing
learning.41
Finally, psychological safety cannot be implemented by top-down command. Instead, it is created
work group by work group through the attitudes and activities of local managers, supervisors and
peers. As the second author has found in previous research, the development of managerial
coaching skills is one way to help build this type of learning environment. In addition,
organizational policies can either support or undermine the development of psychological safety.
For example, an organization-wide ‘blameless’ system for reporting errors, as used by Children’s
Hospital in Minneapolis, sends a cultural signal that it is truly safe to identify and reveal failures.
Perhaps one of the most difficult aspects of analyzing failure pertains to interpersonal dynamics,
the focus of our next recommendation.

Recommendation 5: Overcoming social barriers to analyzing failure


Developing an environment in which people feel safe enough to identify failures and speak up is
necessary for ensuring that failures surface, but it is insufficient to produce learning from them.
Effective analysis of failure requires time and space, as well as skill in managing the
conflicting perspectives that may emerge. Some organizations provide such time and space: the
military uses 'After Action Reviews' and hospitals hold 'Mortality and Morbidity' conferences to analyze
failures.
In addition to putting such structures in place, leaders need to involve people with diverse
perspectives and skills in order to generate deeper learning. While introducing such diversity
inevitably produces some tension and conflict, these individuals' varied experience can help keep the dialogue
learning-oriented. Decades of research by organizational learning pioneer Chris Argyris have
demonstrated that people in disagreement rarely ask each other the kind of sincere questions that
are necessary for them to learn from each other. People often try to force their views on the other
party, rather than seeking to educate them by explaining the underlying reasoning behind their
views.42

people in disagreement rarely ask each other the kind of sincere


questions necessary for them to learn from each other
For example, during the teleconference the night before the space shuttle Challenger was
launched, engineers and administrators both proved incapable of having the kind of discussion
that could lead each side to understand the other's concerns. Rather than trying to explain what
they saw in their (incomplete) data to educate the administrators and fill in the gaps in their
understanding, the engineers made abstract statements such as ‘It is away from goodness to make any
other recommendation’ and ‘It’s clear, it’s absolutely clear.’ In turn, the administrators did not
communicate their own concerns and questions thoughtfully, but instead contributed to an
increasingly polarized discussion in which the engineers’ competencies were impugned. Eventually,
the individuals with the most power e NASA senior managers e made the decision.43
Thus, we recommend either developing or hiring skilled facilitators who can ensure that
learning-oriented discussions take place when analyzing organizational failures. Managers can be
trained to test assumptions, inquire into others' views, and present their own views (no matter how
correct or thorough those views may seem) as incomplete – or partial – accounts of reality. These
interpersonal skills can be learned, albeit slowly and with considerable effort, as action research has
demonstrated.44 When managers have such skills, they are able to both model this behavior and
provide active coaching to others to help them be more effective in generating learning from the
heated discussions often produced when failures are analyzed. Finally, even though a little training
may not produce instant skill, it can remind managers of the need to engage in discussions they
might otherwise avoid, as well as giving them some additional confidence.

Recommendation 6: Overcoming social barriers to experimentation


As Lee et al. note, when incentives are inconsistent with espoused values that advocate learning from
failure, true experimentation will be rare. Executives thus need to align incentives and offer
resources to promote and facilitate effective experimentation. Organizational policies such as 3M's
directive that 25 percent of a division's revenues come from products developed in the last five
years, and Bank of America's setting the expected level for failed experiments at 30 percent, can go
a long way toward signaling that the organization values creative experimentation. Promoting
individuals who have invested significant time experimenting with new ideas sends a similar
message.
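
A sketch of how such a target might be monitored in practice follows; the figures and tolerance band below are illustrative assumptions, not a documented policy mechanism.

    # Hypothetical monitor for an experiment portfolio's failure rate. A rate far
    # below target may mean teams attempt only safe, low-learning experiments;
    # a rate far above it may indicate poorly designed experiments.
    target_rate, tolerance = 0.30, 0.10   # assumed policy values

    outcomes = ["fail", "success", "success", "success", "success",
                "success", "success", "success", "success", "success"]
    failure_rate = outcomes.count("fail") / len(outcomes)

    if failure_rate < target_rate - tolerance:
        print(f"Failure rate {failure_rate:.0%} is well below the {target_rate:.0%} target: "
              "experiments may be too conservative to generate much learning")
    elif failure_rate > target_rate + tolerance:
        print(f"Failure rate {failure_rate:.0%} is well above target: review experiment design")
    else:
        print(f"Failure rate {failure_rate:.0%} is within the intended range")

The point is not the arithmetic but the framing: under such a policy, a failure rate near zero is itself a warning sign.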
In addition, leading by example is crucial. Managers who experiment intelligently themselves,
and who publicize both failures and successes, demonstrate the value of these activities and help
others see that the ideal of learning from failure in their organization is more than talk. As an
example, Burton reports how Eli Lilly’s chief science officer introduced ‘failure parties’ to honor
intelligent, high-quality scientific experiments that nonetheless failed to achieve the desired results.
In addition, coaching and clear direction may be useful in helping subordinates understand what
types of experiments should be designed. Finally, to develop the ability to manage all these
processes, managers may need to work on their own psychological and emotional capabilities to
enable them to shift how they think about failure.
The above activities can help organizations to identify problems and opportunities and to learn
and innovate. But employees who engage in learning behaviors must work to ensure that their
bosses, and other parts of the organization, understand and endorse the ‘intelligent failure’ concept.
Further, implementing these recommendations requires time, resources and patience, so engaging
in learning activities while maintaining current operations will require building in some level of
slack. In sum, managers setting out on this course must be realistic in their expectations for learning
from failure.

Reframing failure
The above recommendations are best implemented as an integrated set of practices, accompanied
by an encompassing shift in managerial mindset. Table 2 summarizes this shift.
First, failure must be viewed not as a problematic aberration that should never occur, but rather
as an inevitable aspect of operating in a complex and changing world. This is of course not to say
leaders should encourage people to make mistakes, but rather to acknowledge that failures are
inevitable, and hence that the best thing to do is to learn as much as possible – especially from small
ones – so as to make larger ones less likely. Beliefs about effective performance should reflect this.
This implies holding people accountable, not for avoiding failure, but for failing intelligently, and
for how much they learn from their failures.
Of course, whether a failure will turn out to be intelligent is sometimes not easy to know at
the outset of an experiment. To provide managers with some guidelines, organizational scholar Sim
Sitkin identifies five characteristics of intelligent failures: (1) They result from thoughtfully planned
actions, (2) have uncertain outcomes, (3) are of modest scale, (4) are executed and responded to
with alacrity, and (5) take place in domains that are familiar enough to permit effective learning.45
Managers would also be smart to consider their organization’s current issues related to risk
management as they develop experiments. By considering these criteria in advance, and by
analyzing and learning from previous experiments, managers are able to increase the chances that
their failures will be intelligent.
Examples of unintelligent failure include making the same mistake over and over again, failing
due to carelessness, or conducting a poorly designed experiment that will not produce helpful
learning. In addition, managers need to create an environment in which they and their employees
are open to putting aside their self-protective defenses and responding instead with curiosity and
a desire to learn from failure.

Table 2. Reframing the Traditional Managerial Mindset for Learning

Expectations about failure
  Traditional frame: Failure is not acceptable.
  Learning-oriented reframe: Failure is a natural byproduct of a healthy process of experimentation and learning.

Beliefs about effective performance
  Traditional frame: Involves avoiding failure.
  Learning-oriented reframe: Involves learning from intelligent failure and communicating the lessons broadly in the organization.

Psychological and interpersonal responses to failure
  Traditional frame: Self-protective.
  Learning-oriented reframe: Curiosity, humor, and a belief that being the first to capture learning creates personal and organizational advantage.

Approach to leading
  Traditional frame: Manage day-to-day operations efficiently.
  Learning-oriented reframe: Recognizing the need for spare organizational capacity to learn, grow and adapt for the future.

Managerial focus
  Traditional frame: Control costs.
  Learning-oriented reframe: Promote investment in future success.

Finally, learning to fail intelligently requires leaders to adopt a long-term perspective. Too many
managers take a short-term view that focuses primarily on the efficient control of day-to-day
operations and on managing costs. By contrast, enhancing an organization’s learning ability
requires a perspective that focuses on building its long-term capacity to learn, grow, and adapt for
the future.

Conclusion
This article starts from the observation that few organizations make effective use of failures for
learning due to formidable and deep-rooted barriers. In particular, small failures can be valuable
sources of learning, presenting 'early warning signs'. However, they are often ignored, and thus
their lessons for preventing serious harm are missed. We show that properties of technical
systems combine with properties of social systems in most organizations to make failures’ lessons
especially difficult to glean. At the same time, we highlight noteworthy exceptions – organizations
that have done a superb job of making failures visible, analyzing them systematically, or even
deliberately encouraging intelligent failures as part of thoughtful experimentation.
Organizational learning from failure is thus not impossible but rather is counter-normative and
often counter-intuitive. We suggest that making this process more common requires breaking it
down into essential activities – identifying failure, analyzing failure, and experimenting – in which
individuals and groups can engage. By reviewing examples from a variety of organizations and
industries where failures are being mined and put to good use through these activities, we seek to
demystify the potentially abstract ideal of learning from failure. We offer six actionable
recommendations, and argue that these recommendations are best implemented by reframing
managerial thinking, rather than by treating them as a checklist of separate actions.
In conclusion, leaders can draw on this conceptual foundation as they seize opportunities, craft
skills, and build routines, structures, and incentives to help their organizations enact these learning
processes. At the same time, we do not underestimate the challenge of tackling the psychological
and interpersonal barriers to this organizational learning process. As human beings, we are
socialized to distance ourselves from failures. Reframing failure from something associated with
shame and weakness to something associated with risk, uncertainty and improvement is a critical
first step on the learning journey.

Reframing failure as associated with risk and improvement is


a critical first step on the learning journey

Acknowledgements
We acknowledge the financial support of the Division of Research at Harvard Business School for
the research that gave rise to these ideas. We would also like to thank the LRP editor, as well as the
Special Issue editors and the anonymous reviewers for very helpful feedback.

References
1. M. Cannon and A. C. Edmondson, Confronting failure: Antecedents and consequences of shared beliefs
about failure in organizational work groups, Journal of Organizational Behavior 22, 161–177 (2001).
2. D. Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, University
of Chicago Press, Chicago, IL (1996).


3. S. Sitkin, Learning through failure: the strategy of small losses, in L. L. Cummings and B. Staw (eds.),
Research in Organizational Behavior 14, JAI Press, Greenwich, CT, 231–266 (1992).
4. See especially A. L. Tucker and A. C. Edmondson, Why hospitals don't learn from failures: Organizational
and psychological dynamics that inhibit system change, California Management Review 45(2), 55–72
(2003).
5. For example, see both D. Leonard-Barton, Wellsprings of Knowledge: Building and Sustaining the Sources of
Innovation, Harvard Business School Press, Boston (1995); and S. B. Sitkin (1992) op cit at Ref 3 above.
6. See especially A. C. Edmondson, The local and variegated nature of learning in organizations,
Organization Science 13(2), 128–146 (2002).
7. E. A. Trist and K. W. Bamforth, Some social and psychological consequences of the Longwall method of
coal-getting, in D. S. Pugh (ed.), Organization Theory, Penguin, London, 393–419 (1958); Also see A. K.
Rice, Productivity and Social Organization: The Ahmedabad Experiment, Tavistock, London (1958).
8. For example, see C. Perrow, Normal Accidents: Living with High-Risk Technologies, Basic Books, New York
(1984).
9. R. H. Hayes, S. C. Wheelwright and K. B. Clark, Dynamic Manufacturing: Creating the Learning
Organization, Free Press, New York (1988).
10. D. Goleman, Vital Lies, Simple Truths: The Psychology of Self-Deception, Simon and Schuster, New York
(1985).
11. S. E. Taylor, Positive Illusions: Creative Self-Deception and the Healthy Mind, Basic Books, New York
(1989).
12. C. Argyris, Overcoming Organizational Defenses: Facilitating Organizational Learning, Allyn and Bacon,
Wellesley, MA (1990).
13. S. Finkelstein, Why Smart Executives Fail and What You Can Learn from Their Mistakes, Portfolio,
New York (2003). The quotation is from pp. 179–180.
14. F. Lee, A. Edmondson, S. Thomke and M. Worline, The mixed effects of inconsistency on
experimentation in organizations, Organization Science 15(3), 310–326 (2004).
15. K. Mellahi, The dynamics of boards of directors in failing organizations, Long Range Planning 38(3),
(2005) doi:10.1016/j.lrp.2005.04.001.
16. J. Sheppard and S. Chowdhury, Riding the wrong wave: Organizational failure as a failed turnaround,
Long Range Planning 38(3), (2005) doi:10.1016/j.lrp.2005.03.009.
17. Make no mistake, Inc. Magazine, June (1989), p. 105.
18. M. Moss, Spotting breast cancer, doctors are weak link, The New York Times [late ed.], A1 (27 June 2002);
M. Moss, Mammogram team learns from its errors, The New York Times [late ed.], A1 (28 June 2002).
19. E. C. Nevis, A. J. DiBella and J. M. Gould, Understanding organizations as learning systems, Sloan
Management Review 36, 73–85 (1995).
20. S. W. Brown and S. S. Tax, Recovering and learning from service failures, Sloan Management Review
40(1), 75–89 (1998).
21. NASA report on the Space Shuttle Columbia at http://www.nasa.gov/home/hqnews/2003/oct/
HQ_N03109_new_CAIB_volumes.html.
22. A. Edmondson, Organizing to learn, Harvard Business School Note 9-604-031 (2003).
23. A. Edmondson, M. A. Roberto and A. Tucker, Children's Hospital and Clinics, Harvard Business School
Case 9-302-050 (2002).
24. M. H. Bazerman, Judgment in Managerial Decision Making (fifth edition), John Wiley & Sons, New York
(2002); Also see S. T. Fiske and S. E. Taylor, Social Cognition, Random House, New York (1984).
25. P. Baumard and B. Starbuck, Learning from failures: Why it may not happen, Long Range Planning 38(3),
(2005) doi:10.1016/j.lrp.2005.03.004.
26. Cannon and Edmondson (2001) op cit at Ref 1 above; A. Edmondson and B. Moingeon, When to learn
how and when to learn why, in B. Moingeon and A. Edmondson (eds.), Organizational Learning and
Competitive Advantage, Sage, London (1996).
27. For research on the lack of inquiry in group discussions, see C. Argyris (1990), op cit at Ref 12 above;
D. A. Garvin and M. A. Roberto, What you don't know about making decisions, Harvard Business
Review 79(8), 108–116 (2001); for research on premature convergence on a solution or decision see I. L.
Janis and L. Mann, Decision Making, The Free Press, New York (1977); and E. Langer, Mindfulness,
Addison-Wesley, Reading, MA (1989).
28. T. S. Burton, By learning from failures, Lilly keeps drug pipeline full, The Wall Street Journal (21 April
2004).



29. J. Hagel III and J. S. Brown, Productive friction: How difficult business partnerships can accelerate
innovation, Harvard Business Review February, 83–91 (2005).
30. F. F. Reichheld and T. Teal, The Loyalty Effect: The Hidden Force Behind Growth, Profits, and Lasting Value,
Harvard Business School Press, Boston, 194–195 (1996).
31. P. F. Drucker, Innovation and Entrepreneurship: Practice and Principles, Harper & Row, New York, 43 (1985).
32. S. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation, Harvard
Business School Press, Boston, MA (2003).
33. M. A. Maidique and B. Zirger, A study of success and failure in product innovation: The case of the U.S.
electronics industry, IEEE Transactions on Engineering Management 31(4), 192–204 (1984).
34. P. C. Wason, On the failure to eliminate hypotheses in a conceptual task, Quarterly Journal of
Experimental Psychology 20, 273–283 (1960).
35. T. Kelley and J. Littman, The Art of Innovation: Lessons in Creativity from IDEO, America's Leading Design
Firm, Currency Books, New York, 232 (2001).
36. A. C. Edmondson and L. Feldman, Understand and Innovate at IDEO Boston, Harvard Business School
Publishing, Case 9-604-005 (2004).
37. J. Pfeffer and R. I. Sutton, The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action,
Harvard Business School Press, Boston, 129 (2000).
38. S. Thomke and A. Nimgade, Bank of America, Harvard Business School Case 9-603-022 (2002).
39. P. Senge, The Fifth Discipline: The Art and Practice of the Learning Organization, Doubleday, New York
(1990); A. Edmondson, R. Bohmer and G. Pisano, Disrupted routines: Team learning and new technology
implementation in hospitals, Administrative Science Quarterly 46, 685–716 (2001).
40. A. C. Edmondson, M. R. Roberto, R. M. J. Bohmer, E. Ferlins and L. Feldman, The recovery window:
Organizational learning following ambiguous threats in high-risk organizations, Harvard Business School
Working Paper (2004).
41. A. C. Edmondson, Speaking up in the operating room: How team leaders promote learning in
interdisciplinary action teams, Journal of Management Studies 40(6), 1419–1452 (2003).
42. C. Argyris, Strategy, Change, and Defensive Routines, Harper Business, New York (1985).
43. A. C. Edmondson, Group process in the Challenger launch decision (B), Harvard Business School Case
N9-603-070 (2003).
44. Action Design website: www.actiondesign.com.
45. S. Sitkin (1992), p. 243, op cit at Ref 3 above.

Biographies
Mark Cannon is an Assistant Professor of Leadership, Policy and Organizations and of Human and Organizational
Development at Vanderbilt University. He investigates barriers to learning in organizational settings, such as
positive illusions, individual and organizational defenses, and barriers to learning from failure. He has published
recently on executive coaching topics, including coaching leaders in transition and coaching interventions that
produce actionable feedback. His work has appeared in the Academy of Management Executive, Human Resource
Management, and Journal of Organizational Behavior. He received his Ph.D. in Organizational Behavior from
Harvard University. Vanderbilt University, Peabody #514, Nashville, TN 37203. Tel: (615) 343-2775 Fax: (615) 343-
7094 e-mail: mark.d.cannon@vanderbilt.edu
Amy C. Edmondson, Professor of Business Administration and Chair of the Doctoral Programs Committee at
Harvard Business School, studies teams in healthcare and other industries, and emphasizes the role of psychological
safety for enabling learning, change, and innovation in organizations. In 2003, she received the Cummings Award
from the Academy of Management OB division for outstanding achievement in early-mid career. Her recent article,
Why Hospitals Don’t Learn from Failures: Organizational and Psychological Dynamics That Inhibit System Change
(with A. Tucker), received the 2004 Accenture Award for a significant contribution to management practice.
Edmondson received her PhD in organizational behavior from Harvard University. Morgan Hall T-93, Harvard
Business School, Boston, MA 02163 Tel: (617) 495-6732 Fax: (617) 496-5265 email: aedmondson@hbs.edu

