
Complexity, Tight-Coupling and Reliability: Connecting Normal Accidents Theory and High Reliability Theory

Jos A. Rijpma*

*Crisis Research Center, Leiden University, Department of Public Administration, P.O. Box 9555, 2300 RB Leiden.

In this article, the theoretical debate between two dominant schools on the origins of accidents
and reliability, Normal Accident Theory and High Reliability Theory, is continued and
evaluated. Normal Accident Theory holds that, no matter what organizations do, accidents are
inevitable in complex, tightly-coupled systems. High Reliability Theory asserts that
organizations can contribute significantly to the prevention of accidents. To break through
this deadlock, the mutual effects of complexity and tight-coupling, on the one hand, and
reliability-enhancing strategies, on the other, are examined. It becomes clear that the theories
are sometimes in conflict but that sometimes they also reach similar conclusions when applied
to case events or generic safety problems. Cross-fertilization is, therefore, possible.

Introduction¹

In 1981, a book was published on the human aspects of the nuclear meltdown at Three Mile Island in 1979 (Sills, Wolf and Shelanski, 1981). One contributor to this book was Charles Perrow, whose work later culminated in the seminal book Normal Accidents (1984). Another was Todd La Porte, who founded a research group at Berkeley, which has since conducted research on highly reliable organizational performance under very trying conditions. Both authors laid the foundations for two major ways of thinking about accidents and reliability which, subsequently, were pitched against one another.

The basic thesis of Perrow's (1984) Normal Accident Theory (NAT) holds that accidents are inevitable in complex, tightly-coupled technological systems, such as nuclear power plants. Complexity inevitably yields unexpected interactions between independent failures. By means of tight-coupling, these initial interactions escalate rapidly and almost unobstructedly into a system breakdown. The combination of complexity and tight-coupling makes accidents inevitable. Perrow (1984), therefore, called such accidents 'normal' accidents.

The Berkeley school on High Reliability Theory (HRT), however, claims to have discovered organizational strategies with which organizations facing complexity and tight-coupling have achieved outstanding safety records (Roberts, 1993). Highly reliable organizations (HROs) centralize the design of decision premises in order to allow decentralized decision making (Weick, 1987); HROs use redundancy in their organization in order to back up failing parts and persons (La Porte and Consolini, 1991); HROs maintain several theories – conceptual slack – on the technology and the production processes in order to avoid blindspots and hasty action (Schulman, 1993); and, finally, HROs learn to comprehend the complexities of the technology and the production processes (Rochlin, La Porte and Roberts, 1987).

These two views on safety performance have been contrasted by Sagan (1993), who applied both theories in an analysis of accidents and near-misses in the US nuclear weapons system. He concluded that, although the US nuclear weapons system did not experience a major accident, NAT, nevertheless, offered the best explanation for the system's safety record. He underpinned this conclusion with a list of near-misses which occurred despite the presence of the high-reliability strategies mentioned in HRT. These near-misses, Sagan claimed, could easily have escalated into large-scale nuclear catastrophes or even accidental nuclear war had the circumstances been slightly different: '[I]t was less good design than good fortune that prevented many of the accidents from escalating out of control' (Sagan, 1993: 267–268).

In a symposium published in this journal, both Perrow (NAT) and La Porte cum Rochlin (HRT) were invited to comment on the theoretical review by Sagan. Perrow (1994) argued that it is possible and valuable to contrast the two theories; La Porte and his Berkeley colleague Rochlin disagreed (La Porte, 1994; La Porte and Rochlin, 1994). They argued that the two theories are complementary rather than competitive. Stalemate is the debate's current half-time score.

Underlying this debate is a difference between the levels of aggregation of the two theories. NAT discerns causes of a certain type of accidents, whereas HRT distinguishes organizational strategies promoting overall reliability. The conflict between Perrow and Sagan, on the one hand, and La Porte and Rochlin, on the other, is partly due to these diverging levels of aggregation.

As long as stalemate reigns, the two bodies of knowledge are likely to go their own way, as they did until Sagan (1993) contrasted them. The consequence is that the limits of high-reliability strategies will remain obscure, because the role of these strategies in the development of particular accidents is not examined. Also, the reach of complexity and tight-coupling remains unknown, because their effects on overall reliability are not assessed. Both theories are worse off, and, since each theory may have implications for technological policies and organizational design, strategic decisions on safety in complex organizations lack the benefit of a balanced, integrated prescriptive approach. Therefore, the aim of this article is to go beyond the current state of the debate and explore the possibilities for creative dialogue between the two schools of thought. Specifically, this article shall examine how one theory's independent variables affect the other theory's dependent variable. What are the effects of complexity and tight-coupling on overall reliability? How prone to normal accidents are organizations applying the reliability-enhancing strategies discerned by HRT?

Since NAT is a theory on the causation of a specific type of accidents, it is hard to assess directly the impact of complexity and tight-coupling on overall reliability. Since HRT discerns strategies enhancing overall reliability, it is difficult to evaluate directly the proneness of these strategies to specific, normal accidents. Therefore, the effects of one theory's independent variables on the other theory's dependent variable will be assessed indirectly. The questions asked become: which effects do complexity and tight-coupling have on the effectiveness of the reliability-enhancing strategies discerned by HRT and, hence, on overall reliability? How do HROs' reliability-promoting strategies impinge on complexity and tight-coupling and, hence, on normal-accident proneness?

Normal Accidents Theory and High Reliability Theory will be briefly summarized in the next two sections. Thereupon, the effects of complexity and tight-coupling on HRT's reliability-enhancing strategies, and indirectly on overall reliability, will be assessed. Next, the consequences of the reliability-promoting strategies for complexity and tight-coupling, and indirectly for normal-accident proneness, will be examined. A discussion of the possibilities for NAT's and HRT's co-existence will conclude this article.

Complexity and Tight-Coupling

Some accidents are inevitable, according to Perrow (1984). Where an organization must cope with a complex technology, unexpected interactions between failures will occur sooner or later. Members of the organization do not anticipate these interactions and, once they occur, do not comprehend them, and, therefore, do not immediately know how to respond to them. If the technology is also tightly-coupled, that is, if interacting failures propagate swiftly and unobstructedly throughout the system, the failures will escalate into a system accident before comprehension and recovery are possible. If a system is both complex and tightly-coupled, accidents are inevitable; they are endemic in the system. Hence, Perrow called them 'normal accidents' (Perrow, 1984).

Some systems are more prone to such normal accidents than others, because some are more complex and tightly-coupled than others. Complex interactions are especially likely in systems where components have multiple functions and, therefore, may fail in more directions at once; in systems where components are close to each other; in systems with many control parameters and inferential information; and in systems transforming material into other states by means of non-linear, incomprehensible processes, such as in chemistry and nuclear energy.

Tight-coupling is especially likely in systems operating unifinal, invariant, time-dependent production processes; in organizations employing specialized personnel; in systems where materials cannot easily be substituted; and where safety devices are built in and improvisation is hardly possible. Recovery from failure is unlikely in such systems, because processes cannot be turned off and because there are hardly any fortuitous recovery aids available.
In tightly-coupled systems, the initial interacting failures are likely to escalate into disaster.

Reliability-Enhancing Strategies

Highly reliable organizations apply several strategies so as to operate as reliably as they do. First, HROs apply a strategy of redundancy: if one component fails, another backs it up; if one operator fails to carry out his task, another one takes over his position; if danger lurks, multiple channels are used to transmit warnings. By means of redundancy, the probability of consequential failure is reduced and as little information as possible is lost (Rochlin, La Porte and Roberts, 1987).

Secondly, HROs decentralize the authority to make decisions. This is done in order to enable those closest to the problem at hand to solve problems as they emerge. In this way, rapid problem solving is ensured. There is one problem, however. How do low-level operators know how to solve such problems when they lack the oversight necessary to predict the consequences of their actions in a complex, tightly-coupled system? The answer given by HRT is that HROs are pervaded by a distinctive culture of reliability. Such a culture imbues members of the organization with clear operational goals, decision premises and assumptions. It gives them the autonomy and the competence to respond to complex interactions, once these reach the surface, and correct them before tightly-coupled processes are set in motion which might lead to disaster. So, on the one hand, decision making is decentralized; but, on the other hand, the decision premises used by operators throughout the system are centralized (Weick, 1987).

Thirdly, to prevent undue, hasty action based upon rigid beliefs as to the proper course of action, HROs apply a strategy of conceptual slack: a number of diverging theories pertaining to the organization's technology and production processes are maintained simultaneously. Only after intense discussion and negotiation does the organization reach a decision upon the course of action to be pursued. In this way, complex interactions which might have been overlooked when seen from one perspective are taken into account (Schulman, 1993).

Finally, from the outset, no organization has perfect knowledge of its technology and production processes. HROs have accomplished their extremely reliable performances only after a long, trying, costly and, sometimes, lethal trial-and-error learning process. Trial-and-error learning is supplemented by constant training, operations and simulations in order to maintain and improve standards (Rochlin, 1989; La Porte and Consolini, 1991).

The Reliability of Complex, Tightly-Coupled Systems

Complexity and tight-coupling cause system accidents. But how complexity and tight-coupling impinge on overall reliability is not clear. One way to examine the consequences of complexity and tight-coupling for overall reliability is to study their effects on redundancy, decentralized decision making within a framework of centralized decision premises, conceptual slack and organizational learning. This is discussed next.

Complexity and Redundancy

Complexity both enhances and subverts redundancy. On the one hand, complex systems are often characterized by redundancy. Complex systems simply need redundancy to keep track of all the possible interactions between the various parts of the system (Perrow, 1984). Since redundancy, on average, improves the reliability of systems, complexity enhances reliability by promoting redundancy.

On the other hand, complexity may reduce the reliability of redundancy. Redundant components sometimes depend on common determinants, such as weather conditions. Common determinants – or common-mode failures – are often a source of complexity (Perrow, 1984). The design of the Challenger space shuttle's Solid Rocket Booster seals, for instance, was redundant. Two O-rings were designed in, so that if one eroded and, therefore, did not seal properly, it would be backed up. However, both rings were dependent on weather conditions. If one failed, the probability of the other one failing as well increased. Both O-rings failed during the 1986 launch of the Challenger, causing a space shuttle disaster (Vaughan, 1996).

Put differently, a common-mode 'failure' (the weather) ruled out the redundancy of components; the two O-rings were no longer redundant. Other sources of complexity may have the same effect. If two redundant parts are very close to each other, for instance, both parts may be knocked out by one blow. Complexity may subvert redundancy and, thereby, have a negative impact on overall reliability.
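
To make the two faces of redundancy concrete, consider a minimal numerical sketch (not from the article; the probabilities are invented for illustration) that contrasts an independent backup pair with a pair sharing a common-mode dependence such as the weather:

```python
# Illustrative sketch with hypothetical numbers: how a common-mode
# dependence erodes the benefit of redundancy.

p_fail = 0.01  # assumed failure probability of a single component

# Truly independent backup: the pair fails only if both components fail.
p_pair_independent = p_fail ** 2  # 0.0001

# Common-mode dependence: assume that in 5 per cent of operations
# (say, cold weather) the pair fails jointly with probability 0.5.
p_cold = 0.05
p_joint_fail_given_cold = 0.5
p_pair_common_mode = (p_cold * p_joint_fail_given_cold
                      + (1 - p_cold) * p_fail ** 2)

print(f"independent backups:      {p_pair_independent:.6f}")   # 0.000100
print(f"with common-mode weather: {p_pair_common_mode:.6f}")   # 0.025095
```

Under these assumed numbers, the 'redundant' pair is roughly 250 times less reliable than the independence calculation suggests, which is the Challenger point in miniature.
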
Complexity, Tight-Coupling and the Balance Between Centralization and Decentralization

According to Perrow (1984), decentralization is necessary to cope with complexity; centralization is necessary to ensure rapid recovery from failure before tightly-coupled processes push the system over the brink. He argued that it is impossible to centralize and decentralize organizations simultaneously. It is impossible, he stated, to allow operators considerable decision-making authority, on the one hand, and ensure rigid command and control, on the other. According to Weick (1987), however, it is possible to give low-level operators the autonomy to respond to emerging problems, as long as these operators have been imbued with centrally determined goals, decision premises and assumptions. Rigid command and control are not necessary if people have been imbued with the same outlook, a 'culture of reliability'. Operators will behave as central authorities want them to behave without having to be told so.

Sagan (1993) observed that in complex systems, it is impossible for members at both higher and lower levels in the organizational hierarchy to anticipate every contingency. This implies that it is also impossible to account for every unexpected situation in the decision premises and assumptions that are part of the operators' action repertoires. In complex systems, unexpected situations will, inevitably, emerge (Perrow, 1984; Rochlin, 1993). Hence, complexity subverts the reliability of decision premises designed to ensure speedy detection and effective correction of initial failures. Complexity may subvert the effectiveness of the culture of reliability and, hence, have negative effects on reliability.

Complexity, Tight-Coupling, Conceptual Slack and Informal Networks

If decentralized decision making within a framework of centrally determined premises is insufficient to cope with unexpected situations, the question becomes how operators do, in fact, respond to such unexpected situations. First, they may be simply confused, as was the case during the near-disaster in the nuclear reactor at Three Mile Island (Perrow, 1984). The chances for recovery are, then, seriously reduced. Secondly, they may follow the rules, even if these rules no longer apply to the situation at hand. Vaughan (1996) noted that hardly anybody at NASA violated rules; instead, people followed rules that had become inadequate.

The third possibility, and the strategy applied in HROs, is to switch from one mode of decision making to another. When confronted with unexpected situations, HROs no longer rely upon decentralized decision making within a framework of centralized decision premises. Instead, decision making becomes very flexible (Roberts, Stout and Halpern, 1994). Authority patterns change from hierarchical to collegial; informal networks supplement the formal organizational structure (Rochlin, La Porte and Roberts, 1987). Operators do not blindly follow the rules, but negotiate a course of action in a collegial fashion, based on trust and credibility (Schulman, 1993). In these negotiations, the most experienced operators and supervisors are at the centre of the decision networks (Roberts, Stout and Halpern, 1994).

Moreover, instead of relying upon decision premises prescribing the steps to be taken, a strategy of conceptual slack becomes operative: multiple theories on the technology and the production processes are maintained simultaneously. In this way, more potential complexities are accounted for. Besides, 'collective sins of hubris' are avoided by conceptual slack (Schulman, 1993): one dominant theory on the technology and the production processes may easily lead to hasty, undue action and blindspots (Turner, 1978). This strategy works particularly well when an organization wants to change from a state of inaction to a state of action, that is, when it wants to start production processes. As long as not all the complexities have been ruled out beforehand, the organization does not act. Numerous veto powers have been established at the Diablo Canyon nuclear power plant – which Schulman (1993) examined – to ensure that undue action would not be pursued. To put it differently, the system is loosely-coupled. As long as the system is loosely-coupled, complexity does not impair the reliability of conceptual slack.

If, however, the production processes are already running, and unforeseen contingencies arise at a moment when these processes cannot easily be blocked, this strategy becomes problematic. If a system is very tightly-coupled, time may be too short to negotiate the proper course of action, and HROs then still rely upon rigid adherence to prescribed rules. When the shutdown of a reactor at the Diablo Canyon nuclear plant was required, instead of negotiating its way out of the crisis, the organization relied upon technological fixes (automatic shutdown) and non-negotiable intra-departmental autonomy (Schulman, 1993). Such tight-coupling subverts the conceptual slack needed to cope with complex situations.

Complexity has a dual effect on the reliability of conceptual slack. On the one hand, complexity creates a need for diverging perspectives. Even linear systems sometimes yield unexpected situations, but because the need for multiple perspectives is smaller in linear systems, conceptual slack is less likely to occur.
This means that organizations operating linear technologies are more likely to develop rigid perceptions and are not capable of meeting any unexpected situation (Turner, 1978). By creating a need for conceptual slack, complexity broadens organizational perspectives, making rigid perceptions less likely and enhancing overall reliability. On the other hand, in combination with tight-coupling, complexity is likely to subvert the reliability of conceptual slack.

Complexity, Tight-Coupling and Organizational Learning

Much of what has been said on the effects of complexity and tight-coupling on conceptual slack applies to the effects of complexity and tight-coupling on organizational learning. Again, there is a paradox: complexity creates a need for learning, and, hence, complacency is less likely, while tight-coupling limits the range for trial-and-error learning because the first error may be fatal.

Complexity simultaneously requires and complicates learning. Learning from complex past events is a highly ambiguous activity (Sagan, 1993). Initially, learning may progress rapidly, as the easy lessons are learned. But when accidents and near-accidents become more complex, reconstruction of what exactly happened becomes more difficult.² The causes of accidents and near-accidents are often many, so there are many 'hooks' to hang lessons on, and many hooks not to hang lessons on. Diverging interpretations lead to conflicting perspectives on how to improve activities. These conflicts may result in one-sided solutions or even deadlocks (Bovens and 't Hart, 1996). Thus, learning, which starts off as a difficult cognitive process, becomes more of a political process as time goes by. Comprehensive learning, leading to a breakthrough in reliability, is more often a process of learning to build coalitions than a process of learning to understand certain events. For instance, in the 1950s and 1960s, the Dutch Railways were regularly confronted with accidents due to engine drivers missing or ignoring red signals. It took years, and a very serious accident, to convince the Dutch Railways of the need for some kind of Railway Traffic Control system. The implementation of the system also took many years and some accidents at railway tracks where the system had not yet been implemented (Van Duin, 1992). Because making systems more loosely-coupled is expensive and time-consuming, tight-coupling may seriously thwart the learning potential of organizations and, hence, the reliability of their operations.

So, complexity may, on the one hand, inspire learning, but, on the other hand, make learning a very difficult and ambiguous process. In tightly-coupled systems, accidents remain inevitable for extended periods of time before learning processes are sufficiently completed to prevent accidents from happening.

The Normal-Accident Proneness of Reliability-Enhancing Strategies

Having discussed the effects of complexity and tight-coupling on reliability-enhancing strategies, it is necessary to consider the reverse impact, that is, the effects of HRO strategies on the levels of complexity and tight-coupling.

Redundancy and Complexity

Redundancy may both reduce and increase complexity. On the one hand, if an organization has redundant means to receive and transmit warnings of impending dangers, more adequate information is available to base decisions and actions upon. By increasing the probability of getting adequate information, redundancy reduces a system's complexity.

On the other hand, redundancy increases complexity in several ways. First, redundancy makes a system more opaque and, hence, more complex (Perrow, 1984; Sagan, 1993). Component and operator failures may be less visible because they have been compensated for by backups. These failures may, therefore, go unnoticed for a long time (Turner, 1978). Eventually, one of the backup systems may also fail. The operators expected that system to be backed up by the first system, but the first system was already out of order. This actually happened in the development of the near-disaster at Three Mile Island. The nuclear reactor had several cooling systems: one basic system; emergency feedwater pumps; and high-pressure injection. When the basic system failed, the emergency pumps were set in motion. The pumps did work, but the pipes through which the water was supposed to flow had been sealed inadvertently. So, two redundant systems failed simultaneously and unexpectedly. Consequently, the reactor heated up even more. The operators were confused when they noticed that the temperature was still rising, because they were unaware of the simultaneous failures (Perrow, 1984).

Secondly, backup systems may not be as independent as they are supposed to be and can therefore fail simultaneously (Sagan, 1993). Simultaneous failure is likely to be unexpected, and, therefore, interdependence makes a system more complex. Boeing 747s, for instance, are designed with four engines, two attached to each wing. When one fails, the airplane supposedly does not lose its balance, because the failing engine is backed up by another.
During the crash of a Boeing 747 cargo plane in Amsterdam (4 October 1992), however, the engines proved not as independent as had been presupposed. When one engine ruptured and broke off, it hit the other engine on the wing, which also ruptured and fell. With two engines missing from one wing, the manoeuvrability of the airplane was seriously reduced (Netherlands Aviation Safety Board, 1994).

Redundancy may increase complexity for another reason: redundant information gathering may lead to ambiguity and conflicting perceptions. Consider a system with independent observers; each observer has an individual reliability of 0.8. This means that in a system with one observer, the probability of transmitting adequate information is 0.8, whereas the probability of transmitting inadequate information is 0.2. Add one redundant observer, and the chance of complete failure is reduced to 0.2 times 0.2, which equals 0.04. However, the chance of getting adequate, uniform information is also reduced: to 0.8 times 0.8, which equals 0.64. The remaining 32 per cent is left for ambiguous information: the probability of one observer providing inadequate and the other providing adequate information is 0.2 times 0.8, which equals 0.16, and, since this can happen in two ways, the total probability of yielding ambiguous information in a system with two observers is 0.32. When more redundant observers are added, the probability of completely inadequate information all but vanishes, but the probability of ambiguous information increases (see Figure 1).

[Figure 1: The Law of Perverting Redundancy]
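
The pattern behind Figure 1 can be reproduced with a short sketch of the text's own example (assuming independent observers, each with reliability 0.8; the range of five observers is an arbitrary choice):

```python
# Sketch of the observer example from the text: n independent observers,
# each transmitting adequate information with probability 0.8.
p = 0.8

for n in range(1, 6):
    adequate = p ** n                        # all n observers report adequately
    inadequate = (1 - p) ** n                # all n observers fail
    ambiguous = 1.0 - adequate - inadequate  # mixed, conflicting reports
    print(f"n={n}: adequate={adequate:.3f}, "
          f"inadequate={inadequate:.3f}, ambiguous={ambiguous:.3f}")

# n=1: adequate=0.800, inadequate=0.200, ambiguous=0.000
# n=2: adequate=0.640, inadequate=0.040, ambiguous=0.320
# n=5: adequate=0.328, inadequate=0.000, ambiguous=0.672
```

As observers are added, complete failure all but vanishes, but ambiguity grows towards certainty: redundancy trades outright failure for confusion.
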
Decision Premises and Tight-Coupling

Decision premises are established to ensure rapid, coordinated responses to certain anticipated situations (Weick, 1987). Decisions guided by such premises ensure rapid recovery from initial failures before these failures escalate along tightly-coupled lines. However, rapid response, based on pre-programmed decision premises, is inappropriate when a situation emerges which has not been anticipated (Sagan, 1993). Still, decision premises are not always de-activated in situations they have not been designed for. Standard operating procedures and daily routines generally prove to be very tenacious (Allison, 1971). Sagan (1993) has provided extensive documentation on organizational units continuing their daily jobs in situations where doing so could have had serious consequences. During the Cuban missile crisis in 1962, for instance, test-launching continued, even though such tests could have been perceived by the USSR as hostile attacks, provoking them to retaliate. This meant that the system became more tightly-coupled.
Decision premises and routine activities could have been the starting point of nuclear war. Decision premises may take on a life of their own, tightening system coupling and increasing a system's escalation potential.

Conceptual Slack and Complexity

Conceptual slack has a dual effect on complexity. On the one hand, if a problem is perceived through several divergent lenses, the probability of spotting hidden contingencies increases. Complex interactions can be detected earlier, and may even be anticipated before they arise. Such hidden contingencies are very likely to remain hidden if an organization, or even a network of organizations, is imbued with a common outlook. This point is best illustrated by the disaster at Aberfan, Wales, in 1966. A tip of mine waste slid into the village, killing 144 people. The investigation revealed that the entire industry's attention was drawn towards mining safety and that tip safety was hardly on the agenda. Mining safety worked as a 'decoy problem', obscuring the more serious problems looming in the background (Turner, 1978). Some level of conceptual slack might have prevented this disaster.

On the other hand, if many diverging, even conflicting, perspectives are around, confusion may infuse the organization and complexity is increased instead of reduced. Turner (1978) called this pathological kind of conceptual slack 'a state of variable disjunction of information': many different stakeholders are involved in a complex, ambiguous task; responsibilities are unclear and divided; each one of the stakeholders has a slightly different perception of the situation; and information is scattered, not communicated, or sent to the wrong persons. Reaching some sort of common perspective is only possible after extended periods of trial and error. In tightly-coupled systems, the first error may be fatal. For instance, in 1968, a large and very slow transporter carrying a heavy transformer hit a train on a new type of railway crossing near Hixon, UK, killing eleven persons. A similar incident, with no loss of life, had occurred before. Information about this previous incident was distributed among several divisions of British Railways, the police and municipal authorities. It never occurred to any one of them that extremely slow vehicles might be too slow to pass a crossing before the arrival of a rapidly approaching train. No warnings were issued to the policemen escorting the vehicle or to the engine driver of the train (Turner, 1978).

Perhaps some break-even point exists at which the capacity to anticipate complex interactions is increased sufficiently, yet the number of diverging perspectives does not confuse and paralyze the organization. This break-even point has not yet been discovered.

Learning, Complexity and Tight-Coupling

Learning may reduce complexity, not because learning alters the configuration of a technological system, but because the potential for surprising interactions may be reduced as experience grows. In the start-up phase of a technology, many interactions are unexpected, but, as time goes by, the technology becomes more familiar and more interactions are understood (Rochlin, La Porte and Roberts, 1987). However, learning does not necessarily imply that anticipated interactions between failures can be stopped from escalating, especially not in tightly-coupled systems. Sagan (1993) has provided documentation on accidents recurring despite the experience gained with previous, similar accidents. So, although learning may contribute to the understanding of complex technological systems, it does not automatically help organizations to prevent accidents. The record on the effects of learning on accident prevention is ambiguous.

Conclusion

Complexity and tight-coupling have mixed effects on overall reliability. Complexity and tight-coupling enhance reliability by increasing the need for redundancy, conceptual slack and organizational learning. On the other hand, complexity and tight-coupling sometimes decrease the reliability of these organizational strategies. Complexity may neutralize the beneficial effects of redundancy and it impairs organizational learning. Tight-coupling renders conceptual slack ineffective and also thwarts the learning potential of organizations.

The strategies discerned in HRT affect the potential for normal accidents. These strategies can be considered mixed blessings. On the one hand, redundancy increases the amount of information generated; the anticipation of a higher number of complex interactions is improved when conceptual slack is maintained; and learning may reduce the level of complexity. On the other hand, redundancy increases the level of complexity by inducing ambiguity, opaqueness and the occurrence of simultaneous failures; conceptual slack may create confusion; and, finally, decision premises increase the level of tight-coupling.

NAT does not only explain normal accidents: it can also be used to explain overall reliability.
HRT explains more than overall highly reliable performance: it also highlights factors which contribute to an organization's proneness to system accidents. Therefore, it can be concluded that NAT and HRT may gain much from cross-fertilization. Not only may theoretical discourse be enriched, but cross-fertilization is also important to provide practitioners with more comprehensive and balanced answers to strategic questions pertaining to accident prevention and reliability. HRT may prevent practitioners from over-pessimism induced by NAT. NAT may reduce over-optimism with regard to the success of reliability-enhancing strategies.

To foster both cooperation and competition between NAT and HRT, and to advance progress in accident and reliability theory at large, a modest research agenda may suffice. First, it is not entirely clear to what extent accident prevention and reliability pose conflicting demands on organizational design and decision making. A trade-off seems to exist between enhancing reliability and preventing normal accidents. For instance, should an organization apply redundancy to promote reliability, or should it refrain from doing so because it increases complexity and, hence, the potential for accidents? From a practitioner's point of view, it would be interesting to examine the potential consequences – in terms of the number of victims and the amount of damage – of the various alternative solutions proposed.

Secondly, some strategies may both promote overall reliability and prevent normal accidents. Learning could be such a strategy, although it is hard to accomplish and the effects of learning are ambiguous (Sagan, 1993). As was shown in the case of Railway Traffic Control in the Netherlands, efforts to un-couple tightly-coupled systems provide another possible direction for both promoting reliability and preventing normal accidents. However, un-coupling is often expensive, time-consuming and, sometimes, almost impossible given technological constraints (Perrow, 1984). Research on the processes of learning and de-coupling – especially on the limitations of these processes – is called for.

Finally, the current debate on accidents and reliability is biased towards accidents and reliability in complex, tightly-coupled systems. Yet, a large number of accidents still occur in fairly linear, loosely-coupled systems. Such accidents run the risk of being neglected. An important body of knowledge has been erected by the late Barry Turner, in which such accidents have been examined and explained (Turner, 1978). Future research should take account of Turner's so-called Disaster Incubation Theory (DIT), which deserves a place in the current debate (see Pidgeon, 1997, for an exploration of the connections between NAT and HRT, on the one hand, and DIT, on the other).

Notes

1. The author gratefully acknowledges the comments and suggestions by Todd La Porte, Charles Perrow, Nick Pidgeon, Paul Schulman, Arjen Boin, Menno van Duin, Paul 't Hart, Yvonne Kleistra, Marc Otten, and Ab van Poortvliet.
2. I am grateful to Professor Schulman for suggesting to me this distinction between 'easy' and complex lessons (personal communication, 1996).

References

Allison, G.T. (1971), Essence of Decision, Little Brown, Boston.
Bovens, M.A.P. and 't Hart, P. (1996), Understanding Policy Fiascoes, Transaction Books, New Brunswick.
Journal of Contingencies and Crisis Management (1994), 'Systems, Organizations and the Limits of Safety: A Symposium', Volume 2, Number 4, December, pp. 205–240.
La Porte, T.R. (1994), 'A Strawman Speaks Up: Comments on The Limits of Safety', Journal of Contingencies and Crisis Management, Volume 2, Number 4, December, pp. 207–211.
La Porte, T.R. and Consolini, P.M. (1991), 'Working in Practice But Not in Theory: Theoretical Challenges of "High Reliability Organizations"', Journal of Public Administration Research and Theory, Volume 1, Number 1, Winter, pp. 19–47.
La Porte, T.R. and Rochlin, G.I. (1994), 'A Rejoinder to Perrow', Journal of Contingencies and Crisis Management, Volume 2, Number 4, December, pp. 221–227.
Netherlands Aviation Safety Board (1994), Final Report on the Accident with El Al 1862 on October 4, 1992 at Amsterdam, Bijlmermeer, SDU, The Hague.
Perrow, C. (1984), Normal Accidents: Living with High-Risk Technologies, Basic Books, New York.
Perrow, C. (1994), 'The Limits of Safety: The Enhancement of a Theory of Accidents', Journal of Contingencies and Crisis Management, Volume 2, Number 4, December, pp. 212–220.
Pidgeon, N.F. (1997), 'The Limits to Safety? Culture, Politics, Learning and Man-Made Disasters', Journal of Contingencies and Crisis Management, Volume 5, Number 1, March, pp. 1–14.
Roberts, K.H. (1993), 'Introduction', in Roberts, K.H. (Ed.), New Challenges to Understanding Organizations, Macmillan, New York, pp. 1–10.
Roberts, K.H., Stout, S.K. and Halpern, J.J. (1994), 'Decision Dynamics in Two High Reliability Military Organizations', Management Science, Volume 40, Number 5, pp. 614–624.
Rochlin, G.I. (1989), 'Informal Organizational Networking as a Crisis-Avoidance Strategy: US Naval Flight Operations as a Case Study', Industrial Crisis Quarterly, Volume 3, Number 2, pp. 159–176.
Rochlin, G.I. (1993), 'Defining "High-Reliability" Organizations in Practice: A Taxonomic Prologue', in Roberts, K.H. (Ed.), New Challenges to Understanding Organizations, Macmillan, New York, pp. 11–32.
Rochlin, G.I., La Porte, T.R. and Roberts, K.H. (1987), 'The Self-Designing High-Reliability Organization: Aircraft Carrier Flight Operations at Sea', Naval War College Review, Volume 40, Number 4, Autumn, pp. 76–90.
Sagan, S.D. (1993), The Limits of Safety: Organizations, Accidents and Nuclear Weapons, Princeton University Press, Princeton.
Schulman, P.R. (1993), 'The Negotiated Order of Organizational Reliability', Administration and Society, Volume 25, Number 3, November, pp. 353–372.
Sills, D., Wolf, C. and Shelanski, V. (Eds) (1981), The Accident at Three Mile Island: The Human Dimensions, Westview Press, Boulder.
Turner, B.A. (1978), Man-made Disasters, Wykeham, London.
Van Duin, M.J. (1992), Van Rampen Leren (Learning from Disasters), Haagse Drukkerij en Uitgeversmaatschappij, Den Haag (in Dutch; summary in English).
Vaughan, D. (1996), The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA, University of Chicago Press, Chicago.
Weick, K.E. (1987), 'Organizational Culture as a Source of High Reliability', California Management Review, Volume 29, Number 2, Winter, pp. 112–127.
