How The Baldrige Award Really Works
By David A. Garvin
In just four years, the Malcolm Baldrige National Quality Award has become the most
important catalyst for transforming American business. More than any other initiative, public
or private, it has reshaped managers' thinking and behavior. The Baldrige Award not only
codifies the principles of quality management in clear and accessible language. It also goes
further: it provides companies with a comprehensive framework for assessing their progress
toward the new paradigm of management and such commonly acknowledged goals as
customer satisfaction and increased employee involvement.
There is, however, another side to the Baldrige Award. Despite its popularity, the award has
come under increasingly heavy fire. Critics cite three major deficiencies. First, reports of
enormous investments by companies intent on winning the Baldrige contest have led to
claims that the Baldrige can be bought. Both Xerox, a 1989 winner, and Corning, a 1989
finalist, admit to having spent, respectively, $800,000 and 14,000 labor hours preparing
applications and readying employees for site visits by Baldrige examiners. Second, Baldrige
critics note that the award does not reflect outstanding, or even exceptionally good, product
quality. Here they single out Cadillac, a 1990 winner that has yet to crack the top ranks of
most surveys of automobile quality. And third, the poor sales and earnings growth of some
past winners have led critics to question whether the award is in fact an accurate gauge of a
company's competitiveness and profit potential. The evidence here is Cadillac again, but also
Motorola and Federal Express.
At first blush, these criticisms may seem plausible. But a more thoughtful analysis of the
award, its criteria, and its judging process shows that the criticisms in fact reflect deep
misunderstandings. This confusion is not surprising: part of it is due to the confidentiality
requirements of the award itself, which have made the necessary data unavailable. While
Baldrige winners are required by law to share their experiences publicly, critical comparative
information (the applications, scores, and examiners' evaluations of all companies that have
applied, winning and non-winning) is unavailable due to strict confidentiality rules set by
the National Institute of Standards and Technology. Consequently, the award has been
subject to little systematic study.
Ideally, NIST should find some way both to respect confidentiality and to make the data
available as a learning tool for American quality improvement. In the absence of the data, the
best alternative is to tap into the people who are most familiar with the Baldrige Award: the
Baldrige judges, senior examiners, and examiners. Last June and July, I interviewed 20 of
these quality experts to seek their views on the award, the evaluation process, and how
companies can best use the criteria. (See the insert "Baldrige Interview Participants.") Their
comments and candid observations provide an accurate and sophisticated understanding of
what the Baldrige Award is and, equally important, what it is not.
Baldrige Interview Participants
I would like to thank the following Baldrige judges, senior examiners, and examiners for their
participation in interviews and roundtable discussions:
Judges:
Senior Examiners:
Examiners:
The award originated from the Malcolm Baldrige National Quality Improvement Act, signed
by President Ronald Reagan on August 20, 1987. That act, named after a former Secretary of
Commerce, called for the creation of a national quality award and the development of
guidelines and criteria that organizations could use to evaluate their quality improvement
efforts. Awards were to be given in three categories (manufacturing, service, and small
business), with no more than two awards per category per year. The legislation gave
favorable mention to a number of management principles and tools: worker involvement,
strategic quality planning, statistical process control, management-led and customer-oriented
programs. But the act said little about the award's scoring system, judging process, or criteria
for evaluation. It was left to the National Bureau of Standards (known today as the National
Institute of Standards and Technology, or NIST) to work out the details.
Collaborating closely with industry experts, NIST produced the seven-category, 1,000-point
scoring system and the three-level judging process that are still used today. (The criteria and
subcategories have evolved over time; for the most recent version, see the insert "Scoring the
1991 Baldrige Award.") Companies submit applications of up to 75 pages (up to 50 pages for
small businesses) describing their quality practices and performance in each of seven required
areas (leadership, information and analysis, strategic quality planning, human resource
utilization, quality assurance of products and services, quality results, and customer
satisfaction) and are then graded by teams of trained examiners. The Baldrige judges, who
come from industry, academia, and consulting firms and are all recognized quality experts,
choose a small set of high-scoring applicants for site visits. A team of senior examiners and
examiners visits each company for at least several days, conducting interviews and checking
documents. The judges then meet a final time to review the top applicants and to select
winners.
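To make the scoring arithmetic concrete, here is a minimal sketch of how a 1,000-point total might be assembled from category scores. The point values below are assumptions for illustration (the authoritative 1991 values appear in the insert "Scoring the 1991 Baldrige Award"), and the applicant's percentage scores are invented.

```python
# A sketch of Baldrige-style scoring: each category is worth a fixed number
# of points, examiners score it as a fraction earned, and the weighted sum
# yields the 1,000-point total. Point values here are illustrative assumptions.

CATEGORY_POINTS = {
    "Leadership": 100,
    "Information and Analysis": 70,
    "Strategic Quality Planning": 60,
    "Human Resource Utilization": 150,
    "Quality Assurance of Products and Services": 140,
    "Quality Results": 180,
    "Customer Satisfaction": 300,
}  # sums to 1,000 points

def total_score(fraction_earned):
    """fraction_earned maps each category to an examiner score in [0.0, 1.0]."""
    return sum(points * fraction_earned[cat] for cat, points in CATEGORY_POINTS.items())

# A hypothetical applicant: strong on leadership, weak on results.
applicant = {
    "Leadership": 0.70,
    "Information and Analysis": 0.40,
    "Strategic Quality Planning": 0.40,
    "Human Resource Utilization": 0.50,
    "Quality Assurance of Products and Services": 0.50,
    "Quality Results": 0.30,
    "Customer Satisfaction": 0.45,
}
print(round(total_score(applicant)))  # 456: a middle-band score (400 to 600)
```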
This process, so simple on the surface, has produced an extraordinary level of confusion. At
its core are two problems: a misreading of the Baldrige criteria and a flawed conception of
the award. Together, they have made the Baldrige into something of a Rorschach test, a
projective device in which people see what they want to see and from which they draw their
own, often self-serving, conclusions. One way to clear up the confusion and develop a more
accurate picture of the award is to take a close look at the three main criticisms, or myths,
of the Baldrige Award.
Myth #1: The Baldrige Award requires large expenditures on the application and
preparation for site visits.
This claim, prompted by reports of heavy up-front investments at Xerox and Corning, has
settled into a highly stylized debate. Critics of the award continue to cite the Xerox and
Corning examples; supporters respond by pointing to Globe Metallurgical and Milliken &
Co., 1988 and 1989 Baldrige winners, who spent far less on the application process. They
also note that Xerox and Corning describe their expenditures as long-term investments in
quality improvement: attempts to "find the warts" and develop a new set of quality goals
and initiatives, rather than spending money just to bring home a prize.
Lost in this debate are the underlying assumptions that are being made about the Baldrige
process. Arguing that the Baldrige can be "bought" is akin to saying that there are required
steps and activities that must be described on an application or discussed on a site visit if one
is to win the award: an "off-the-shelf" quality kit that companies buy and install. An
example might be statistical process control or a cost-of-quality reporting system. Without
these standard items, the argument goes, an applicant will quickly run into trouble, because
they are essential to success. Better to spend the money now and put the systems in place to
satisfy the predilections of the Baldrige judges and examiners.
Unfortunately, this argument reflects a complete misreading of the Baldrige Award. The
award criteria are indeed strongly prescriptive on philosophy and values. But they are open-
minded about practices and procedures. To win, companies must have customer-oriented
quality programs that are led by senior management, a high level of employee involvement,
an understanding of internal processes, and management by fact rather than by instinct or
feel. But each company is free to choose its own precise techniques to achieve these goals,
and there is room for enormous variety. For example, nowhere in the Baldrige criteria is there
a requirement that stipulates the techniques to be used for problem solving. Examiners regard
such standard tools as statistical process control, Pareto charts, and quality function
deployment favorably because they have proven track records. But these tools are not
mandatory, and any other approach is acceptable, as long as it produces verifiable, repeatable,
controllable results that meet customers' needs. In fact, in the upper ranges of scoring,
tailored and idiosyncratic approaches are the norm. Several judges observed that every
Baldrige winner had developed its own "house brand" of quality.
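As an illustration of one of these standard tools, here is a minimal sketch of statistical process control: an individuals (XmR) chart with classic three-sigma limits estimated from moving ranges. The measurements are invented, and nothing in the criteria mandates this particular technique.

```python
# A sketch of an individuals (XmR) control chart, one common form of
# statistical process control. Limits are the classic mean +/- 3 sigma,
# with sigma estimated from the average moving range (d2 = 1.128 for n = 2).

def control_limits(measurements):
    n = len(measurements)
    mean = sum(measurements) / n
    moving_ranges = [abs(measurements[i] - measurements[i - 1]) for i in range(1, n)]
    sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return mean - 3 * sigma_hat, mean, mean + 3 * sigma_hat

widths_mm = [10.2, 10.1, 10.3, 10.0, 10.4, 10.2, 10.1, 11.0, 10.2, 10.3]
lcl, center, ucl = control_limits(widths_mm)
signals = [x for x in widths_mm if not lcl <= x <= ucl]
print(f"LCL={lcl:.2f}  CL={center:.2f}  UCL={ucl:.2f}  out-of-control points: {signals}")
```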
The best way to understand the Baldrige criteria is as an audit framework, an encompassing
set of categories that tells companies where, and in what ways, they must demonstrate
proficiency, but not how to proceed. The categories are in no sense a "to do" list, and it is
simply incorrect to suggest that the criteria specify particular programs or techniques. There
is no Baldrige system to be bought off the shelf. Completeness, however, is an important
virtue (meaning that a company must address all 32 areas on the application), as is
deployment of the quality effort throughout the organization. Here, too, it is impossible for a
company to spend its way to success. Examiners not only want to know what programs are in
place, they also want to know when they were introduced. With access to nearly every
employee in an organization during site visits, they inevitably learn the truth.
Myth #2: The Baldrige Award is flawed because it fails to predict a company's
financial success.
Several Baldrige winners have stumbled after winning the award. The reasons have varied
(design problems at Motorola, international difficulties at Federal Express, depressed demand
at Cadillac), but the results have been the same: poor financial performance. Critics have
seized on these problems as evidence that the Baldrige Award is doing little to enhance
American competitiveness or improve corporate performance. A high score on the Baldrige,
they claim, tells us nothing about tomorrow's financial results.
Here the critics are right, but wrong. The Baldrige Award and short-term financial results
are like oil and water: they don't mix and were never intended to. As one judge commented,
"I don't believe that financial performance belongs in the award. If you put it in, then, almost
automatically, there is only one category, because it would overshadow everything else.
Anyway, we all know the companies that have good bottom lines. That's not much of a
secret. What we don't know are the companies that have good total quality management
processes."
To fault the Baldrige Award for not rewarding financial success is meaningless: it was never
meant to. There are no profit guarantees accompanying a high score. Indeed, winning is
neither a necessary nor a sufficient condition for financial success. It is not necessary because
there are routes to profitability other than superior quality management. A long-standing
patent, for example, or a one-of-a-kind production process can ensure financial success even
if a company falls woefully short on the Baldrige criteria. Nor are the criteria sufficient for
financial success, since they leave out such vital tasks of management as effective marketing,
innovative R&D, and sound financial planning. None of these is considered in the evaluation
process, leading to what one examiner has called the problem of "the Baldrige-winning
buggy-whip manufacturer."
Some Wall Street analysts carry these arguments a step further: they actually short the stock
of Baldrige winners in anticipation of poor financial performance. Winning, after all, is
usually followed by a letdown, and the managers of these companies have grown accustomed
to a long-term perspective rather than the pursuit of quarterly earnings. While this reasoning
may work for brief periods, over any reasonable time horizon, it is destined to fail. Baldrige
winners are as vulnerable as other companies to economic downturns, changes in fashion,
and shifts in technology. But they are far better positioned to recover gracefully because they
have superior management processes in place. The Baldrige Award is thus a strong predictor
of long-term survival and a leading indicator of future profitability. In fact, the General
Accounting Office has found that Baldrige winners and semifinalists perform well on a host
of important operating and financial measures: quality, of course, but also market share,
return on assets, customer satisfaction, and employee relations. (See the insert "Does the
Baldrige Award Improve Corporate Performance?") These are the kinds of indicators that
suggest future profitability.
The GAO study is a giant step toward quantitatively documenting TQM practices and their
effect on corporate performance. But readers should be wary of extrapolating the results to
other companies: the study was not performed scientifically using statistical methods, and the
20 participating companies did not answer all questions. The average response was only 9
companies per question.
Why was the response rate so low? Allan I. Mendelowitz, director of the GAO's International
Trade, Energy, and Financial Issues Department and director of the study, attributed it to
company confidentiality requirements. Many of the privately held companies and divisions of
public companies that the GAO surveyed have strict prohibitions on divulging data. For
instance, the 11 companies that did not answer the employee satisfaction question were not
hiding negative results, Mendelowitz said, but were simply following corporate nonreporting
guidelines. He insisted the poor response rate does not introduce a bias into the results
because the findings were reinforced by follow-up interviews conducted by GAO staffers at
the 20 companies.
The limitations of the GAO study highlight the difficulty of testing hypotheses about the
effectiveness of the Baldrige Award. The one source of complete data, the National Institute
of Standards and Technology, which oversees the award, has its own confidentiality
requirements. NIST refuses to release aggregate data for scrutiny. There are valid reasons for
its refusal, but without the data, serious study of quality principles will continue to be
hampered.
Selected Results from the GAO Study. Source: GAO's analysis of company-provided data.
To order a copy of the GAO's report free of charge, call 202-275-6241 and ask for
GAO/NSIAD-91-190.
Ideally, Wall Street should be using a longer, less-biased lens to assess the impact of the
Baldrige Award. It might, for example, apply the same standards that it uses to judge research
and development spending. Like R&D, quality efforts are long term in nature, with an
uncertain payback. And like R&D, quality initiatives are costly up front. Yet in both cases,
continued investment is essential to competitive success.
Myth #3: The Baldrige Award does not honor superior product or service quality.
This is also known as the "Cadillac criticism" because of the hue and cry over Cadillac's
1990 Baldrige Award. Critics claimed that the company's cars had yet to distinguish
themselves, earning less than stellar product ratings from sources such as Consumer
Reports and J.D. Power. The August 1991 J.D. Power report ranked Cadillac eighth in overall
customer satisfaction for 1990, down from fourth place in 1989.
Like the criticism in myth #2, this concern, while accurate, misses the point. The Baldrige
Award was never designed to reward product or service excellence alone. Quality results do
matter; 250 of the available 1,000 points are for product, service, process, supplier, and
customer satisfaction results. But the bulk of the award focuses on management systems and
processes. There Cadillac performed extremely well.
Yet a central question remains: How should the Baldrige Award be positioned for maximum
effectiveness? At one extreme lies a narrowly defined award, limited to product and service
excellence and, perhaps, traditional quality control. At the other extreme lies an all-
encompassing award, designed to reward overall management excellence and not quality
management alone. Surprisingly, and importantly, the Baldrige Award in its current form
sits firmly between the two poles.
The award is certainly not limited to product and service excellence or traditional quality
control. The judges award only a quarter of the total points for quality results, and categories
such as "employee well-being and morale" and "public responsibility" (which includes business
ethics, environmental protection, and waste management) are well outside the scope of a
narrowly focused quality award. Yet, at the other extreme, the criteria clearly omit areas of
great importance to managers: innovativeness, marketing savvy, strategic positioning,
organization design, financial performance, and countless others. Baldrige is in no way a
complete award for corporate excellence.
This positioning opens the award to criticism from both sides. The Cadillac criticism suggests
that the Baldrige criteria are not narrow enough; Tom Peters, among other management
consultants, argues that additional categories are required, noting that the award fails to
address "bureaucratic bulge." Is the Baldrige Award stuck hopelessly in the middle, with
little chance of redemption? Far from it. The middle ground is precisely where the Baldrige
should be.
Consider the two extremes. If the Baldrige Award became a pure product or service award or
was limited to traditional quality control, it would no longer capture the attention of senior
managers. Winning would no longer be viewed in personal terms, as something for which
they shared responsibility; it would become the job of the company's engineers, factory
workers, and QC department. The groundswell of interest created by the Baldrige, and
especially the animated conversations now occurring in executive suites, would soon fade
away, because it is precisely the award's comprehensiveness that is so appealing to corporate
leaders.
But if comprehensiveness is an asset, why not include additional categories and make the
award a more demanding test of management excellence? Largely because such an award
would be almost impossible to judge. Applications would undoubtedly run several hundred
pages, and knowledgeable examiners, skilled in all required areas, would be hard to find.
Moreover, determining the weights to be placed on each category (Is innovation worth
5% of the total score? 10%? What about effective advertising?) would be enormously
difficult because the process would rapidly become political. Different industries have
different competitive needs, and each would be certain to lobby for a scoring system slanted
in its favor.
The Baldriges current positioning, then, is in many ways ideal. The award is neither so
narrow that it is uninspiring, nor so broad as to be unmanageable. While this leaves it an easy
target (as one examiner put it, "We're sort of straddling the fence"), the position is
ultimately one of strength. Since 1988, NIST has distributed over 450,000 copies of the
application guidelines throughout the world. There may be flaws in the Baldrige criteria, but
as a framework for assessing management processes and launching improvement programs, it
is, judging by that number, at the top of many managers' lists.
Understanding Baldrige is one thing; putting it to use is another. After all, quality improvement
takes time. It is a developmental process with many steps, not a quick fix that managers
install and then forget. A company can create a world-class quality system only with years of
work and constant refinement.
For companies about to embark on this journey, the first question is obvious: Where do we
stand? Here, Baldrige provides a helpful road map. Most judges and examiners agree that
there are clear patterns in the applications they have reviewed, with distinct bands of
performance as measured by the scores in each category. Companies can be arrayed along a
continuum, from best to worst. The mature, high-scoring quality programs, the medium-rung
performers, and the low scorers will each cluster around common profiles and shared
strengths and weaknesses.
At the heart of these groupings are two concepts central to the judging process: deployment
and integration. Deployment is derived from the military term meaning to spread troops
across an extended front; today, in business, it retains much of the original flavor. Quality
experts use the term in two ways: to measure the extent or spread of the quality effort across
an organization (what I call horizontal deployment) and to measure the extent to which
strategic, customer-oriented objectives have made their way from the CEO to lower levels of
the organization (what I call vertical deployment).
Horizontal deployment is important because it provides a quick, but effective, maturity test.
A company that has firmly established its quality program in manufacturing or operations but
has made little dent in support areas will not receive a high Baldrige score. Such programs
are typically in the adolescent stagethey have grown up, but have yet to move away from
their original home. Examiners work hard to distinguish them from more mature programs,
where quality activities exist throughout the company and have become pervasive.
Examiners use various techniques to assess horizontal deployment. One examiner noted that,
on site visits, he heads immediately for a companys legal department or maintenance group
to see how it has responded to the quality effort; another seeks out the companys worst
department; a third pushes hard to uncover marginal or secondary product lines where the
quality effort may not yet have taken hold. (As one judge observed, site visits uncover "the
darnedest things." One of his favorites, in paraphrase: "We build Ferraris, and the Ferraris are
terrific. And, by the way, we didn't mention this on our application, but we also make
toasters. They're not so hot.")
Vertical deployment, by contrast, involves the tying together of lower-level activities and
strategic goals. It is closely linked to the Japanese concept of policy deployment,
or hoshin planning: the cascading of senior managements vision and objectives down
through the organization so that, at each succeeding level, activities are aligned with, and
derived from, higher goals. The top and the bottom of the organization will then move in
concert. As one judge remarked: "If the CEO says, 'We're going to go West, we're doing it
by train, and we're leaving Wednesday,' the person [in the factory] knows when to start
packing."
Here too, companies differ sharply in maturity, and examiners have developed a number of
litmus tests. One, used on virtually all site visits, is to ask the same question, usually
something about customers or a major quality initiative, of both the CEO and shop floor
employees. If their answers are the same, vertical deployment is high; if they diverge, it is
low. A related test involves quality action teams, consisting of low- and mid-level employees,
and the degree to which their activities are independent and unrelated or fit within a larger,
well-understood strategy and plan. Low-scoring companies have few quality action teams and
offer them little or no strategic guidance; medium scorers have many teams that work with
few ties to the strategic plan; and high scorers have a large number of teams all working to
meet strategic goals.
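For illustration only, here is a toy version of that "same question" test: a crude word-overlap score standing in for the qualitative judgment examiners actually exercise. The answers and the 0.3 threshold are invented assumptions.

```python
# A toy stand-in for the examiners' litmus test: ask the CEO and the shop
# floor the same question and compare the answers. Real examiners judge this
# qualitatively; Jaccard word overlap is used here purely for illustration.

def overlap(answer_a, answer_b):
    wa, wb = set(answer_a.lower().split()), set(answer_b.lower().split())
    return len(wa & wb) / len(wa | wb)

ceo_answer = "our top priority is cutting delivery defects for key accounts"
floor_answer = "we are cutting delivery defects because key accounts demanded it"

score = overlap(ceo_answer, floor_answer)  # 5 shared words of 15 total -> 0.33
print("vertical deployment:", "high" if score > 0.3 else "low")
```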
Integration is closely related to deployment, and at times, judges and examiners seem to use
the terms interchangeably. Integration refers to the degree of alignment or harmony in an
organizationwhether different departments and levels speak the same language and are
tuned to the same wavelength. This concept is often hard to grasp, and an analogy may help.
Consider the children's game of telephone. Several children stand in a circle and, one by one,
pass along a message or phrase by whispering it in their neighbor's ear. The last person in the
circle says the phrase out loud; to everyones delight, the final wording is usually a far cry
from the original.
In the typical game of telephone, integration is poor. Each child remains in a separate world,
and there is no attempt to improve communication or eliminate the sources of bias and
distortion. A well-integrated telephone group would be quite different. Because of prior
training, common goals, continued practice, and years of collaboration, there would be few
slips in communication. The group would function like a superb relay team, with flawless
hand-offs. Moreover, the hand-offs would be quick. Because of the teams shared
understanding, it would communicate words and phrases rapidly, with little fear of error.
Speed is a particularly good measure of integration because it reflects the tightness of the
coupling within the organization. Sharply reduced cycle times are both a result and a key
indicator of organizational integration.
Cycle times measure an organizations speed of response and are an increasingly important
part of the Baldrige examination. In part, this is because improvements in quality and speed
are often traceable to the same sourceswork simplification, for example, or process
redesign. But equally important is the role of cycle time as a proxy for, and a measure of,
integration. New product development times can be cut in half only if engineering and
manufacturing are better integrated; patent filings can be done in three rather than ten
months only if lab technicians and lawyers are more tightly linked. This observation suggests
a simple litmus test: Do critical activities and processes (new product development, billing,
patent filings, customer complaint resolution) still take the same amount of time that they did
five years ago, or have they been speeded up? In most cases, the greater the reduction in
cycle time, the tighter the integration.
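A minimal sketch of that litmus test follows, with invented figures that echo the examples above (development times halved, patent filings cut from roughly ten months to three). The 50% cutoff is an illustrative assumption, not an examiner's rule.

```python
# Compare each critical process's cycle time today with five years ago and
# read large reductions as evidence of tighter integration. All figures are
# invented for illustration.

cycle_times_days = {  # process: (five years ago, today)
    "new product development":       (540, 270),
    "patent filing":                 (300,  90),
    "customer complaint resolution": ( 14,   2),
    "billing":                       ( 30,  28),
}

for process, (then, now) in cycle_times_days.items():
    reduction = 1 - now / then
    verdict = "tight integration" if reduction >= 0.5 else "little change"
    print(f"{process:32s} cut {reduction:4.0%}  ->  {verdict}")
```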
With these concepts in mind, we can return to the three bands of performance observed by
judges and examiners. Low scorers (300 points or less) are weak in most areas. They often
have the right ideas, but implementation is poor. At best, there are "islands of excellence":
one or two showcase projects, or a single Baldrige category in which the company performs
well. In some cases, this is leadership, because senior management has aggressively led the
quality effort (even though it has little tangible to show for it); at other times, it is an area of
traditional strength, like human resource utilization at a service company or quality assurance
at a high-tech manufacturer. Breadth, however, is lacking, and most categories receive
mediocre or poor scores. The result is a single "spike" in the scoring. (See the chart
"Mapping Progress Through the Baldrige.") Measurable improvements in performance are
also lacking, with spotty trends and few substantial gains. In part, this is because the quality
programs at low-scoring companies have only recently taken hold, and progress takes time.
But equally important is the fact that deployment is nonexistent outside of manufacturing or
operations, and integration is poor. As one examiner put it, "At these companies, there is no
evidence that different functional areas work well together."
Mapping Progress Through the Baldrige. These charts are illustrative only and are derived
from discussions with Baldrige judges, senior examiners, and examiners; they depict general
trends in how companies progress through the Baldrige criteria over time.
Companies in the middle rung (scores of 400 to 600 points) have more balanced programs.
Typically, they are strong in two or three Baldrige categories, with pronounced spikes, but
significantly weaker in others. The strengths are often in the core Baldrige categories of
leadership, human resource utilization, and customer satisfaction, and the weaknesses are in
information and analysis, quality results, and strategic quality planning. Companies in this
group have senior managers who are committed to quality, understand its concepts and
methods, and are actively engaged in improvement efforts. They have encouraged employee
involvement, begun training programs, and initiated problem-solving teams. They have both
surveyed and satisfied their customers.
Weaknesses, however, do exist. Many of the medium-scoring companies are still using
packaged programs or off-the-shelf solutions: employee suggestion programs, customer
surveys, or statistical process control packages that they have purchased from outside
vendors. They remain consumers of quality management, not creators, and have yet to put a
personal stamp on their programs. Deployment is incomplete outside manufacturing or
operations, and lower-level activities are not fully aligned with strategic goals. Most teams,
for example, are still "doing their own thing." A complete quality information system, an
important tool for integration and alignment, is not yet in place, nor are quality results
impressive, sustained, or organizationwide.
As quality programs reach the top of the scale, they develop signatures of their own. Each winning
company has a mark of distinction, an area of special expertise that fits its culture and sets its
quality program apart. At Milliken, the focus is people and teamwork; at IBM-Rochester, it is
technology transfer; and at Federal Express, it is problem solving. In each case, the company
has used its central theme to build a tailored, personalized approach to quality management.
Let's say you've conducted a Baldrige self-assessment and find yourself in the middle of the
pack. Your quality program is alive and kicking, but hardly surging ahead. You've made
progress and are committed to more, but realize that improvements are needed in all seven
Baldrige categories. What next?
Clearly, the answer is to identify areas of weakness and set targets for improvement.
Unfortunately, that is easier said than done. Few managers have the required understanding
of quality management or the Baldrige categories to make concrete proposals about how to
get from here to there. For them, the concepts remain vague and the categories distressingly
open-ended. Yet according to Baldrige judges and examiners, in every category, there are
simple litmus tests that managers can apply to identify strengths or weaknesses and suggest
needed improvements.
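One simple way to begin, sketched below, is to turn the self-assessment into a gap analysis: express each category score as a percent of its available points and rank the weakest first. The figures are invented and merely mirror the typical middle-band profile described earlier.

```python
# Rank Baldrige categories by percent of available points earned, weakest
# first, to pick improvement targets. Scores are invented for illustration.

percent_earned = {
    "Leadership": 0.65,
    "Information and Analysis": 0.36,
    "Strategic Quality Planning": 0.37,
    "Human Resource Utilization": 0.53,
    "Quality Assurance of Products and Services": 0.54,
    "Quality Results": 0.33,
    "Customer Satisfaction": 0.47,
}

for category in sorted(percent_earned, key=percent_earned.get)[:3]:
    print(f"improvement target: {category} ({percent_earned[category]:.0%})")
# -> Quality Results, Information and Analysis, Strategic Quality Planning
```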
Category #1: Leadership.
The twin pillars of this category are symbolism and active involvement. Symbolic acts are
required to cement the importance of quality in the minds of employees and to elevate it
above financial and efficiency goals, which have long dominated decision making. The
examples are legion: Bob Galvin, the former CEO of Motorola, restructuring his policy
committee meetings so that quality was the first item on the agendaand then walking out
after quality issues had been discussed and before financial matters were introduced; Roger
Milliken, the CEO of Milliken & Company, insisting that he and his senior management team
take an intensive course in statistical methods before the material was taught to lower-level
managers and employees; David Kearns, the former CEO of Xerox, holding up a vital new
product launch because of minor quality problems, despite strong protests from the sales
force. In each of these cases, the CEO used the same heroic approach: he undertook a highly
visible action, involving personal inconvenience or risk, to reinforce the company's quality
message and capture the attention of the work force.
To the dismay of many chief executives, heroic acts are not enough to ensure a successful
quality program. It takes day-to-day leadership as well. This can take many forms: the CEO
and senior managers may help teach quality training classes; they may lead or belong to
quality improvement teams; they may personally conduct quality systems reviews; or they
may meet individually with customers and employees. (One judge, who considers frequent
meetings between the CEO and customers to be essential for excellence, calls it the "Mayor
Koch test," named after the three-term mayor of New York who was always asking his
constituents, "How am I doing?") Whatever the form, senior managers' day-to-day quality
activities must involve real commitments of time and energy and must go well beyond
slogans and lip service.
One way judges separate rhetoric and reality is by reviewing logbooks or calendars. On site
visits, many examiners ask the CEO and other senior managers to open their calendars and
review their last few weeks of activity. How much time did they spend talking with
customers? Meeting with employees? Leading quality improvement teams? Reviewing the
progress of the quality program? What percentage of their total time was spent on quality-
related activities? There are no magic numbers that define acceptable levels of commitment.
Still, these reviews are extremely revealing because they marshal hard data that show whether
senior managers have actually been walking the talk. James R. Houghton, the chairman and
CEO of Corning Inc., was so taken with this approach that he reviewed his calendar for two
years, 1987 and 1989, to discover how he had allocated his time and to see whether changes
were necessary. (See the insert "Lessons in Leadership: James R. Houghton.")
Lessons in Leadership: James R. Houghton
How should a CEO allocate his or her time? Answering that question requires an understanding
of how one's time is currently spent and a model or template of the desired time allocation.
The Quality Council, an informal group of
quality executives from leading U.S. corporations, has developed a model to help CEOs do
both. The council began with a list of seven key CEO responsibilities devised by
Westinghouse to evaluate "the health of the CEO's office" and came up with a time
allocation that all CEOs can use to determine how to divide their time.
In 1990, James R. Houghton, chairman and CEO of Corning Inc., used this seven-point
checklist to evaluate his own daily schedule. The company was already well into its quality
initiative, and Houghton was given the checklist by his senior vice president and corporate
director of quality, David Luther, who is also a member of the Quality Council, to assess his
personal commitment. Houghton analyzed his diary for 1987 and 1989, categorizing the time
as either value-added or nonvalue-added. Value-added time makes a productive contribution
toward the company's operations; nonvalue-added does not. Houghton classified most
corporate meetings as value-added, as well as most outside board or committee meetings. But
he viewed activities such as unscheduled office hours and travel time to be nonproductive.
Most of these hours, if not all, are necessary but do not in themselves add value. In 1989, of a
total of 2,462 hours worked, Houghton figured he spent 1,483 hours in value-added activities
and 979 hours in nonvalue-added activities.
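A minimal sketch of such a calendar review, assuming an invented activity log and classification rules patterned on Houghton's (meetings and visits count as value-added; travel and unscheduled office hours do not):

```python
# Classify logged activities as value-added or not and total the hours,
# the way Houghton reviewed his own diary. The log below is invented.

VALUE_ADDED = {"corporate meeting", "customer visit", "unit visit", "training"}
NONVALUE_ADDED = {"travel", "unscheduled office hours"}

calendar = [  # (activity, hours)
    ("corporate meeting", 3.0), ("travel", 5.5), ("customer visit", 2.0),
    ("unscheduled office hours", 1.5), ("training", 4.0),
]

va = sum(hours for kind, hours in calendar if kind in VALUE_ADDED)
nva = sum(hours for kind, hours in calendar if kind in NONVALUE_ADDED)
print(f"value-added: {va}h of {va + nva}h worked ({va / (va + nva):.0%})")
```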
Next Houghton used the seven Quality Council criteria to categorize his 1989 work hours for
both total and value-added time worked. Comparing his value-added hours to the ideal, he
found a significant divergence in only two categories.
The two were strategy, planning, and customer focus, where he spent less time than the
model suggested, and "create appropriate organization environment and value system which
stimulates the morale and productivity of the work force and leadership," where he spent
more. In neither case was Houghton overly concerned with the contrast. His rationale for
devoting less time to strategy, planning, and customer focus was that, by 1989, Corning's
corporate and quality strategies were firmly in place. Between 1983 and 1986, when the
company was redefining its strategic direction and starting its TQM effort, Houghton
estimated he spent more than 20% of his time on these needs.
As for creating the appropriate environment and value system, Houghton claims that his high
score reflects personal style and a commitment to "beating the drums" of quality. He
observed: "This is where leadership is important, and I have no intention of stopping or
slowing down this part of my activities." Each year, Houghton visits 40 to 50 Corning
locations around the world. The visits focus on meeting with as many people as possible, and
the emphasis is almost exclusively on progress toward world-class quality goals.
Houghton then went a step further, identifying all quality-related activities. In 1989, he spent
492 hours on quality, divided among customer visits (11.6%), training (11.4%), speeches,
tapes, and interviews (20.7%), corporate meetings (27.2%), and unit visits (29.1%).
This exercise helped Houghton understand how to utilize his time better, so that he can devote
more of his energy to quality-related, value-added activities. His example sends a powerful
message to senior and middle managers that Corning's CEO is serious about changing his habits.
Key CEO Responsibilities
At the very best companies, leaders share two additional qualities: they have intimate
knowledge of how their company's work actually gets done (as one examiner explained,
they have developed "a sense of the reality of the organization") and they possess
impressive listening skills. The two are obviously related. A willingness to listen carefully to
employees and customers and, if necessary, to hear the bad news means that senior managers
will not become isolated or removed from critical feedback. Managers who jump several
levels below their direct reports to interact directly with the grass roots of the organization,
known as skip-level communication, almost always know what is really going on. The
technique eliminates many traditional biases that plague companies and leads to heightened
sensitivity and an almost intuitive understanding of how the organization works.
Category #3: Strategic Quality Planning.
Strategic quality plans are the glue holding together a company's quality effort. The plans
need not be elaborate, stand-alone documents; at the best companies, they are practically
indistinguishable from the business plan. Far more important than their form is their content;
and on that dimension, excellent quality plans have much in common. All are concrete,
focused, integrated, and aggressive.
As one examiner commented, "What we're looking for are two or three specific goals in a
one- to two-year period, and the fact that the company can tell us, explicitly and specifically,
what they're going to improve and why." Endless promises and laundry lists of objectives
win few points, as do hazy, ill-formed goals. It is far better to seek "a 20% improvement in
the reliability of our three major product lines" or "a 50% cut in telephone waiting times"
than it is to propose grandiose objectives, like becoming "the auto industry's supplier of
choice" or being recognized as "the world's best metal-working company." These last goals
lack focus and provide few guidelines; they are general enough to justify almost any
behavior.
The best quality plans provide alignment and integration. They incorporate the findings of
benchmarking visits, use customer data to drive goal-setting and improvement activities,
dovetail neatly with the business planning process, and provide an umbrella for a
constellation of quality initiatives and projects. At the winning companies, they also include
aggressive stretch goals: staggering rates of improvement (ten- or hundredfold reductions
in defects over a five- to ten-year period) that cannot be accomplished without massive
changes. They force companies to rethink the way they do business and ensure that
complacency is kept at bay.
Category #4: Human Resource Utilization.
The idea that companies should empower their employees and unleash the full potential of
the work force did not arise with the Baldrige Award. It has been a staple of the human
relations movement for years. But the award has provided a powerful boost, because it has
added precision and compliance tests.
To see if empowerment really exists, examiners look at the ability of frontline employees to
act in the interest of customers without getting prior approval. Can a saleswoman, on her own
authority, make a $5,000 adjustment for a customer, or is she limited to $10 or less? Can a
customer service representative deviate from established procedures if he feels it will help a
client, or must he clear it first with his boss? Within the factory, do shop floor employees
have access to a "stop the line" button that they can use to halt the assembly line if they
detect quality problems? Because these steps are so much more effective when they are
accompanied by a supportive environment, examiners probe further on supervisory practices.
What happens when things go wrong? Are employees punished, or do they receive coaching
and support? Is personal initiative valued or feared?
There are other, more tangible tests of empowerment and involvement. The number of
employee teams is one; another is the number of employee suggestions. In both cases,
effectiveness matters: the volume of ideas is less important than the percent that are
implemented. To encourage the cooperation of their employees, the best companies have
tight loops for responding quickly to proposals (within 24 hours or, at most, 48 hours) and
for translating them into action. As one judge observed, "Good companies tell you how
they collect employee suggestions. Great companies tell you how they use employee
suggestions."
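Sketched below are those two tests applied to an invented suggestion log; the 48-hour ceiling comes from the response loop just described, while the sample data are assumptions.

```python
# Two effectiveness tests for a suggestion system: the share of suggestions
# implemented, and the share answered inside the 24-to-48-hour loop.
# The log entries are invented for illustration.

suggestion_log = [  # (hours until response, implemented?)
    (20, True), (30, True), (26, False), (70, True), (18, True), (44, False),
]

implemented = sum(1 for _, done in suggestion_log if done)
on_time = sum(1 for hours, _ in suggestion_log if hours <= 48)

print(f"implementation rate: {implemented / len(suggestion_log):.0%}")    # 67%
print(f"responses within 48 hours: {on_time / len(suggestion_log):.0%}")  # 83%
```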
Because a well-trained employee is more likely to contribute than one who lacks essential
skills, examiners probe to find ample education and training. Quality training involves a
package of skills that includes increased awareness, problem-solving tools (statistics, data
analysis, customer-supplier relationships), group process skills (leading meetings, teamwork,
making presentations), and job-specific skills. All are necessary, and all must be deployed
widely. As quality programs mature, companies typically spend a larger percentage of their
revenues on education and training. They also tighten the links between training and
application, because the sooner lessons can be applied, the more likely they are to stick.
The best training programs couple skills training with real-time problem solving and
feedback. After employees have completed the programs, they are asked to reevaluate and
propose changes; these are incorporated in the next version. Follow-up systems are also in
place to ensure that training programs produce the desired results. If they do not, the
programs are retooled.
Monitoring of this sort is simply one type of bottom-up communication; others include
attitude surveys and informal meetings of managers and lower-level employees. Even at
successful companies, this type of communication is rare. One judge observed, "At some
level of excellence, the company can tell you how the troops find out what the leaders are
thinking. And they do that really precisely. But they fail to tell you how the troops tell the
leaders what [they] are thinking. That must be more difficult, because it's more likely to be
missing."
In the end, excellence in this category comes down to a simple test: the voice of the people.
On site visits, examiners practice "management by sitting around": they sit down to lunch
with a random group of employees and ask them about their work. How do they view their
jobs? Are they rewarded for taking initiative? Do they see themselves as hamstrung by others
("The home office tells us to do this, the home office tells us to do that")? Or are they
passionately involved ("This is the approach I plan to take, and here's why I think it will
benefit our customers")? Do employees covet quality awards, or are they viewed as a
management ploy? The answers to the questions are the bottom line on human resource
management because they capture the combined impact of training, communication, and
involvement programs. One examiner put it succinctly: "Empowerment is in the eyes of the
empowered. It's that simple."
Category #5: Quality Assurance of Products and Services.
Weaknesses in this category usually reflect poor process thinking. The idea that companies
have processes (new product development, for example, or billing) that combine activities
from different departments to produce a specific output is unfamiliar to many managers, who
are accustomed to thinking along functional lines. There, both the organization and
information flow are vertical. In a process, by contrast, both move horizontally.
The poorest performing companies have little understanding of their fundamental processes;
they have not mapped them using process flow diagrams, measured them quantitatively, or
controlled them through statistical methods. The better companies understand the processes
that are central to their business and may even have improved their performance. At a high-
tech company, a key process might be new product development; at a hotel, it might be guest
registration and departure. But they have little knowledge of their business and support
processes, such as billing, customer service, engineering changes, or patent applications, and
have made little effort to reduce their defects or cycle times. The best companies have tackled
these areas as aggressively as they have pursued improvements in their core processes.
Category #6: Quality Results.
The measures in this category must be objective, like number of defects or on-time delivery
rates. Examiners are looking for meaningful trends; they are not impressed by a single year
of stellar performance or improvements in areas of no strategic significance. Successful
companies report timely data, covering at least three years, show sustained improvements on
critical measures, achieve high levels of performance relative to competitors and best-in-class
benchmarks, and have proof that quality initiatives, not serendipity, drove their trends.
Winners go a step further: using statistical methods, they are able to correlate their objective
quality results with measures of customer satisfaction. Winners are able to say, "We knew
that our customers valued on-time delivery, but we weren't sure how much. Now we've
established that a 10% improvement in on-time delivery performance raises our customer
satisfaction scores two to three points." This ability, which allows companies to predict
changes in customer satisfaction from a small set of internal quality measures, is one of the
most powerful litmus tests for separating Baldrige winners from runners-up.
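A hedged sketch of what such a correlation exercise might look like: an ordinary least-squares fit of satisfaction scores against on-time delivery. The quarterly figures are invented so that the fitted slope echoes the two-to-three-point claim; a real company would use its own data and more careful statistics.

```python
# Fit satisfaction = intercept + slope * on_time_delivery by ordinary least
# squares, then read off what a 10-point delivery gain implies. Data invented.

def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

on_time_pct = [78, 82, 85, 88, 91, 95]               # on-time delivery, %
satisfaction = [70.1, 71.2, 71.8, 72.6, 73.4, 74.3]  # survey score, 0-100

slope, intercept = ols(on_time_pct, satisfaction)
print(f"a 10-point on-time delivery gain ~ {10 * slope:.1f} satisfaction points")
```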
Category #7: Customer Satisfaction.
This is the most heavily weighted Baldrige category, and the one that many examiners turn to
first. They are looking for evidence of customer understanding and commitment, as well as
impressive results. Companies must show that they possess customer information from a
wide range of sources (focus groups, surveys, one-on-one meetings, letters to the chairman,
sales visits, telephone hot lines) and that their measures are objective and validated, not
anecdotal. A common failing is to access only current customers, ignoring those that have
been lost or are still being pursued. Lost customers have a great deal to share about their
sources of dissatisfaction; a competitors customers can help pinpoint vulnerabilities. Both
contribute important nuances that current customers alone seldom provide. With this
information, companies can stratify their customers into groups; typically, the more groups,
the more refined the understanding. Xerox has divided its copier customers into six
categories (large- and small-customer major accounts, large and small named accounts,
general markets, and government/education) and has identified the purchase criteria in each
segment.
The system that a company has for collecting, monitoring, and responding to customer
complaints demonstrates its customer commitment. Surprisingly, a small number of
complaints is often cause for alarm: it suggests that the company treats complaints as a
source of bad news rather than as a form of customer feedback. At the best companies,
complaints are easy to register and are actively pursued. Nintendo, for example, has a
toll-free telephone hot line for assisting users of its video games; every caller is asked to
evaluate the company's products and offer suggestions, even if the call was for another
purpose. Excellent companies
then analyze the complaints they have collected, aggregate the data, circulate it internally,
and insist on remedial action. What is true of employee suggestions is true of customer
complaints: the best companies do not simply collect them, they use them to prevent future
problems.
Yet even the best complaint-handling systems are incomplete. Because they are confined to
customer dissatisfaction, they are able to tell companies little about the things that truly
please customers or make them happy. Companies repeatedly fail to make this distinction;
according to examiners, many are managing complaint systems without also holding
conversations with customers. To overcome this, managers must ask their customers what
they are looking for and observe them in action. There is no substitute for customer contact.
Winning companies set their sights still higher, aiming at customer delight. Their goal is to
exceed expectations and anticipate needs, even if customers have not yet articulated their
needs. The Sony Walkman and Black & Decker Dustbuster are examples of products that
meet this test. The required levels of understanding are daunting, but the payoff is huge. As a
judge observed, "You need to know things about customers that they don't know about
themselves." Xerox found that a "very satisfied" customer was six or seven times more likely
to repurchase its products than a merely satisfied customer. Its goal today is 100% "very
satisfied" customers.
The Baldrige Award is a demanding competition, with every company subject to the same
stringent tests. Points are awarded for originality, and there are only six possible winners a
year. One would expect these rules to produce clannishness and secrecy, as each company
pursues its own gains. In fact, the results have been the opposite: an outpouring of
cooperative behavior and a level of corporate sharing seldom seen in this country.
Business audiences have shifted from politely listening to speeches about quality to absorbing
them. Xerox talks to over 100,000 people a year, many of them customers and suppliers. All
come seeking information and advice. "We absolutely don't believe this would have
happened without the Baldrige Award," said one Baldrige examiner.
The award has created a common vocabulary and philosophy bridging companies and
industries. Managers now view learning across the boundary lines of business as both
possible and desirable. The abhorrence for anything "not invented here," once a source of
corporate uniqueness and pride, is being replaced by an unabashed zeal for borrowing ideas
and practices from others.
In many ways, this spirit of cooperation is the legacy of the Baldrige Award. Winners are
compelled by law to share their knowledge; that they have done so without suffering
competitively has led other companies to follow suit. Benchmarking is by definition a
cooperative activity, and it is an award requirement. Even warring factions of the quality
movement have united under the Baldrige banner. To become more competitive, American
companies have discovered cooperation.