2016 Industrial Engineering Technology and Operations Management
ADVISORY BOARD
Hans-Jörg Bullinger, Professor and Head, Fraunhofer IAO and Stuttgart University, Germany, Hans-Joerg.Bullinger@iao.fhg.de
John A. Buzacott, Professor, Schulich School of Business, York University, Toronto, Ontario, Canada, ibuzacot@bus.yorku.ca
Kenneth E. Case, Regents Professor, School of Industrial Engineering and Management, Oklahoma State University, Stillwater, Oklahoma, USA, kcase@okway.okstate.edu
Don B. Chaffin, The Johnson Professor of Industrial & Operations Engineering and Director, Ergonomics Center, The University of Michigan, Ann Arbor, Michigan, USA, dchaffin@engin.umich.edu
Johnson A. Edosomwan, Chairman and Senior Executive Consultant, Johnson & Johnson Associates, Inc., Fairfax, Virginia, USA, jedosomwan@jjaconsultants.com
Takao Enkawa, Professor, Department of Industrial Engineering, Tokyo Institute of Technology, Tokyo, Japan, enkawa@ie.me.titech.ac.jp
Shlomo Globerson, Professor, School of Business Administration, Tel Aviv University, Tel Aviv, Israel, globe@post.tau.ac.il
John J. Jarvis, Professor and Director, Dept. of Industrial & Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA, john.jarvis@isye.gatech.edu
C. Richard Liu, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, liuch@ecn.purdue.edu
Nicolas Marmaras, Assistant Professor, Department of Mechanical Engineering, Sector of Industrial Management and Operational Research, National Technical University of Athens, Zografou, Greece, marmaras@central.ntua.gr
Aura Castillo Matias, Associate Professor and Deputy Executive Director, National Engineering Center, University of the Philippines College of Engineering, Dept. of IE, Diliman, Quezon City, Philippines, matias@engg.upd.edu.ph
Barry M. Mundt, Principal, The Strategy Facilitation Group, Rowayton, Connecticut, USA, barry mundt@earthlink.net
Gerald Nadler, IBM Chair Emeritus in Engineering Management, Industrial & Systems Engineering Dept., University of Southern California, Los Angeles, California, USA, nadler@mizar.usc.edu
Deborah J. Nightingale, Senior Lecturer, Lean Aircraft Initiative, Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA, dnight@mit.edu
Shimon Y. Nof, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, nof@ecn.purdue.edu
Ahmet Fahri Özok, Professor & Head, Department of Industrial Engineering, Technical University of Istanbul, Istanbul, Turkey, isozok@tritu.bitnet
Juan R. Perez, Industrial Engineering Manager, Strategic Process Management Group, United Parcel Service, Atlanta, Georgia, USA, mla2jxp@is.ups.com
John V. Pilitsis, President and Chief Executive, Quantum Component LCC, Shrewsbury, Massachusetts, USA, stejvp@attme.att.com
John Powers, Executive Director, Institute of Industrial Engineers, Norcross, Georgia, USA
A. Ravi Ravindran, Professor and Head, Department of Industrial and Manufacturing Engineering, Pennsylvania State University, University Park, Pennsylvania, USA, aravi@psu.edu
Vinod Sahney, Senior Vice President, Planning and Strategic Development, Executive Offices, Henry Ford Health System, Detroit, Michigan, USA, vsahney@hfhs.org
Laurence C. Seifert, Senior Vice President, AT&T Wireless Services, American Telephone & Telegraph Corporation, Redmond, Washington, USA, larry.seifert@attws.com
Michael J. Smith, Professor, Department of Industrial Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA, mjsmith@engr.wisc.edu
James A. Tompkins, President, Tompkins Associates, Inc., Raleigh, North Carolina, USA, jtompkins@tompkinsinc.com
Mitchell M. Tseng, Professor and Head, Department of Industrial Engineering, Hong Kong University of Science and Technology, Hong Kong, tseng@usthk.ust.hk
Hans-Jürgen Warnecke, Professor and President, Fraunhofer Gesellschaft (Society), Leonrodstrasse, Munich, Germany, warnecke@zv.fhg.de
Cheng Wu, Professor and Director, National CIMS Engineering Research Center, Tsinghua University, Beijing, P.R. China, wuc@tsinghua.edu.cn
CONTRIBUTORS
Mary Elizabeth A. Algeo, Computer Scientist, National Institute of Standards and Technology, Gaithersburg, Maryland, USA, algeo@cme.nist.gov
Susan Archer, Director of Operations, Micro Analysis and Design, Inc., Pearl East Circle, Boulder, CO 80301, USA
Lajos Bálint, HUNGARNET Vice President, NIIFI Head of International Relations, Budapest, Hungary, h48bal@ella.hu
Robert M. Barker, Associate Professor, Computer Information Systems, University of Louisville, Louisville, Kentucky, USA, rmbarker@louisville.edu
Edward J. Barkmeyer, Computer Scientist, Manufacturing Systems Integration Division, National Institute of Standards and Technology, Gaithersburg, Maryland, USA, edward.barkmeyer@nist.gov
Carl N. Belack, Principal, Oak Associates, Inc., Maynard, Massachusetts, USA, cnb@oakinc.com
Yair Berson, Polytechnic University
Yavuz A. Bozer, Co-Director, Joel D. Tauber Manufacturing Institute, and Professor, Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, Michigan, yabozer@engin.umich.edu
James T. Brakefield, Professor, Department of Management, Western Illinois University, Macomb, Illinois, USA, J-Brakefield@wiu.edu
Martin Braun, Research Scientist, Competence Center Human Engineering, Fraunhofer Institute for Industrial Engineering, Stuttgart, Germany, martin.braun@iao.fhg.de
Ralf Breining, Scientist, Competence Center Virtual Reality, Fraunhofer Institute for Industrial Engineering, Stuttgart, Germany, ralf.breining@iao.fhg.de
Hans-Jörg Bullinger, Professor, Head and Director, Fraunhofer Institute of Industrial Engineering and IAT University Stuttgart, Stuttgart, Germany, Hans-Joerg.Bullinger@iao.fhg.de
Richard N. Burns, BCW Consulting Limited, Kingston, Ontario, Canada, burnsrn@attCANADA.net
John A. Buzacott, Professor, Schulich School of Business, York University, Toronto, Ontario, Canada, jbuzacot@bus.yorku.ca
Michael A. Campion, Professor, Krannert School of Management, Purdue University, West Lafayette, Indiana, USA, campionm@mgmt.purdue.edu
Pascale Carayon, Associate Professor, Department of Industrial Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA, carayon@ie.engr.wisc.edu
Tom M. Cavalier, Professor, The Harold and Inge Marcus Department of Industrial and Manufacturing Engineering, The Pennsylvania State University, University Park, Pennsylvania, USA, tmc7@psu.edu
Thomas Cebulla, Project Leader, Fraunhofer Institute of Industrial Engineering, Stuttgart, Germany
Jose A. Ceroni, Professor of Industrial Engineering, Dept. of Industrial Engineering, Catholic University of Valparaíso, Chile, jceroni@ucv.cl
Don B. Chaffin, The Lawton and Louise Johnson Professor, Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan, USA, dchaffin@engin.umich.edu
S. Chandrasekar, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, chandy@ecn.purdue.edu
Chung-Pu Chang, Professor, Eureka Consulting Company, USA
Tien-Chien Chang, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, tchang@ecn.purdue.edu
Chin I. Chiang, Institute of Traffic and Transportation, National Chiao Tung University, Taiwan, R.O. China
Soon-Yong Choi, Economist / Assistant Director, Center for Research in Electronic Commerce, Department of MSIS, University of Texas, Austin, Texas, USA
Tim Christiansen, Montana State University, txchris@montana.edu
Terry R. Collins, Assistant Professor, Industrial Engineering Department, University of Arkansas, Fayetteville, Arkansas, USA
Kevin Corker, Associate Professor, Computer Information and Systems Engineering Department, San Jose State University, San Jose, California, USA, kcorker@email.sjsu.edu
Tarsha Dargan, Professor, Department of Industrial Engineering, FAMU-FSU College of Engineering, Tallahassee, Florida, USA
Ko de Ruyter, Professor, Department of Marketing and Marketing Research, Faculty of Economics & Business, Maastricht University, Maastricht, The Netherlands, k.deruyter@mw.unimaas.nl
Xiao Deyun, Vice Director, Institute of Inspect and Electronic Technology, Department of Automation, Tsinghua University, Beijing, P.R. China, xiao@cims.tsinghua.edu.cn
Brian L. Dos Santos, Frazier Family Professor of Computer Information Systems, College of Business & Public Administration, University of Louisville, Louisville, Kentucky, USA, bldoss01@acm.org
Colin G. Drury, Professor, Department of Industrial Engineering, State University of New York at Buffalo, Buffalo, New York, USA, drury@buffalo.edu
Laura Raiman DuPont, Quality Engineering Consultant, League City, Texas, USA, lrdupont@aol.com
Taly Dvir, Lecturer, Faculty of Management, Tel Aviv University, Tel Aviv, Israel, talyd@post.tau.ac.il
Andrea Edler, Head, Technological Planning Systems Dept., Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany, andreas.edler@ipk.fhg.de
Johnson A. Edosomwan, Chairman and Senior Executive Consultant, Johnson & Johnson Associates, Inc., Fairfax, Virginia, USA, jedosomwan@jjaconsultants.com
John R. English, Professor, Department of Industrial Engineering, University of Arkansas, Fayetteville, Arkansas, USA, jre@engr.uark.edu
Takao Enkawa, Professor, Industrial Engineering & Management, Tokyo Institute of Technology, Tokyo, Japan, enkawa@ie.me.titech.ac.jp
Klaus-Peter Fähnrich, Head, FHG-IAO, Stuttgart University, Stuttgart, Germany, klaus-peter.faehnrich@iao.fhg.de
Richard A. Feinberg, Professor, Consumer Sciences & Retailing, Purdue University, West Lafayette, Indiana, feinberger@cfs.purdue.edu
Klaus Feldmann, Professor and Director, Institute for Manufacturing Automation and Production Systems, University of Erlangen-Nuremberg, Erlangen, Germany, feldmann@faps.uni-erlangen.de
Martin P. Finegan, Director, KPMG LLP, Montvale, New Jersey, USA, mfinegan@kpmg.com
Jeffrey H. Fischer, Director, UPS Professional Services, Atlanta, Georgia, USA, ner1jhf@ups.com
G. A. Fleischer, Professor Emeritus, Department of Industrial Engineering, University of Southern California, Los Angeles, California, USA, fleische@mizar.usc.edu
Donald Fusaro, Member of Technical Staff, Optoelectronics Quality, Lucent Technologies Bell Labs Innovations, Breinigsville, Pennsylvania, USA, fusaro@lucent.com
Alberto Garcia-Diaz, Professor, Department of Industrial Engineering, Texas A&M University, College Station, Texas, USA, agd@tamu.edu
Boaz Golany, Professor, Faculty of Industrial Engineering and Management, Technion-Israel Institute of Technology, Haifa, Israel, golany@ie.technion.ac.il
Juergen Goehringer, Scientific Assistant, Institute for Manufacturing Automation and Production Systems, Friedrich Alexander University Erlangen-Nuremberg, Erlangen, Germany, goehringer@faps.uni-erlangen.de
Frank Habermann, Senior Doctoral Researcher, Institute for Information Systems, Saarland University, Saarbruecken, Germany, Habermann@iwi.uni-sb.de
Michael Haischer, Fraunhofer Institute of Industrial Engineering, Stuttgart, Germany
John M. Hannon, Visiting Associate Professor, Jacobs Management Center, State University of New York-Buffalo, Buffalo, New York, USA, jmhannon@acsu.buffalo.edu
Joseph C. Hartman, Assistant Professor, Industrial and Manufacturing Systems Engineering, Lehigh University, Bethlehem, Pennsylvania, USA, jch6@lehigh.edu
On Hashida, Professor, Graduate School of Systems Management, University of Tsukuba, Tokyo, Japan, hashida@gssm.otsuka.tsukuba.ac.jp
Peter Heisig, Head, Competence Center Knowledge Management, Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany, Peter.Heisig@ipk.fhg.de
Markus Helmke, Project Leader, Institute for Machine Tools & Factory Management, Technical University Berlin, Berlin, Germany, Markus.Helmke@ipk.fhg.de
Klaus Herfurth, Professor, Technical University of Chemnitz, Germany
Ingo Hoffmann, Project Manager, Competence Center Knowledge Mgmt., Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany, Info.Hoffmann@ipk.fhg.de
Chuck Holland, Portfolio Project Manager, United Parcel Service, Atlanta, Georgia, USA
Clyde W. Holsapple, Rosenthal Endowed Chair in MIS, Gatton College of Business and Economics, University of Kentucky, Lexington, Kentucky, USA, cwhols@pop.uky.edu
Chin-Yin Huang, Assistant Professor, Department of Industrial Engineering, Tunghai University, Taiwan
Ananth V. Iyer, Professor, Krannert School of Management, Purdue University, West Lafayette, Indiana, USA, aiyer@mgmt.purdue.edu
Robert B. Jacko, Professor, School of Civil Engineering, Purdue University, West Lafayette, Indiana, USA, jacko@ecn.purdue.edu
Jianxin Jiao, Assistant Professor, School of Mechanical & Production Engineering, Nanyang Technological University, Singapore
Albert T. Jones, Operations Research Analyst, Manufacturing Systems Integration Division, National Institute of Standards and Technology, Gaithersburg, Maryland, USA, albert.jones@nist.gov
Swatantra K. Kachhal, Professor and Chair, Department of Industrial and Manufacturing Systems Engineering, University of Michigan-Dearborn, Dearborn, Michigan, USA, kachhal@umich.edu
Kailash C. Kapur, Professor, Industrial Engineering, The University of Washington, Seattle, Washington, USA, kkapur@u.washington.edu
Ben-Tzion Karsh, Research Scientist, Center for Quality and Productivity Improvement, University of Wisconsin-Madison, Madison, Wisconsin, USA, bkarsh@facstaff.wisc.edu
Waldemar Karwowski, Professor and Director, Center for Industrial Ergonomics, University of Louisville, Louisville, Kentucky, USA, karwowski@louisville.edu
Anton J. Kleywegt, Professor, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA, Anton.Kleywegt@isye.gatech.edu
Tom Kontogiannis, Professor, Department of Production Engineering and Management, Technical University of Crete, Greece
Stephan Konz, Professor Emeritus, Department of Industrial and Manufacturing Systems Engineering, Kansas State University, Manhattan, Kansas, USA, sk@ksu.edu
Timothy M. C. LaBreche, Senior Research Engineer, Environmental & Hydraulics Engineering Area, School of Civil Engineering, Purdue University, West Lafayette, Indiana, USA
Frank-Lothar Krause, Professor, Institute for Machine Tools & Factory Management, Technical University Berlin, Berlin, Germany, Frank-L.Krause@ipk.fhg.de
Douglas M. Lambert, Mason Professor of Transportation and Logistics, Fisher College of Business, The Ohio State University, and Prime F. Osborn III Eminent Scholar Chair in Transportation, University of North Florida, lambert@cob.ohio-state.edu
R. McFall Lamm, Jr., Chief Investment Strategist, Deutsche Bank, New York, New York, USA, mac.lamm@db.com
K. Ronald Laughery, Jr., President, Micro Analysis and Design, Inc., Boulder, Colorado, USA, rlaughery@maad.com
Yuan-Shin Lee, Professor, Department of Industrial Engineering, North Carolina State University, Raleigh, North Carolina, USA
Mark R. Lehto, Associate Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, lehto@ecn.purdue.edu
Jens Leyh, Project Leader, Fraunhofer Institute of Industrial Engineering, Nobelstrasse 12, Stuttgart, Germany
C. Richard Liu, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, liuch@ecn.purdue.edu
Raymond P. Lutz, Professor, School of Management, The University of Texas at Dallas, Richardson, Texas, USA
Ann Majchrzak, Professor, Information Systems, Dept. of Information and Operations Management, Marshall School of Business, University of Southern California, University Park, Los Angeles, California, USA, majchrza@bus.usc.edu
Chryssi Malandraki, Lead Research Analyst, United Parcel Service, Atlanta, Georgia, USA
Tamas Maray, Associate Professor, Technical University of Budapest, Muegyetem Rakpart 13, Budapest H-1111, Hungary, maray@fsz.bme.hu
Nicolas Marmaras, Assistant Professor, Department of Mechanical Engineering, Sector of Industrial Management and Operational Research, National Technical University of Athens, Zografou, Greece, marmaras@central.ntua.gr
Frank O. Marrs, President / CEO, Risk Management Partners, Inc., USA, fmarrs@wpe.com
Roy Marsten, Cutting Edge Optimization, USA
Aura Castillo Matias, Associate Professor and Deputy Executive Director, National Engineering Center, University of the Philippines College of Engineering, Dept. of IE, Diliman, Quezon City, Philippines, matias@engg.upd.edu.ph
Gina J. Medsker, Senior Staff Scientist, Human Resources Research Organization, Alexandria, Virginia, USA, gmedsker@humrro.org
Emmanuel Melachrinoudis, Associate Professor, Department of Mechanical, Industrial and Manufacturing Engineering, Northeastern University, Boston, Massachusetts, USA, emelas@coe.neu.edu
Kai Mertins, Division Director, Corporate Management, Fraunhofer Institute for Production Systems and Design Technology, Berlin, Germany, Kai.Mertins@ipk.fhg.de
Najmedin Meshkati, Associate Professor, Institute of Safety and Systems Management, University of Southern California, Los Angeles, California, USA, meshkati@usc.edu
George T. Milkovich, Catherwood Professor of Human Resource Studies, Center for Advanced Human Resource Studies, Cornell University, Ithaca, New York, USA, gtml@cornell.edu
Hokey Min, Executive Director, Logistics and Distribution Institute, University of Louisville, Louisville, Kentucky, USA, h0min001@gwise.louisville.edu
Kent B. Monroe, Professor, Department of Business Administration, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA, k.monroe1@home.com
Barry M. Mundt, Principal, The Strategy Facilitation Group, Rowayton, Connecticut, USA, barry mundt@earthlink.net
Kenneth Musselman, Senior Business Consultant, Frontstep, Inc., West Lafayette, Indiana, USA
Barry L. Nelson, Professor, Department of Industrial Engineering and Management Sciences, Northwestern University, Evanston, Illinois, USA, nelsonb@iems.nwu.edu
Douglas C. Nelson, Professor, Department of Hospitality and Tourism Management, Purdue University, West Lafayette, Indiana, USA, nelsond@cfs.purdue.edu
Jens-Günter Neugebauer, Director, Automation, Fraunhofer Institute for Manufacturing Engineering and Automation, Stuttgart, Germany, jen@ipa.fhg.de
Reimund Neugebauer, Professor / Managing Director, Fraunhofer Institute for Machine Tools and Forming Technology, Chemnitz, Germany, neugebauer@iwu.fhg.de
Jerry M. Newman, Professor, School of Management, State University of New York-Buffalo, Buffalo, New York, USA, jmnewman@acsu.buffalo.edu
Abe Nisanci, Professor and Director, Research and Sponsored Programs, Industrial and Manufacturing Engineering and Technology, Bradley University, Peoria, Illinois, USA, ibo@bradley.edu
Shimon Y. Nof, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, nof@ecn.purdue.edu
Colm A. O'Cinneide, Associate Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, colm@ecn.purdue.edu
Phillip F. Ostwald, Professor Emeritus, Mechanical and Industrial Engineering, University of Colorado at Boulder, Boulder, Colorado, USA, philip.ostwald@colorado.edu
Raja M. Parvez, Vice President of Manufacturing and Quality, Fitel Technologies, Perryville Corporate Park, Perryville, Illinois, USA, rparvez@fiteltech.com
Richard B. Pearlstein, Director, Training Performance Improvement, American Red Cross, Charles Drew Biomedical Institute, Arlington, Virginia, USA, pearlstr@usa.redcross.org
Juan R. Perez, Industrial Engineering Manager, Strategic Process Management Group, United Parcel Service, Atlanta, Georgia, USA, mla2jxp@is.ups.com
Ralph W. "Pete" Peters, Principal, Tompkins Associates Inc., Raleigh, North Carolina, USA, ppeters@tompkinsinc.com
Don T. Phillips, Professor, Department of Industrial Engineering, Texas A&M University, College Station, Texas, USA, drdon@tamu.edu
Michael Pinedo, Professor, Department of Operations Management, Stern School of Business, New York University, New York, New York, USA, mpinedo@stern.nyu.edu
David F. Poirier, Executive Vice President & CIO, Hudson's Bay Company, Toronto, Ontario, Canada
Lloyd Provost, Improvement Advisor, Associates In Process Improvement, Austin, Texas, USA, provost@fc.net
Ronald L. Rardin, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, Rardin@ecn.purdue.edu
Ulrich Raschke, Program Manager, Human Simulation Technology, Engineering Animation, Inc., Ann Arbor, Michigan, USA, ulrich@eai.com
A. Ravi Ravindran, Professor and Head, Department of Industrial and Manufacturing Engineering, Pennsylvania State University, University Park, Pennsylvania, USA, aravi@psu.edu
David Rodrick, University of Louisville, Louisville, Kentucky, USA
James R. Ross, Resource Management Systems, Inc., USA
William B. Rouse, Chief Executive Officer, Enterprise Support Systems, Norcross, Georgia, USA, brouse@ess-advisors.com
Andrew P. Sage, Founding Dean Emeritus, University and First American Bank Professor, School of Information Technology and Engineering, Department of Systems Engineering and Operations Research, George Mason University, Fairfax, Virginia, USA, asage@gmu.edu
François Sainfort, Professor, Industrial and Systems Engineering, Georgia Institute of Technology, 765 Ferst Drive, Atlanta, Georgia, USA, sainfort@isye.gatech.edu
Hiroyuki Sakata, NTT Data Corporation, Tokyo, Japan, sakata@rd.nttdata.co.jp
Gavriel Salvendy, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, salvendy@ecn.purdue.edu
August-Wilhelm Scheer, Director, Institute of Information Systems, Saarland University, Saarbruecken, Germany, scheer@iwi.uni-sb.de
Stefan Schmid, Professor, Department Assembly Systems, Fraunhofer Institute for Manufacturing Engineering and Automation, Stuttgart, Germany, sas@ipa.fhg.de
Rolf Dieter Schraft, Director, Automation, Fraunhofer Institute for Manufacturing Engineering and Automation, Stuttgart, Germany, rds@ipa.fhg.de
Lisa M. Schutte, Principal Scientist, Human Simulation Research and Development, Engineering Animation Inc., Ann Arbor, Michigan, USA, 1schutte@eai.com
Shane J. Schvaneveldt, Fulbright Scholar and Visiting Researcher, Tokyo Institute of Technology, Tokyo, Japan, and Associate Professor of Management, Goddard School of Business and Economics, Weber State University, Ogden, Utah, USA, schvaneveldt@weber.edu
Robert E. Schwab, Engineering Manager, Chemical Products, Caterpillar Inc., Mossville, Illinois, USA, schwab robert e@cat.com
Sridhar Seshadri, Professor, Department of Operations Management, Stern School of Business, New York University, New York, New York, USA
J. George Shanthikumar, Professor, Haas School of Business, University of California at Berkeley, Berkeley, California, USA, shanthik@haas.berkeley.edu
Alexander Shapiro, Professor, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA, ashapiro@isye.gatech.edu
Gunter P. Sharp, Professor, Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA, gsharp@isye.gatech.edu
Avraham Shtub, Professor, Industrial Engineering and Management, Technion-Israel Institute of Technology, Haifa, Israel, shtub@ie.technion.ac.il
Edward A. Siecienski, Executive Director, JMA Supply Chain Management, Hamilton, New Jersey, USA, eas.999@worldnet.att.net
Wilfried Sihn, Director, Corporate Management, Fraunhofer Institute for Manufacturing Engineering and Automation, Stuttgart, Germany, whs@ipa.fhg.de
D. Scott Sink, President, World Confederation of Productivity Science, Moneta, Virginia, USA, dssink@avt.edu
David Simchi-Levi, Professor, Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Rm. 1171, Cambridge, Massachusetts, USA, dslevi@mit.edu
Edith Simchi-Levi, Vice President Operations, LogicTools Inc., 71 Meriam Street, Lexington, MA 02420, USA, (781) 861-3777, (312) 803-0448 (fax), edith@logic-tools.com
Douglas K. Smith, Author and Consultant, La Grangeville, New York, USA, dekaysmith@aol.com
Francis J. Smith, Principal, Francis J. Smith Management Consultants, Longmeadow, MA 01106, USA, FSmith1270@aol.com
Keith V. Smith, Professor, Krannert School of Management, Purdue University, West Lafayette, IN 47907-1310, USA, kvsmith@mgmt.purdue.edu
George L. Smith, Professor Emeritus, The Ohio State University, Columbus, Ohio, USA
Jerry D. Smith, Executive Vice President, Tompkins Associates, Inc., Raleigh, North Carolina, USA, jsmith@tompkinsinc.com
Michael J. Smith, Professor, Department of Industrial Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA, mjsmith@engr.wisc.edu
Kay M. Stanney, Associate Professor, Industrial Engineering and Mgmt. Systems, University of Central Florida, Orlando, Florida, USA, stanney@mail.ucf.edu
Julie Ann Stuart, Assistant Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, stuart@ecn.purdue.edu
Robert W. Swezey, President, InterScience America, Inc., Leesburg, Virginia, USA, isai@erols.com
Alvaro D. Taveira, Professor, University of Wisconsin-Whitewater, Whitewater, Wisconsin, USA
John Taylor, Coordinator, Research Programs & Service, Department of Industrial Engineering, FAMU-FSU College of Engineering, Tallahassee, Florida, USA, jotaylor@eng.fsu.edu
Oliver Thomas, Senior Doctoral Researcher, Institute for Information Systems, Saarland University, Saarbruecken, Germany, Thomas@iwi.uni-sb.de
James A. Tompkins, President, Tompkins Associates, Inc., Raleigh, North Carolina, USA, jtompkins@tompkinsinc.com
Mitchell M. Tseng, Professor and Head, Department of Industrial Engineering, Hong Kong University of Science and Technology, Hong Kong, tseng@usthk.ust.hk
Gwo Hshiung Tzeng, Professor, College of Management, National Chiao Tung University, Taiwan, R.O. China
Reha Uzsoy, Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, uzsoy@ecn.purdue.edu
Ralf von Briel, Project Leader, Fraunhofer Institute for Manufacturing Engineering and Automation, Stuttgart, Germany
Harrison M. Wadsworth Jr., Professor, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA, harrison.wadsworth@isye.gatech.edu
William P. Wagner, Professor, Decision and Information Technologies, College of Commerce and Finance, Villanova University, Villanova, Pennsylvania, USA
Evan Wallace, Electronics Engineer, Manufacturing Systems Integration Division, National Institute of Standards and Technology, Gaithersburg, Maryland, USA
Ben Wang, DOE Massie Chair and Professor, Department of Industrial Engineering, FAMU-FSU College of Engineering, Tallahassee, Florida, USA, indwang1@engr.fsu.edu
H. Samuel Wang, Provost, Chung Yuan Christian University, Taiwan, R.O. China
Hans-Jürgen Warnecke, Professor and President, Fraunhofer Gesellschaft (Society), Leonrodstrasse, Munich, Germany, warnecke@zv.fhg.de
Joachim Warschat, Professor, Fraunhofer Institute of Industrial Engineering, Nobelstrasse 12, Stuttgart, Germany
Martin Wetzels, Assistant Professor, Maastricht Academic Center for Research in Services, Faculty of Economics & Business Administration, Maastricht University, Maastricht, The Netherlands, m.wetzel@mw.unimaas.nl
Andrew B. Whinston, Hugh Cullen Chair Professor of Information Systems, Economics and Computer Science, and Director, Center for Research in Electronic Commerce, University of Texas Graduate School of Business, Austin, Texas, USA, abw@uts.cc.utexas.edu
Richard T. Wong, Senior Systems Engineer, Telcordia Technologies, Inc., Piscataway, New Jersey, USA, rwong1@telcordia.com
Andrew L. Wright, Assistant Professor, College of Business & Public Administration, University of Louisville, Louisville, Kentucky, USA, andrew.wright@louisville.edu
Cheng Wu, Professor and Director, National CIMS Engineering Research Center, Tsinghua University, Beijing, P.R. China, wuc@tsinghua.edu.cn
Xiaoping Yang, Doctoral Student, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, xiaoping@ecn.purdue.edu
David D. Yao, Professor, Industrial Engineering and Operations Research, Columbia University, New York, New York, USA, yao@ieor.columbia.edu
Yuehwern Yih, Associate Professor, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA, yih@ecn.purdue.edu
Po Lung Yu, Carl A. Scupin Distinguished Professor, School of Business, University of Kansas, Lawrence, Kansas, USA, pyu@bschool.wpo.ukans.edu
Fan YuShun, Professor and Vice Director, Institute of System Integration, Department of Automation, Tsinghua University, Beijing, P.R. China, fan@cims.tsinghua.edu.cn
David Zaret, Lead Research Analyst, United Parcel Service, Atlanta, Georgia, USA
FOREWORD
Many people speculated about what industrial engineering might look like in the 21st century, and now here we are. It is exciting to see how the profession's definition of the work center has broadened to embrace the information age and the global economy. Industrial engineers, with their ever-expanding toolbox, have a greater opportunity to help corporations be successful than ever before. But while these changes and opportunities are exciting, they presented a challenge to the editor of this handbook. I met with Gavriel Salvendy as he began working to integrate all the thoughtful input he had received from his advisory committee, so I had a firsthand opportunity to observe the energy and creativity required to develop this content-rich edition. This handbook is truly an accomplishment, fully supporting industrial engineers engaged in traditional as well as new facets of the profession.

This edition has stepped up to this multidimensional challenge. The Technology section updates the previous edition, with coverage of topics such as decision support systems, but also delves into information-age topics such as electronic commerce. Look closer and you'll see that attention has been given to the growing services sector, including chapters covering specific application areas. "Enterprise" has become a popular way to describe the total system and the broad organizational scope of problem-solving initiatives. I am pleased to see this addressed specifically through topics such as enterprise modeling and enterprise resource planning, as well as globally, as in the Management, Planning, Design, and Control section. This edition truly recognizes that IEs can and do contribute in every phase of the total product life cycle.

The mission of the Institute of Industrial Engineers is to support the professional growth of practicing industrial engineers. This third edition is an all-encompassing Handbook for the IE professional that rises to the task. Students, engineers of all types, and managers will find this a useful and insightful reference.

JOHN POWERS
Executive Director
Institute of Industrial Engineers
PREFACE
Industrial Engineering has evolved as a major engineering and management discipline, the effective utilization of which has contributed to our increased standard of living through increased productivity, quality of work and quality of services, and improvements in the working environments. The Handbook of Industrial Engineering provides timely and useful methodologies for achieving increased productivity and quality, competitiveness, and globalization of business, and for increasing the quality of working life in manufacturing and service industries. This Handbook should be of value to all industrial engineers and managers, whether they are in profit-motivated operations or in other nonprofit fields of activity.

The first edition of the Handbook of Industrial Engineering was published in 1982. It has been translated into Japanese and published by the Japan Management Association; translated into Spanish and published by Editorial Lemusa; published in a special edition in Taiwan by Southeast Book Company; translated into Chinese and published by Mechanical Industry Publisher; and adopted by the Macmillan book club. The Handbook was selected by the Association of American Publishers as the Best Professional and Scholarly Publication in 1982 and was widely distributed by the Institute of Industrial Engineers (IIE).

The Foreword of the first edition of the Handbook was written by Donald C. Burnham, retired Chairman of Westinghouse Electric Corporation. In this Foreword Burnham wrote, "The Industrial Engineering principles that are outlined are timeless and basic and should prove useful to corporations, both large and small, both continuous process as well as discrete part manufacturing, and especially to those working in the service industries where most of the jobs are today." The second edition of the Handbook of Industrial Engineering maintained this thrust. In the Foreword to the second edition, the former president of NEC Corporation wrote, "The Second Edition of the Handbook of Industrial Engineering will serve as an extremely powerful tool for both industrial engineers and managers."

The many contributing authors came through magnificently. I thank them all most sincerely for agreeing so willingly to create this Handbook with me. Each submitted chapter was carefully reviewed by experts in the field and myself. Much of the reviewing was done by the Advisory Board. In addition, the following individuals have kindly contributed to the review process: Stephan A. Konz, K. Ronald Laughery, Jack Posey, William B. Rouse, Kay M. Stanney, Mark Spearman, and Arnold L. Sweet.

For the third edition of this Handbook, 97 of the 102 chapters were completely revised; new sections were added in project management (3 chapters) and supply-chain management and logistics (7 chapters); and the number of chapters in
xxvii
xxviii
PREFACE
service systems increased from 2 to 11 chapters. The 102 chapters of this third
edition of the handbook were authored by 176 professionals with diverse training
and professional af liations from around the world. The Handbook consists of 6441
manuscript pages, 922 gures, 388 tables, and 4139 references that are cited for
further in-depth coverage of all aspects of industrial engineering. The editing
of the third edition of the Handbook was made possible through the brilliant, mo
st able, and diligent work of Kim Gilbert, my administrative assistant, who so e
ffectively coordinated and managed all aspects of the Handbook preparation. My s
incere thanks and appreciation go to her. It was a true pleasure working on this
project with Bob Argentieri, the John Wiley senior editor, who is the very best
there is and was a truly outstanding facilitator and editor for this Handbook.
GAVRIEL SALVENDY West Lafayette, Indiana September 2000
CONTENTS
I. Industrial Engineering Function and Skills
1. Full Potential Utilization of Industrial and Systems Engineering in Organizations, by D. Scott Sink, David F. Poirier, and George L. Smith
2. Enterprise Concept: Business Modeling Analysis and Design, by Frank O. Marrs and Barry M. Mundt
II. Technology
A. Information Technology
3. Tools for Building Information Systems, by Robert M. Barker, Brian L. Dos Santos, Clyde W. Holsapple, William P. Wagner, and Andrew L. Wright
4. Decision Support Systems, by Andrew P. Sage
5. Automation Technology, by Chin-Yin Huang and Shimon Y. Nof
6. Computer Integrated Technologies and Knowledge Management, by Frank-Lothar Krause, Kai Mertins, Andreas Edler, Peter Heisig, Ingo Hoffmann, and Markus Helmke
7. Computer Networking, by Lajos Bálint and Tamás Máray
8. Electronic Commerce, by Soon-Yong Choi and Andrew B. Whinston
9. Enterprise Modeling, by August-Wilhelm Scheer, Frank Habermann, and Oliver Thomas
B. Manufacturing and Production Systems
10. The Factory of the Future: New Structures and Methods to Enable Transformable Production, by Hans-Jürgen Warnecke, Wilfried Sihn, and Ralf von Briel
11. Enterprise Resource Planning Systems in Manufacturing, by Mary Elizabeth A. Algeo and Edward J. Barkmeyer
12. Automation and Robotics, by Rolf Dieter Schraft, Jens-Günter Neugebauer, and Stefan Schmid
13. Assembly Process, by K. Feldmann
14. Manufacturing Process Planning and Design, by Tien-Chien Chang and Yuan-Shin Lee
15. Computer Integrated Manufacturing, by Cheng Wu, Fan YuShun, and Xiao Deyun
16. Clean Manufacturing, by Julie Ann Stuart
17. Just-in-Time, Lean Production, and Complementary Paradigms, by Takao Enkawa and Shane J. Schvaneveldt
18. Near-Net-Shape Processes, by Reimund Neugebauer and Klaus Herfurth
19. Environmental Engineering: Regulation and Compliance, by Robert B. Jacko and Timothy M. C. LaBreche
20. Collaborative Manufacturing, by Jose A. Ceroni and Shimon Y. Nof
C. Service Systems
21. Service Industry Systems and Service Quality, by Martin Wetzels and Ko de Ruyter
22. Assessment and Design of Service Systems, by Michael Haischer, Hans-Jörg Bullinger, and Klaus-Peter Fähnrich
23. Customer Service and Service Quality, by Richard A. Feinberg
24. Pricing and Sales Promotion, by Kent B. Monroe
25. Mass Customization, by Mitchell M. Tseng and Jianxin Jiao
26. Client/Server Technology, by On Hashida and Hiroyuki Sakata
27. Industrial Engineering Applications in Health Care Systems, by Swatantra K. Kachhal
28. Industrial Engineering Applications in Financial Asset Management, by R. McFall Lamm, Jr.
29. Industrial Engineering Applications in Retailing, by Richard A. Feinberg and Tim Christiansen
30. Industrial Engineering Applications in Transportation, by Chryssi Malandraki, David Zaret, Juan R. Perez, and Chuck Holland
31. Industrial Engineering Applications in Hotels and Restaurants, by Douglas C. Nelson
III. Performance Improvement Management
A. Organization and Work Design
32. Leadership, Motivation, and Strategic Human Resource Management, by Taly Dvir and Yair Berson
33. Job and Team Design, by Gina J. Medsker and Michael A. Campion
34. Job Evaluation in Organizations, by John M. Hannon, Jerry M. Newman, George T. Milkovich, and James T. Brakefield
35. Selection, Training, and Development of Personnel, by Robert W. Swezey and Richard B. Pearlstein
36. Aligning Technological and Organizational Change, by Ann Majchrzak and Najmedin Meshkati
37. Teams and Team Management and Leadership, by François Sainfort, Alvaro D. Taveira, and Michael J. Smith
38. Performance Management, by Martin P. Finegan and Douglas K. Smith
B. Human Factors and Ergonomics
39. Cognitive Tasks, by Nicolas Marmaras and Tom Kontogiannis
40. Physical Tasks: Analysis, Design, and Operation, by Waldemar Karwowski and David Rodrick
41. Ergonomics in Digital Environments, by Ulrich Raschke, Lisa M. Schutte, and Don B. Chaffin
42. Human Factors Audit, by Colin G. Drury
43. Design for Occupational Health and Safety, by Michael J. Smith, Pascale Carayon, and Ben-Tzion Karsh
44. Human-Computer Interaction, by Kay M. Stanney, Michael J. Smith, Pascale Carayon, and Gavriel Salvendy
IV. Management, Planning, Design, and Control
A. Project Management
45. Project Management Cycle: Process Used to Manage Project (Steps to Go Through), by Avraham Shtub
46. Computer-Aided Project Management, by Carl N. Belack
47. Work Breakdown Structure, by Boaz Golany and Avraham Shtub
B. Product Planning
48. Planning and Integration of Product Development, by Hans-Jörg Bullinger, Joachim Warschat, Jens Leyh, and Thomas Cebulla
49. Human-Centered Product Planning and Design, by William B. Rouse
50. Design for Manufacturing, by C. Richard Liu and Xiaoping Yang
51. Managing Professional Services Projects, by Barry M. Mundt and Francis J. Smith
C. Manpower Resource Planning
52. Methods Engineering, by Stephan Konz
53. Time Standards, by Stephan Konz
54. Work Measurement: Principles and Techniques, by Aura Castillo Matias
D. Systems and Facilities Design
55. Facilities Size, Location, and Layout, by James A. Tompkins
56. Material-Handling Systems, by Yavuz A. Bozer
57. Storage and Warehousing, by Jerry D. Smith
58. Plant and Facilities Engineering with Waste and Energy Management, by James R. Ross
59. Maintenance Management and Control, by Ralph W. "Pete" Peters
E. Planning and Control
60. Queuing Models of Manufacturing and Service Systems, by John A. Buzacott and J. George Shanthikumar
61. Production-Inventory Systems, by David D. Yao
62. Process Design and Reengineering, by John Taylor, Tarsha Dargan, and Ben Wang
63. Scheduling and Dispatching, by Michael Pinedo and Sridhar Seshadri
64. Personnel Scheduling, by Richard N. Burns
65. Monitoring and Controlling Operations, by Albert T. Jones, Yuehwern Yih, and Evan Wallace
F. Quality
66. Total Quality Leadership, by Johnson A. Edosomwan
67. Quality Tools for Learning and Improvement, by Lloyd Provost
68. Understanding Variation, by Lloyd Provost
69. Statistical Process Control, by John R. English and Terry R. Collins
70. Measurement Assurance, by S. Chandrasekar
71. Human Factors and Automation in Test and Inspection, by Colin G. Drury
72. Reliability and Maintainability, by Kailash C. Kapur
73. Service Quality, by Laura Raiman DuPont
74. Standardization, Certification, and Stretch Criteria, by Harrison M. Wadsworth, Jr.
75. Design and Process Platform Characterization Methodology, by Raja M. Parvez and Donald Fusaro
G. Supply Chain Management and Logistics
76. Logistics Systems Modeling, by David Simchi-Levi and Edith Simchi-Levi
77. Demand Forecasting and Planning, by Ananth V. Iyer
78. Advanced Planning and Scheduling for Manufacturing, by Kenneth Musselman and Reha Uzsoy
79. Transportation Management and Shipment Planning, by Jeffrey H. Fischer
80. Restructuring a Warehouse Network: Strategies and Models, by Hokey Min and Emanuel Melachrinoudis
81. Warehouse Management, by Gunter P. Sharp
82. Supply Chain Planning and Management, by Douglas M. Lambert and Edward A. Siecienski
V. Methods for Decision Making
A. Probabilistic Models and Statistics
83. Stochastic Modeling, by Colm A. O'Cinneide
84. Decision-Making Models, by Mark R. Lehto
85. Design of Experiments, by H. Samuel Wang and Chung-Pu Chang
86. Statistical Inference and Hypothesis Testing, by Don T. Phillips and Alberto Garcia-Diaz
87. Regression and Correlation, by Raja M. Parvez and Donald Fusaro
B. Economic Evaluation
88. Product Cost Analysis and Estimating, by Phillip F. Ostwald
89. Activity-Based Management in Manufacturing, by Keith V. Smith
90. Discounted Cash Flow Methods, by Raymond P. Lutz
91. Economic Risk Analysis, by G. A. Fleischer
92. Inflation and Price Change in Economic Analysis, by Joseph C. Hartman
C. Computer Simulation
93. Modeling Human Performance in Complex Systems, by K. Ronald Laughery, Jr., Susan Archer, and Kevin Corker
94. Simulation Packages, by Abe Nisanci and Robert E. Schwab
95. Statistical Analysis of Simulation Results, by Barry L. Nelson
96. Virtual Reality for Industrial Engineering: Applications for Immersive Virtual Environments, by Hans-Jörg Bullinger, Ralf Breining, and Martin Braun
D. Optimization
97. Linear Optimization, by A. Ravi Ravindran and Roy Marsten
98. Nonlinear Optimization, by Tom M. Cavalier
99. Network Optimization, by Richard T. Wong
100. Discrete Optimization, by Ronald L. Rardin
101. Multicriteria Optimization, by Po Lung Yu, Chin I. Chiang, and Gwo Hshiung Tzeng
102. Stochastic Optimization, by Anton J. Kleywegt and Alexander Shapiro
Author Index
Subject Index
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
INDUSTRIAL ENGINEERING FUNCTION AND SKILLS
1. OVERVIEW
The theme of this chapter is achieving "full potential." We explore how industrial and systems engineering and engineers (ISEs) can achieve full potential and how the ISE function and individual ISEs can assist their organizations in the achievement of full potential. Our fundamental premise is that organizations that desire to achieve full potential can enhance their success by more fully utilizing the potential of their ISEs. This will require holding a different definition for ISE, organizing the function differently, and having a different expectation regarding the ISE value proposition. The practicing ISE will also need to envision the role(s) he or she can play in large-scale transformation. This possibility has implications for the way ISE is organized and positioned in organizations, for higher education, and for the individual ISE.
1.1. Full Potential Introduced
Have you ever experienced being a "10"? Perhaps you struck a perfect golf shot, had a great day when everything went perfectly, or flawlessly executed a project. We're thinking of something that turned out even better than you expected, when you were "in flow," creating the optimal experience and optimal results (Csikszentmihalyi 1990). Full potential is about realizing personal and organizational possibilities. It's about getting into flow and staying there. It's about creating optimal results from the organizational system.
1.2. Structure of the Chapter
In its simplest form, an organization might be modeled as a complex collection of actions that drive particular results. These actions take place in a context or environment that mediates or moderates the results. Figure 1 illustrates such a model. This model will be our organizing frame for the chapter. We will discuss the role of ISE in corporate transformation, in the achievement of organizational full potential performance. As you see in Figure 1, there are three roles that the ISE can play in this model:
1. Strategy and positioning (e.g., what is the value proposition? Are we doing the right things? What is the strategic planning process, who is involved, what is the system, how do we ensure it works?)
2. Conditions for success (what are the conditions surrounding the actions that drive results? Is the environment right to support success?)
3. Drivers, or operations improvement (are we doing the right things right? How are we doing things?)
We will discuss these roles in that order in the chapter. Note that the third role has been the traditional focus of ISE; we will suggest an expansion of the ISE domain. Rather than cover what is well covered in the rest of this handbook in the three roles for ISE in the future state organization, we will highlight, in each of the three roles, work that we believe ISEs will migrate to in achieving full potential.
1.3. ISE Domain Defined
As James Thompson points out, a domain can be defined by the technologies employed, the
Figure 2 An Academic Portrayal of Part of the Domain Definition for ISE (ISE shown in relation to Human Factors Engineering, Accounting, Economics, Management Systems, Psychology, and Organizational Behavior).
with basic knowledge areas and/or application areas such as statistics, psychology, mathematics, information sciences, accounting, and economics. While this model is useful for portraying ISE from a curricular perspective, it is much less useful from an application perspective. Once the ISE begins reduction of theory to practice, the academic distinctions rapidly disappear. The ISE typically migrates to a setting that is defined by business processes rather than subdiscipline. The fledgling ISE is thrown into a system of people and capital that survives and thrives by continuing to enhance customer loyalty while at the same time reducing costs and improving efficiency. Fortunately, the ISE value proposition is so robust that enterprising ISEs find that they can contribute at any point and any level in the enterprise system. ISEs at work are more accurately portrayed by the potential value contribution or offerings they bring to the enterprise as it migrates to full potential and future state. ISEs at work will increasingly find that they must transcend and include their academic training in order to contribute meaningfully to the quest for full potential. Figure 3 is an example of such a portrayal.
The specific contribution that ISEs make, the true value contribution, is the focus of this model. One example might be creating more effective measurement systems that lead to better information about the connection between improvement interventions and customer behaviors. Others might include optimized supply chain systems or increased safety and reduced lost-time injuries. What is important in these examples is their impact on business results rather than their particular tools, techniques, or disciplinary focus. It is the cause-and-effect relationship between the value proposition and the business result that is the emerging emphasis. The disciplinary tool, knowledge, or technique becomes valuable when it is applied and we see its instrumentality for achieving positive business results. The ISE value proposition isn't only knowledge; it is the ability to reduce that knowledge to practice in such a way that it produces positive business results. In the coming decades, ISE practice is going to be very, very focused on creating results that move ISEs and their organizations toward full potential.
1.6. Integrating Work in Strategy and Policy, Conditions for Success, and Operations Improvement Leads to Full Potential
Our point of view is that integrating work done to achieve strategy and positioning, the management of conditions for success, and operations improvement creates full potential performance (again, see Figure 1). The role of ISE needs to expand to include more involvement with this integration. ISE work in operations improvement, at minimum, needs to be seen in the context of the other two roles. At the extreme, ISE needs to be active in strategy and positioning and conditions-for-success work. And it is value creation for the organization that is the end in the coming decades. A profession that
Figure 3 ISE Value Proposition: A Real-World Perspective (offerings include building more effective measurement systems for improvement; strategy and positioning/policy; system and process improvement, e.g., business process reengineering; aligned, focused improvement; condition of mind: intention, alignment; program and project management discipline (benefits realization); information systems and technology; and change leadership/management).
Figure 4 Relative Performance Data from Collins and Porras (1994): What Was Full Potential? (relative performance on a dollar scale from $955 to $8,000 for the general market and comparison firms; full potential = ?)
*E.g., 3M the visionary company, Norton the comparison firm; Boeing the visionary company, McDonnell Douglas the comparison firm; Citicorp the visionary company, Chase Manhattan the comparison firm.
8
INDUSTRIAL ENGINEERING FUNCTION AND SKILLS
What are the attributes of full potential organizations? In living companies, DeGeus found four significant factors:
1. Sensitivity to the environment: the ability to learn and adapt (we might add, in a timely fashion)
2. Cohesion and identity: the ability to build community and a distinct cultural identity (we might add, that supports full potential performance)
3. Tolerance and decentralization: the ability to build constructive relationships with other entities, within and without the primary organization
4. Conservative financing: the ability to manage growth and evolution effectively
DeGeus states, "Like all organisms, the living company exists primarily for its own survival and improvement: to fulfill its potential and to become as great as it can be" (1997, 11). Collins and Porras went beyond DeGeus, identifying one distinguishing variable and six explanatory variables they attributed to performance differences between visionary companies and comparison firms. The distinguishing variable for the visionary firms was leadership during the formative stages. The six explanatory variables that they identified were:
1. Evidence of core ideology: statements of ideology, historical continuity of ideology, balance in ideology (beyond profits), and consistency between ideology and actions (walk the talk)
2. Evidence of the use of stretch goals, visioning, defining full potential for a given period of time: big hairy audacious goals (BHAGs); use of BHAGs, audacity of BHAGs, historical pattern of BHAGs
3. Evidence of cultism: building and sustaining a strong culture, seeing culture as an independent variable, not a context variable; indoctrination process, tightness of fit (alignment and attunement)
4. Evidence of purposeful evolution: conscious use of evolutionary progress, operational autonomy, and other mechanisms to stimulate and enable variation and innovation
5. Evidence of management continuity: internal vs. external CEOs, no post-heroic-leader vacuum, formal management development programs and mechanisms, careful succession planning and CEO selection mechanisms
6. Evidence of self-improvement: long-term investments, investments in human capabilities (recruiting, training, and development), early adoption of new technologies, methods, and processes, mechanisms to stimulate improvement; effective improvement cycles established as a way of doing business
We will highlight this last finding particularly as we explore the expanding role for the ISE in the future.
2.3. Enterprise Excellence Models
We find enterprise models or business excellence models to be increasingly relevant to the central message of this chapter. They are a way to portray the lessons from the work of DeGeus and Collins and Porras. For example, the Lean Enterprise Institute is working with MIT to develop a Lean Enterprise Model (LEM) (Womack and Jones 1996). The Malcolm Baldrige Award has created a Performance Excellence Framework (National Institute of Standards and Technology 1999). We believe that the Baldrige model provides valuable insight into the variables and relationships, consistent with the lessons from Collins and Porras. Figure 5 depicts the Baldrige Criteria for Performance Excellence: (1) leadership; (2) strategic planning; (3) customer and market focus; (4) information and analysis; (5) human resource focus; (6) process management; and (7) business results (overarching: customer- and market-focused strategy and action plans). Compare and contrast these variables to the seven identified in Collins and Porras. Each of these models prescribes strategies for achieving full potential. Our contention is that this striving for full potential is the context within which ISE will be practiced. ISE will be challenged to present a value proposition in the context of large-scale organizational transformation. Enterprise models of excellence provide insights into how to position our profession and the work of ISE in organizations. With this base, each and every ISE has the potential to be a visionary representative of our profession. This is an excellent way to think about both personal and collective value propositions.
2.4. Implications for ISE
The context shift we describe above for ISE has several specific implications. First, each and every initiative undertaken by an ISE must be causally linked to business results. Let's use a large retail
Figure 7 ABCD Model: How We Spend Our Time (A: administer the business, do the job; B: build the business, improve performance, fix the process or system; C: cater to crises, fight fires; D: do the dumb).
merator) shifts the paradigm about our value contribution. Our contention is that ISEs will be challenged to work on both the numerator and the denominator of the profit equation. ISEs and the ISE function will be required to explain our contribution in terms of the profit ratio. It won't be enough to say that we improved the efficiency of a process or work cell. We will have to demonstrate how our actions lead to filling the treasure chest.
The seventh variable that Collins and Porras (1994) identified was evidence of self-improvement. Time is the most critical resource in the knowledge-based organization. People at all levels spend their time doing four things: they Administer the business, that is, do jobs (A work); they Build the business, that is, improve performance and fix systems or processes (B work); they Cater to crises, that is, fight fires and fix problems (C work); and they Do the dumb, that is, nonvalue-adding things (D work). Figure 7 depicts how we spend our time. Organizations that intend to achieve full potential will spend more time on B work. They will establish improvement cycles such as the Deming and Shewhart Plan-Do-Study-Act model. Rather than addressing targets of opportunity, ISEs will deploy improvement cycles that are thought through strategically, comprehensive in scope, and well integrated. Enterprise excellence models clearly indicate this. Systems thinking will be applied at the enterprise level. This has been the clear migratory path for the past 30 years, and it will continue to be. Our profession's value proposition will focus on the special knowledge and skills the ISE brings to the quest for full potential.
In the more traditional model, ISE work is often detached from the work of transformation. ISE improvement efforts tend to be done outside the context of the enterprise improvement cycle, so they lack a clear causal connection to organizational business results (e.g., filling the treasure chest). So the big implication for ISEs in the next several decades is that they must think enterprise, think total systems, and be connected to the enterprise improvement cycle. ISEs cannot afford to (sub)optimize targeted subsystems at the expense of the larger system, and they cannot afford to be isolated from the large-scale transformation work that characterizes the full potential organization. We might go a bit further and suggest that increasingly ISEs are going to be challenged to prescribe migration paths that move an organizational system toward full potential at a faster rate.
3. THE FUTURE STATE VALUE PROPOSITION FOR ISE
3.1. Overview
On the threshold of the 21st century, Michael Porter is perhaps the best-known researcher, author, teacher, and consultant on the subject of competitive strategy. Porter (1996) makes a distinction
Figure 9 Improvement Cycle Process. Panel (a) shows a Plan-Do-Study-Act cycle: Step 1 (prepare to plan), organizational systems analysis — mission, charter, purpose; vision; values, principles; scanning, situation appraisal; assumptions; past and current performance assessment; gaining systems appreciation; I/Q analysis; enlightenment; upline plan review. Steps 2–3 (plan): strategic and tactical objectives for great performance. Steps 4–5 (do): scoping, implementation, and implementation management, supported by infrastructure. Step 6: performance measurement systems and visible management systems. Step 7 (study–act): quarterly reviews, three-level meetings, thinking statistically, and the annual cycle; then recycle. Panel (b) shows the supporting detail: Step One, preparation to plan; Step Two, participatively develop goals and objectives; Step Three, implementation planning and implementation; Step Four, continuous performance improvement measurement; Step Five, accountability mechanisms; with enablers such as widespread communication, leadership by example, visibility/war rooms, mid-year reviews, rewards and recognition, key performance indicators, strategy statements, and audits, and with revisit/recycle of Step One via zero-base plans every three to five years.
Figure 10 Full-Potential Performance Requires the Optimization of Key Relationships. (Arrows represent relationships, or value exchanges, to be optimized in the transformation: with customers — quality of customer interaction, satisfaction, speed, loyalty, market share, and value of goods and services, making the organization a compelling place to do business with; with suppliers and partners — value, trust, partnership, quality, consistency, cooperation, and win–win exchanges of goods, services, and people, making it a compelling place to provide (partner with, supply); and with employees — seamlessness, value, loyalty, competencies, and successful attitudes, making it a compelling place to work.)
fectively will ultimately determine the effectiveness of the implementation. At
the macro level and with a focus on business results, it will be the relationshi
p with the customer that will determine how well the treasure chest is lled. Acqu
iring information about the customer, monitoring how that information is used, a
nd helping the customer be successful are all key aspects of customer relationsh
ip management (CRM). The role ISE should play is, at a minimum, to be keenly awa
re of the importance of CRM and to align ISE project work with CRM efforts. In t
hat larger context and domain, value creation will derive from the ISEs ability
to manage our relationship with various partner and customer segments and create
results for and with them. In our view there is only one customer, the consumer
. The end users or customers are the population we serve. There are no internal
customers in our model. Everyone inside the organization should function as a pa
rtner on a team working to serve the customer. The ISE is in partnership with ot
hers in the enterprise, and all are working to achieve organizational full poten
tial performance. The ISE brings technologies to the enterprise that will facili
tate achieving full potential performance. At the maximum, ISE professionals wou
ld be part of the CRM program team. This would involve designing the strategy, w
orking on information systems, and thinking through how to deploy CRM data and i
nformation to achieve alignment and optimize the lifetime value of the customer.
Perhaps the most dramatic change for ISEs in the decades to come will be the requirement that they understand the needs of customers and have a role in managing customer relationships. ISEs understand the process of converting data to information and using information to support decisions and actions. This understanding is sorely needed in organizations today and is a key to unlocking full potential. How will ISE solutions enhance the organization's relationships with its customers? This will be a key area of emphasis for the ISE in the future.

The relationship management aspect of strategy and policy deployment goes well beyond improved customer relationship management systems and processes. As Figure 10 shows, full potential performance requires that all relationships be managed differently. It has not been uncommon for supplier and vendor relationships to be in transition in the past; what has the role of ISE been in that transition? We contend that relationships with employees must also be managed differently; what is the role of ISE in that? Sears has a well-publicized transformation effort that focuses on being a compelling place to invest, a compelling place to shop, and a compelling place to work. This concept shows up in Figure 10. We would argue that ISEs can and should play a role in the relationship management associated with making the organization compelling from all constituent vantage points. This will lead to full potential performance. Managing these relationships differently will take different levels of personal mastery. Listening skills will be even more important. How we hold ourselves in relationship to others in the system is foundational: do we see ourselves as partners or as competitors / adversaries to be negotiated with? First we need a different model for full potential. Then we need to develop the skills to work the model effectively.

We have discussed grand strategy, improvement cycle, policy deployment, and relationship management as key elements of planning system (strategy and policy development and execution) transformation. Other changes are required in the planning system in order for an organization to migrate to full potential performance; we will either mention them here without elaboration or take them up in upcoming sections. The transformation to full potential is a stream of improvement cycles. There are improvement cycles embedded in improvement cycles: one for the overall enterprise and then embedded, aligned improvement cycles for subsystems. Coordinating all these improvement cycles is critical to overall success, yet another natural role for ISEs to play in the future. Improvement cycles, at all levels, are conceptually a process of Planning, Doing, Studying (progress and performance), and then Adjusting plans based on results. We've addressed the Plan and Do steps. In the third role, operations effectiveness and efficiency, we'll discuss the role of measurement in the Study process and outline how to build more effective measurement systems: yet another key role for ISEs in the work to achieve full potential performance.

All the Planning, Doing, Studying, and Adjusting is done in a context or organizational environment; Figure 1 depicted this. Culture shift is central to creating conditions that will support migration to full potential performance. We would once again contend that ISEs can and should play a role in designing and executing the culture shift to create conditions that will fully support transformation to full potential. This might be a support role to the HR function, a team member / collaborator role, and so on. We will flesh this out in the next section.
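To make the nested-cycle idea concrete, here is a small illustrative sketch in Python. The class and the cycle names are our own hypothetical constructs, not something the chapter specifies: an enterprise-level cycle runs Plan-Do-Study-Adjust and coordinates aligned subsystem cycles embedded within it.

```python
from dataclasses import dataclass, field

@dataclass
class ImprovementCycle:
    """One PDSA (Plan-Do-Study-Adjust) improvement cycle; subsystem
    cycles are embedded inside and kept aligned with the parent."""
    name: str
    subcycles: list = field(default_factory=list)

    def run(self, depth=0):
        indent = "  " * depth
        # The four conceptual steps of any improvement cycle.
        log = [f"{indent}{self.name}: {step}"
               for step in ("Plan", "Do", "Study", "Adjust")]
        for sub in self.subcycles:  # coordinate the embedded, aligned cycles
            log.extend(sub.run(depth + 1))
        return log

# Hypothetical enterprise with two aligned subsystem cycles.
enterprise = ImprovementCycle(
    "Enterprise",
    [ImprovementCycle("Order fulfillment"), ImprovementCycle("Manufacturing")],
)
print("\n".join(enterprise.run()))
```

The point of the sketch is structural: coordinating when each embedded cycle plans, executes, studies, and adjusts, so that subsystem improvement stays aligned with the enterprise cycle, is exactly the coordination role suggested for ISEs.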
3.2.3. Change Leadership: The IE as Change Master
Pulling off a major shift in the improvement cycle process in organizations will require many, if not most, to get outside their comfort zones. People are going to be asked to do different things differently. One key function of ISE in the context of improvement cycles will be bringing solutions to the business that help fill the treasure chest. Ensuring that the benefits of improvement are actually realized is a three-ball juggling challenge. The first ball is solution design and development. Solutions
What is the full potential culture? Figure 11 is a pictorial we use to describe our answer. The descriptors on the right side reflect values, attitudes, and behaviors that we believe are consistent with achieving full potential. Listed on the left are commonly experienced values, attitudes, and behaviors typical of underperforming organizations. We believe that most people inherently have core values that match those on the right: serving, learning, integrity, and excellence. Unfortunately, we have found that when individuals come together in organizations, things conspire to cause their attitudes and behaviors to migrate to the left. By raising consciousness about personal choices and what motivates those choices, it is possible to induce a shift to the right. We might more accurately say it is possible to create conditions that will naturally bring people back to their core values and to the attitudes and behaviors that are so critical to full potential performance.

16 INDUSTRIAL ENGINEERING FUNCTION AND SKILLS

[Figure 11: Leadership Values Model. Core values: Serving, Excellence, Integrity, Learning. Paired descriptors, underperformance on the left versus full potential performance on the right: Ourselves / Service to Others; Conventional / Creative; Attack Ideas / Nurture Ideas; Individual / Team; Fearful / Courage; Indecisive / Decisive; Being Popular / Right Decision; Focus on Activity / Focus on Results; Suspicion / Trusting; Blaming / Accountable; Competition / Collaboration; Avoidant / Direct; Defending / Learning; Arguing / Listening; Hierarchical / Empowering; Territorial / Sharing.]

Whether the ISE is an active or a passive player in this process, there still must be an awareness of the role culture plays in performance improvement. Having said this, we are convinced that there can be no passive players. Everyone's behavior must reflect full-potential values (i.e., walk the talk). The ISE function is too important to be ignored. ISEs must model these core values in order to be effective as individuals and as a collective force for improvement.

Note that we entitled the section Culture System. The implication is that culture is the product of a system. If we want or need the culture to be different in order to achieve full potential, then we need to change the system that shapes culture. This is an active rather than passive approach to culture. We are consciously shifting culture toward what is natural (in our view) and toward the values, attitudes, and behaviors that support achievement of full potential. Because ISEs are about integrating systems to optimize performance, recognizing that culture is a subsystem that needs to be led and managed is integral to our work: yet another area of potential work for ISEs in the future.
3.3.2. Infrastructure for Improvement
Earlier we alluded to the ABCD model (Figure 7) depicting how we spend our time. Spear and Bowen (1999) highlight the importance of integrating administering the business (A work) with building the business. What are the roles and rules in the context of A and B? Toyota manages to demand rigid performance specifications while at the same time ensuring flexibility and creativity. This seeming paradox is the DNA, or essence, of their success. Disciplined execution of A makes it possible to also do B. The infrastructure for this is supported by culture, discipline, learning, role clarity, and rules. The potential ISE role in this regard is tremendous. Many organizations lack discipline and specification relative to A work, leading to a great deal of variation and unpredictability. This variability in performance is a source of D and of ineffective management of C. Established methods and procedures are central to gaining control of A work and freeing up more time for B. It is important for ISEs to reestablish their role in the continued rationalization of A. As Spear and Bowen (1999) mention, rationalizing A involves using experimentation and the scientific method to improve things. A word of caution: things will get worse before they get better. If the experiments are well designed and carefully thought through, the loss in performance will be manageable and the final results will be well worth the cost. Ideally, one could create a system that would perform and improve simultaneously. How to organize this and make it happen is clearly an important domain of activity for ISEs. How is your organization organized to do B, and what role should ISE play in the infrastructure piece? We have found that the infrastructure for B will determine how effective and efficient the B work is. ISEs can play an important role in this design variable.
ing. This will require, first, recognition that this is important and, second, willingness and desire to learn. We believe that creating a learning orientation is part of the culture shift work. People have a natural tendency and desire to learn and grow. When you change the conditions within which people are working, this natural tendency will reappear. Then it is just a matter of directing the learning toward the knowledge and skills that support performance improvement. Much of this will be quality, productivity, and improvement related, and this is where the ISE comes in. Changing the culture and then changing the system for learning are central to creating conditions that support full potential performance. ISEs can and should play a role in this shift.
3.3.5. Summary
The involvement of ISE in strategy and positioning and in conditions for success occurs too infrequently. Full potential, as we have posited, requires an integration of work in the three areas. We asserted earlier that for ISE to be positioned differently, to adopt a broader domain, a senior champion or sponsor is often required. This will be someone who is an ISE or understands the value proposition of ISE and is constantly looking for effective pieces of work to involve them in, ways to use ISE to migrate to full potential. Each month, Industrial Management features ISEs who are in these roles.*

* An outstanding example of an executive champion of ISE is our coauthor and colleague, David Poirier. His story appears in the May / June 1998 issue of Industrial Management.
The true impact of leadership in achieving and sustaining full potential performance is most often seen in the work leaders do in establishing conditions for success. All ISEs would be well served to consider the role they play in establishing and sustaining the proper conditions or environment to support full potential performance. We believe that ISE modes of thinking will be valuable in the challenging strategy and positioning work of the organizations of the future and in creating optimal conditions for success. A perspective built on systems thinking is ideally suited for the type of planning and analysis required as organizations work to enhance positioning and strategy and to create positive internal conditions in ever-changing markets. The challenge to practicing ISEs is to understand the connection between their work and the organization's attainment of its full potential. Seminal questions include: How does my work support the overall positioning and strategies of the organization? What are the cause-and-effect linkages between my work and filling the treasure chest? How is my work connected to other efforts in the organization, and how can I strengthen the connection and create synergies where they don't currently exist? What is full potential performance, and how is my work moving the organization toward that goal? How can the ISE role work on key conditions-for-success issues in a way that enables or enhances the success we achieve over time?

We have offered some examples of what we call strategy and positioning and conditions for success and described the roles that ISE can play in these categories of endeavor. Too often the work of ISE gets over- or underemphasized due to a lack of understanding of where ISEs fit in the organization. ISEs need to be aware of the big picture and the leverage points to ensure that their work is contributing in an optimal fashion to the greater good. To ensure that we are not misunderstood, we would offer the following. We are not calling for ISEs to be all things to all people. We understand the core value proposition of ISE, what it has been, and we have our views on what it will be. We are simply suggesting that if ISEs are to live up to their lofty definition as systems integrators, the future will require us to continue to transcend and include more traditional roles and migrate to enterprise-level contributions.
3.4. Operations Improvement Role
3.4.1. Overview
We will now turn to the traditional role of ISE, that of problem solving and operations improvement. We place improvements in efficiency, quality, and technology (e.g., methods, hardware, software, procedures, processes) in this category. Regardless of whether the ISE specializes in operations research, human factors, manufacturing systems, or management systems methodology, the challenge will be to change how things are done such that total system performance is improved. Traditionally, ISEs have operated at the work center level. Over the past 30 years the scope of the system of interest has broadened. The word "system" in industrial and systems engineering has taken on increased importance. This migration has occurred naturally; it is what was required for the ISE to be more successful. Unfortunately, industrial engineering tended to lag behind other disciplines in the evolutionary process. Our profession has missed opportunities to continue to add value over the past 30 years. The more striking examples include abandoning the quality movement and missing both the tactic of business process reengineering and the emergence of the balanced scorecard in the measurement domain. We believe that the decades to come will provide opportunities for ISE to reintegrate and reposition itself as a leader and doer in the field of performance improvement. At the risk of being repetitive, this requires rethinking the role of and relationship among positioning and strategy, conditions for success, and operations improvement. We will not duplicate material in the remaining chapters of this Handbook. However, we do want to fill in some blanks by highlighting a couple of areas of operations improvement that we feel might not be represented: systems and process improvement (more specifically, business process reengineering) and measurement systems. Again, our contention is that the bulk of this Handbook focuses on the traditional ISE role in achieving operations effectiveness.
3.4.2. Business Process Improvement
The term "unit of analysis" applies to the scope of the system of interest. When IE first began, the unit of analysis (the scope of the system of interest) was confined to the worker, the work cell, and individual work methods. Over time, the scope of the system of interest to the ISE has increased: for example, from economic lot size to inventory control to warehouse management system to supply chain optimization to enterprise or supply chain synthesis. It is important to keep in mind that this expansion in focus is an "and," not an "or." Attending to larger units of analysis is a transcend-and-include strategy. Inventory control remains in the ISE toolkit, but now it is being applied in the context of larger systems. This causes the ISE to rethink what it means to optimize the system of interest. If a system's performance objective is simply to fill customer orders, one might employ a

[Figure 12: Business Process Reengineering Roadmap: How the ISE Might Contribute.]

* Interestingly, quality assurance, control, sampling, etc. were taught almost exclusively in ISE programs before the 1980s. Now it would be surprising to find an MBA program without TQM coursework.
think systems. Second, many of the substeps in the process (see Figure 12) require specific knowledge and skills that ISEs are grounded in. We understand that ISEs are involved in many BPR efforts in organizations today. We also know that many BPR efforts are information technology driven and many exclude ISE involvement. We believe this is an error that leads to underperformance. At several points in the chapter we have mentioned the importance of measurement in transforming to full potential performance. Let's turn our attention to a specific focus on this role and see if we can highlight differences we see in the future in terms of ISE contribution.
3.4.3. Building Effective Measurement Systems
Work measurement and methods engineering are terms that have traditionally described actions taken to quantify the performance of a worker or a work unit and to improve the performance of the individual and the unit. The relevant body of knowledge has been developed and preserved through the entire history of ISE. Throughout that history, one principle has served as its unifying force: an appreciation for and the applied knowledge of systems. In fact, Frederick Taylor (1856-1915), who is generally recognized as the father of industrial engineering, wrote in 1911, "The system must be first." Methods engineering was pioneered by Frank Gilbreth and his wife Lillian, whose lives were memorialized in the Hollywood film Cheaper by the Dozen. Soon after methods engineering began to be practiced, the need for measurement technology became clear. This was inevitable. Once analysts proposed improvements to the way a job was done, natural curiosity led to the question "How much better is it?" The ISE's translation of and response to that question led to the development of tools and techniques for measuring work.

More recently, the importance of methods engineering and work measurement was underscored by the late W. Edwards Deming in his legendary seminars. Dr. Deming, when questioned about a system performance deficit, would confront his audience with the challenge "By what method?" He virtually browbeat thousands of paying customers into the realization that insufficient thought and planning went into the design of work systems. Dr. Deming believed that workers, by and large, strive to meet our expectations of them. The lack of sufficient high-quality output, he taught, stemmed not from poor worker attitude but from poor management and poor design of the methods, tools, and systems we provide to the otherwise willing worker. Dr. Deming also promoted the ISE's contribution through work measurement with an additional challenge. Once a solution to the "By what method?" question was offered, Deming would ask, "How would you know?" This query highlighted his insistence that decisions regarding process improvements be data driven. In practice, this means that effective systems improvement activities require evidence as to whether the changes make any difference. The requirement is that we use our measurement expertise to quantify the results of our efforts to design and implement better systems. The Deming questions, "By what method?" and "How would you know?", articulate the defining concerns of the early ISEs, concerns that continue to this very day.

The ability to measure individual and group performance allowed organizations to anticipate work cycle times, which led to more control over costs and ultimately more profitability and better positioning in the marketplace. Understanding how long it actually takes to do a task led, through the application of scientific methods, to inquiry about how long it should take to do work. Standard times became prescriptive rather than descriptive. The next step in the evolution was the integration of production standards into incentive pay systems that encouraged workers to exceed prescribed levels of output. Application of extrinsic rewards became an additional instrument in the ISE toolbox, vestiges of which linger on. So much for the evolution of the work measurement and methods aspects of traditional ISE practice. The following are some evolutionary enhancements that have become part of the ISE measurement value proposition:
- Statistical thinking plays a more critical role in understanding work performance. Variation is inherent in all processes and systems. Discovering the underlying nature of variation and managing the key variables has become more critical than just establishing a standard.
- The role of production quotas is being reexamined. Should we throw out all standards, quotas, and targets, as Dr. Deming suggested? We think not. We contend that the effective approach is to establish a system within which teams of employees hold themselves and each other accountable for system performance and are encouraged to reduce variation and improve performance on their own. Standards that control and limit employee creativity should be eliminated. The key is understanding performance variation and what causes it and creating a partnership with employees so that they are integral members of the team working to improve it.
- Work measurement and methods improvement became detached, to some extent, from the larger system of improvement efforts. Today, efforts to improve what workers do and how they do it are being tied to overall business strategy and actions. This means that measures of performance at the work unit level will have to be tied to and integrated with measures of performance for
[Figure 13: Management System Model. What is being led and managed: the value-adding processes, with inputs from upstream systems and outputs to downstream systems.]
the manager; but in the contemporary organization, "Who" refers to a management team. For the long-range planning horizon, the management team establishes the goals and objectives: the vision of what the organization is to become. In the short time horizon, levels of system performance must be specified. The "What" is the system that is being managed: the organization, system, or process of interest, the object of the management efforts. "How" refers to managerial processes and procedures and, more specifically, to the transformation of data about the performance of the organization (the "What") into information regarding the need for actions or interventions.

The management system model can also be characterized as a feedback or closed-loop control system. In this version, the management team is the controller (who), the process is the system being controlled (what), and the instrumentation (how) monitors the system states and feeds these back to the controller so that deviations between the actual and the desired states can be nulled. The interfaces between the elements also represent the management process. Between the "what" and "how" elements is the measurement-to-data interface. Between the "how" and "who" elements is the information portrayal / information perception interface. And between the "who" and "what" elements is the decision-to-action interface. Viewed from the perspective of this model, the management of a function would entail:

1. Determining what performance is expected from the system
2. Monitoring the system to determine how well it is performing in light of what is expected
3. Deciding what corrective action is necessary
4. Putting the correction into place
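The closed-loop reading of these four steps can be sketched in code. This is our own minimal illustration (the function name, gain, and numbers are hypothetical, not from the Handbook): the management team compares desired with actual performance and applies corrections until the deviation is nulled.

```python
def manage(actual, desired, gain=0.5, reviews=20, tolerance=0.01):
    """Management team ('who') repeatedly corrects the system ('what')
    using measured feedback ('how'). Steps: 1 = set desired performance,
    2 = monitor, 3 = decide a correction, 4 = put it into place."""
    for _ in range(reviews):
        deviation = desired - actual       # step 2: monitor performance
        if abs(deviation) < tolerance:     # deviation effectively nulled
            break
        correction = gain * deviation      # step 3: decide corrective action
        actual += correction               # step 4: put the correction in place
    return actual

# Usage: drive measured output toward the expected level set in step 1.
print(round(manage(actual=60.0, desired=100.0), 2))  # prints 99.99
```

The gain parameter is the illustrative design choice here: a management team that corrects too aggressively (gain near or above 1) can oscillate or overreact to noise, which is exactly the tampering behavior Deming warned against.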
Note that any embedded organizational system is operating in the context of a larger system, and the linkages are critical to total system optimization (Sink and Smith 1994). This model provides the frame and the outline to be followed in building an effective measurement system. The steps one takes to do this are covered in other references we have provided. We will simply restate that what constitutes success for an embedded system must be clearly understood and operationally defined. The model for success must also be operationalized and understood. Senge (1990) and others devote much of their work to helping us understand the tools of systems modeling in this regard. Without a clear, specific, focused understanding of what the key result areas and their related key performance indicators are, it is difficult for workers or managers to assess how they are doing and on what basis to make those assessments. The measurement system allows for effective Study (S) in the Shewhart / Deming PDSA improvement cycle. A modern measurement system will have to be comprehensive, well integrated, and strategic as well as operational. It needs to portray causal linkages from the system of interest to the next-larger system. This assists in ensuring that we do not fall into the trap of optimizing the subsystem at the expense of the larger system. In our experience, ISEs understand measurement perhaps better than other disciplines, and yet the traditional approach is often reductionist in its orientation. The key to ISE being better integrated into enterprise improvement is that we apply our strengths in a way that avoids suboptimization and clearly ties to the higher organizational good. This is especially true with measurement.
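As one concrete, hedged example of measurement supporting the Study step (our own sketch; the Handbook does not prescribe this code), an individuals control chart distinguishes common-cause variation from special-cause signals. The limits sit at the center line plus or minus 2.66 times the average moving range, where 2.66 is the standard constant for moving ranges of two observations.

```python
def control_limits(observations):
    """Individuals-chart limits: center line +/- 2.66 * average moving range."""
    moving_ranges = [abs(b - a) for a, b in zip(observations, observations[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = sum(observations) / len(observations)
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

# Hypothetical baseline data from a stable process (cycle times, hours).
baseline = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 10.1]
lcl, center, ucl = control_limits(baseline)

# Study step: a point inside the limits is common-cause variation;
# a point outside signals a special cause worth investigating.
def special_cause(x):
    return x < lcl or x > ucl

print(special_cause(10.2), special_cause(13.5))  # False True
```

This operationalizes the earlier point about variation: rather than reacting to every deviation from a standard, the team investigates only signals that the underlying system has changed.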
4. ORGANIZING FOR FULL-POTENTIAL ISE CONTRIBUTION
4.1. Overview
We've portrayed what we believe is an extended and expanded value proposition for ISE. To summarize: the ISE is playing a key role in strategy and positioning, planning, change leadership and management, culture, measurement, learning, reengineering, and infrastructure. When you couple this view with the more traditional focus that is discussed in the bulk of the Handbook, we think you will get a flavor of the full potential for ISE and how exciting ISE will be in the decades to come. The issue here is how to position and organize the ISE role such that the potential contribution can be realized. The traditional view is that the ISE function should be located in a dedicated unit that is well connected to positions of power. An alternative view is emerging. We believe that the environment that made the ISE department or function effective has changed. The challenge is to position the ISE value proposition, not the function. Once again, this transformation requires that there be strong motivation and enlightened executive sponsorship for deploying ISE skills throughout the organization. ISEs will be positioned in ways that were impossible to envision in the past.
4.2. Executive Sponsorship
At the risk of becoming too repetitive, we will reassert that full-potential organizations will have a senior leader (CEO, VP) who champions the ISE contribution. This sponsor will also have a clear mental model of what needs to be done for the organization to migrate toward full potential and how
for the individual ISE. If ISEs accept personal responsibility for their own growth and development, full potential will be within the grasp of ISEs, our customers, and our organizations. The fundamental questions that make up the process are straightforward:

- What is our purpose in life (our enduring reason for being)?
- What is our vision (full potential, the look and the feel of success)?
- What will it take for us to be successful?
- What work do we have to do to migrate toward our particular visions?
- What conditions do we have to create for ourselves in order to be successful?
- How will we know how we are doing, on what basis will we make this assessment, and how will we manage our performance over time?

So much good work has been done today to achieve personal and professional mastery that it is difficult to suggest a starting point. Each of us has found Senge's and Covey's work on personal and professional mastery a good place to begin (Senge 1990; Covey 1994). As Dr. Deming used to say, the journey to full potential performance will take a lifetime, and that's good because that's all you've got, and you can begin anytime you want as long as you start now!
REFERENCES
Akao, Y., Ed. (1991), Hoshin Kanri: Policy Deployment for Successful TQM, Productivity Press, Cambridge, MA.
Collins, J. C., and Porras, J. I. (1994), Built to Last: Successful Habits of Visionary Companies, HarperCollins, New York.
Covey, S. R. (1994), First Things First, Simon & Schuster, New York.
Csikszentmihalyi, M. (1990), Flow: The Psychology of Optimal Experience, Harper & Row, New York.
DeGeus, A. (1997), The Living Company: Habits for Survival in a Turbulent Business Environment, Harvard Business School Press, Boston.
Fritz, R. (1991), Creating: A Practical Guide to the Creative Process and How to Use It to Create Everything, Fawcett Columbine, New York.
Kaplan, R. S., and Norton, D. P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston.
Lawler, E. E., III (1986), High Involvement Management, Jossey-Bass, San Francisco.
National Institute of Standards and Technology (1999), "Criteria for Performance Excellence," in Malcolm Baldrige Business Criteria, NIST, Gaithersburg, MD.
Porter, M. E. (1996), "What Is Strategy?", Harvard Business Review, Vol. 74, November-December, pp. 61-78.
Schein, E. H. (1992), Organizational Culture and Leadership, Jossey-Bass, San Francisco.
Senge, P. M. (1990), The Fifth Discipline, Doubleday-Currency, New York.
Sink, D. S. (1998), "The IE as Change Master," IIE Solutions, Vol. 30, No. 10, pp. 36-40.
Sink, D. S., and Poirier, D. F. (1999), "Get Better Results: For Performance Improvement, Integrate Strategy and Operations Effectiveness," IIE Solutions, Vol. 31, No. 10, pp. 22-28.
Sink, D. S., and Smith, G. L. (1994), "The Influence of Organizational Linkages and Measurement Practices on Productivity and Management," in Organizational Linkages: Understanding the Productivity Paradox, D. H. Harris, Ed., National Academy Press, Washington, DC.
Spear, S., and Bowen, H. K. (1999), "Decoding the DNA of the Toyota Production System," Harvard Business Review, Vol. 77, No. 5, pp. 96-106.
Thompson, J. D. (1967), Organizations in Action, McGraw-Hill, New York.
Tompkins, J. (1999), No Boundaries: Moving Beyond Supply Chain Management, Tompkins Press, Raleigh, NC.
Wilber, K. (1996), A Brief History of Everything, Shambhala, Boston.
Womack, J. P., and Jones, D. T. (1996), Lean Thinking: Banish Waste and Create Wealth in Your Corporation, Simon & Schuster, New York.
ADDITIONAL READING
Miller, D., "Profitability = Productivity + Price Recovery," Harvard Business Review, Vol. 62, May-June 1984.
Sink, D. S., and Tuttle, T. C., Planning and Measurement in Your Organization of the Future, IEM Press, Norcross, GA, 1989.
Sink, D. S., and Morris, W. T., By What Method?, IEM Press, Norcross, GA, 1995.
Vaill, P., Learning as a Way of Being, Jossey-Bass, San Francisco, 1996.
In this chapter we use the term enterprise in its classical sense: an undertaking, especially one of some scope, complication, and risk. Thus, an enterprise could be a business corporation or partnership, a government agency, or a not-for-profit organization. The business modeling concepts described herein can be applied to any kind of enterprise.
1.1. The Enterprise as a Complex, Living System
Defining any given enterprise is a difficult endeavor because the enterprise is perceived differently by each individual or group that views it. Furthermore, each enterprise is a complex, living system that is continually changing, so today's view may be very different from yesterday's. Often people attempt to define an enterprise by its organizational structure and the executives who occupy key positions. But this is only a small part of the picture. The enterprise actually operates as a complex system, with many parts that interact to function as a whole. In addition to organizational structure, an enterprise's system includes its economic and social environment; the customers it serves; other enterprises with which it cooperates to achieve its objectives; and the internal processes that are designed to set strategic direction, identify and satisfy the customers, and acquire and provide the resources necessary to keep the enterprise running. Thus, to define an enterprise properly one must define the system within which it operates. Ultimately, the success of an enterprise depends on the strength of its intra- and interconnections: the couplings among the organization's internal processes and between the organization and its external economic agents.
1.2. The Impact of the Global Business Environment
In recent years, technological advances in communications have paved the way for enterprises to operate effectively in a global, rather than just a local, environment. The foundation for this globalization was set by the technological advances in transportation experienced during the twentieth century. As a result, global expansion, often through mergers, acquisitions, and alliances, is now commonplace. Indeed, in some industries globalization has become a requisite for survival. But globalization brings a whole new level of complexity to the enterprise. When an enterprise seeks to operate in a new environment, the markets, competition, regulations, economies, and human resources can be very different from what it has experienced before. Accommodating such differences requires understanding them and how they will affect the strategies and processes of the enterprise.
1.3. Increasing and Changing Business Risks
Another aspect of globalization is that it significantly increases the enterprise's business risks, that is, risks that threaten achievement of the enterprise's objectives. Traditionally, management of risks has focused on the local external environment, including such areas as the nature and size of direct competition, the labor market, the cost of capital, customer and supplier relationships, and competitor innovations. But in a global operation the business risks become greater and are not always well defined. For example, regulatory environments in foreign countries may favor local enterprises; vying for limited resources, both natural and human, may be problematic; and the foreign work ethic may not be conducive to productive operation and delivery of quality products and services. The business risks in a foreign environment need to be identified, defined, and managed if the enterprise is to be successful. Even the business risks of local enterprises are affected by globalization. For example, new market entrants from foreign countries can provide unexpected, lower-priced competition, or product innovations originating in another country can become direct product substitutes. As a result, even local enterprises must anticipate new business risks brought on by the strategies of global organizations.
1.4.
The Business Model and Its Purpose
An enterprise business model is designed to compile, integrate, and convey information about an enterprise's business and industry. Ideally, it depicts the entire system within which the enterprise operates, both internal and external to the organization. Not only does the construction of a model help enterprise management better understand the structure, nature, and direction of their business, but it also provides the basis for communicating such information to employees and other interested stakeholders. The model can be the catalyst for developing a shared understanding of what the business is today and what needs to be done to move the enterprise to some desired future state. A business model can be as detailed as the users deem necessary to fit their needs. Other factors affecting the level of detail include the availability of information and the capabilities and availability of the business analysts who will build the model.
1.5.
The Business Model as the IE's Context
The business model is a tool that helps the industrial engineer develop an understanding of the effectiveness of the design and management of the enterprise's business, as well as the critical performance-related issues it faces, in order to evaluate opportunities and manage risk better. One of the industrial engineer's key roles is to improve the productivity of enterprise business processes. Often he or she is assigned to analyze a particular process or subprocess and make recommendations.
INDUSTRIAL ENGINEERING FUNCTION AND SKILLS
Business processes, including:
Strategic management processes: the processes by which the enterprise's mission is developed, business objectives are defined, business risks that threaten attainment of the business objectives are identified, business risk management processes are established, and progress toward meeting business objectives is monitored
Core business processes: the processes that develop, produce, market, and distribute the enterprise's products and services
Resource management processes: the processes by which resources are acquired, developed, and allocated to the core business activities
Alliances: the relationships established by the enterprise to attain business objectives, expand business opportunities, and/or reduce or transfer business risk
Core products and services: the value that the enterprise brings to the market
Customers: the individuals and organizations who purchase the enterprise's output
Each of these components is discussed more fully later in this chapter.
2.3.
The IE's Use of the Business Model
The business model can serve the industrial engineer in a number of ways, depending on his or her placement and role in the enterprise. The following subsections describe several important potential uses to which the engineer might apply the model.
2.3.1.
Gaining an Understanding of the Whole Business
Regardless of a person's placement and role, a good understanding of the whole business will help the individual see how he or she fits in, both operationally and strategically. This should help the person ensure that his or her objectives are consistent with and supportive of those of the enterprise, improving the potential for advancement within the organization.
2.3.2.
Facilitating a Common Understanding of the Business by Others
Each person in an enterprise has an individual view of the business, and this view is often limited in scope and parochial in nature. Variations among viewpoints can cause considerable misunderstanding among individuals and groups about the purposes and direction of the enterprise. A well-documented, comprehensive business model can facilitate a common understanding of the business, both internally and externally.
2.3.3.
Identifying Opportunities for Process Improvements
As noted earlier, process improvement is at the heart of the industrial engineer's purpose, regardless of where and at what level he or she is placed in the enterprise. The business model provides the framework for assessing process performance and interrelationships with other processes. Such assessment can lead directly to the identification and design of process changes that will improve the performance of the process as well as the enterprise as a whole.
2.3.4.
Identifying and Mitigating Business Risks
Critical to the success of any enterprise is effective management of business risk. The business modeling process provides the basis for identifying the most significant risks, both internal and external, and developing means for mitigating those risks.
2.3.5.
Developing the Basis for Process Performance Measurement
Performance measurement is fundamental to continuous improvement. Development of a comprehensive business model includes the identification or establishment of specific performance objectives for each business process. The performance objectives then provide the basis for an ongoing process performance measurement program.
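The linkage the paragraph describes, from business process to performance objective to ongoing measurement, can be sketched as a simple data structure. The process names, metrics, and target values below are illustrative assumptions, not taken from the chapter.

```python
# Map each business process to a performance objective and a target,
# then compare periodic actuals against the targets.
# All process names, metrics, and numbers are illustrative.

objectives = {
    "order fulfillment": {"metric": "on-time delivery rate", "target": 0.95},
    "procurement": {"metric": "supplier defect rate", "target": 0.02},
}

actuals = {"order fulfillment": 0.93, "procurement": 0.015}

def performance_report(objectives, actuals):
    """For each process, report whether the actual meets its target.

    We assume higher is better except for defect rates, where lower is
    better; a real measurement program would carry an explicit
    direction flag for each metric.
    """
    report = {}
    for process, spec in objectives.items():
        actual = actuals[process]
        lower_is_better = "defect" in spec["metric"]
        met = actual <= spec["target"] if lower_is_better else actual >= spec["target"]
        report[process] = {"actual": actual, "target": spec["target"], "met": met}
    return report

report = performance_report(objectives, actuals)
# "order fulfillment" misses its target (0.93 < 0.95);
# "procurement" meets its target (0.015 <= 0.02)
```

A report of this shape gives the continuous-improvement loop a concrete starting point: each "missed" entry identifies a process whose performance gap warrants analysis.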
2.3.6.
Facilitating the Development of the Enterprise's Directional Course
The comprehensive business model can provide the basis for painting a future picture of the enterprise and determining the course of action to get there. This is done by developing a vision of what leadership wants the enterprise to be at some future point, for example, in three years. This vision is translated into a model of what the enterprise needs to look like to support the vision (the "to be" model). The "to be" model then is compared with today's "as is" model, and a migration plan is developed.
The sponsor and others, as appropriate, clearly articulate the purposes and expected uses of the model. The purpose and use statement provides the basis for determining the scope of the model (e.g., in terms of geographic coverage, business unit coverage, and "as is" vs. "to be" views of the enterprise). The purpose, use, and scope statements then are translated into model-development time and cost objectives for the design team.
3.3.
Gather and Orient the Model-Building Team
The team that will develop the enterprise business model is assembled and briefe
d on the purpose, scope, and framework of the model. The team members may be fro
m various parts of the enterprise, typically including representatives from the
key business processes (strategic, core, and resource management). The internal
team may be supplemented by outside resources, as necessary (e.g., information s
pecialists, process facilitators, and the like). A skilled project manager is ap
pointed whose role is to ensure that the model-development effort is properly pl
anned and completed on time and within budget.
3.4.
Determine Information Requirements for Each Element of the Model
Each element of the model requires the development of information. In most cases
the information will be readily available within the enterprise, but, particula
rly in the external forces area, the information may have to be developed with t
he assistance of outside information providers. Determining the information requ
irements for each element will highlight those areas that will be problematic an
d require special attention.
3.5.
Construct the Business Model
The team gathers, compiles, and integrates the information to develop a draft of
the business model. The model is depicted graphically and supported by textual
material, as necessary. Reviews are conducted to ensure that the model is develo
ped to an appropriate level of detail and that the various elements are properly
integrated.
3.6.
Build Consensus for the Model
Consensus for the model is built best through involvement. Such involvement can
come through participating directly as a member of the design team, participatin
g as a member of a project steering committee, or acting as a reviewer of the draft
business model. The key is knowing who needs to be involved and in what ways the
y can participate most effectively.
4.
BUSINESS MODEL CONTENT
4.1.
Developing the Business Model Content
As previously mentioned, the business model is a tool that helps the industrial engineer develop an understanding of the effectiveness of the design and management of the enterprise's business, as well as the critical performance-related issues it faces, to evaluate opportunities and manage risk better. When completed, the business model is a strategic-systems decision frame that describes (a) the interlinking activities carried out within a business entity, (b) the external forces that bear upon the entity, and (c) the business relationships with persons and other organizations outside of the entity. In the initial stage of developing the business model, pertinent background information is gathered to gain a full understanding of the industry structure, profitability, and operating environment. This industry background information and preliminary analysis then is used to determine the impact on
the enterprise's business. Each of the elements of the business model provides a summary of information that is pertinent to developing an understanding of the competitive environment and the enterprise's relative strengths and weaknesses in the marketplace. The processes the engineer uses to assimilate the acquired knowledge will be unique for each enterprise and each engineer and therefore cannot and should not be reduced to highly structured formats, such as templates, checklists, and mathematical models. A thorough understanding of five key business principles (strategic analysis, business process analysis, business measurement, risk management, and continuous improvement) will be necessary as the engineer seeks to acquire knowledge about the company's business and industry for the purpose of developing the full business model. These business principles and their interrelationships are depicted in Figure 2. Throughout the model-building process, the engineer is working toward the ultimate goal of integrating the knowledge he or she obtains about the enterprise's systems dynamics and the congruence between strategy and the environment. He or she may use mental processes, more formal business simulation and systems thinking tools, or some combination of both to structure his or her thinking about the dynamics of the enterprise's strategic systems.
4.2.
Model Element 1: External Forces and Agents
External forces and agents encompass the environment in which an enterprise operates. They are the forces that shape the enterprise's competitive marketplace and provide new opportunities, as well as areas of risk to be managed. Continuous monitoring and assessment of external forces is critical to the future of any business. The environment plays a critical role in shaping the destinies of entire industries, as well as those of individual enterprises. Perhaps the most basic tenet of strategic management is that managers must adjust their strategies to reflect the environment in which their businesses operate. To begin understanding what makes a successful business, one must first consider the environment in which the enterprise operates and the alignment of its strategy with that environment. "Environment" covers a lot of territory: essentially everything outside the organization's control. The analysis of external forces and agents includes an assessment of both the general environment and the competitive environment. To be practical, one must focus attention on those parts of the general and competitive environments that will most affect the business. Figure 3 provides an example of a framework that can be used in assessing the general environment. The general environment consists of factors external to the industry that may have a significant impact on the enterprise's strategies. These factors often overlap, and developments in one area may influence those in another. The general environment usually holds both opportunities for and threats to expansion. The competitive environment, generally referred to as the "industry environment," is the situation facing an organization within its specific competitive arena. The competitive environment combines
Figure 2 Business Improvement Principles: the interrelationship of strategic analysis, business process analysis, and business measurement. (From Bell et al. 1997)
Figure 4 Five Forces Model of Competition. (Adapted from Porter 1985) The figure shows industry profitability shaped by five forces: rivalry among existing firms (growth, concentration, differentiation, switching costs, scale/learning economies, fixed-variable costs, excess capacity, exit barriers); the threat of new entrants (scale economies, first-mover advantage, distribution access, relationships, legal barriers); the threat of substitute products (relative price and performance, buyers' willingness to switch); the bargaining power of buyers (switching costs, differentiation, importance of product, number of buyers, volume per buyer); and the bargaining power of suppliers (switching costs, differentiation, importance of product, number of suppliers, volume per supplier).
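An analyst assessing the competitive environment with the five forces framework of Figure 4 might record each force, its drivers, and a judgment of its strength in a simple structure. The example drivers and the 1-5 pressure scores below are illustrative assumptions, not part of the framework itself.

```python
# A sketch of Porter's five forces as a structure an analyst might
# fill in; drivers and 1-5 scores (5 = strongest pressure) are
# illustrative assumptions.

five_forces = {
    "rivalry among existing firms": {"drivers": ["concentration", "exit barriers"], "score": 4},
    "threat of new entrants": {"drivers": ["scale economies", "legal barriers"], "score": 2},
    "threat of substitute products": {"drivers": ["relative price and performance"], "score": 3},
    "bargaining power of buyers": {"drivers": ["switching costs", "number of buyers"], "score": 3},
    "bargaining power of suppliers": {"drivers": ["differentiation", "number of suppliers"], "score": 2},
}

def pressure_index(forces):
    # Average the force scores into a crude summary of the pressure on
    # industry profitability (higher = more pressure, thinner margins).
    return sum(f["score"] for f in forces.values()) / len(forces)

# pressure_index(five_forces) averages (4 + 2 + 3 + 3 + 2) / 5 = 2.8
```

The single averaged number is deliberately crude; in practice the per-force scores and their drivers, not the aggregate, carry the analytical value.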
4.3.
Model Element 2: Markets
Understanding the markets in which the enterprise competes is critical in developing the knowledge base for the business model. The extent to which an enterprise concentrates on a narrowly defined niche or segment of the market is referred to as focus. The engineer should understand the relevant advantages and disadvantages of particular levels of focus in terms of their impact on competitive advantage. For example, a differentiation strategy is often associated with focusing on a narrowly defined market niche. In contrast, a cost leadership strategy is often associated with a broadly defined target market. Markets are not static: they emerge, grow, mature, and decline. As a market moves from one life cycle stage to another, changes occur in its strategic considerations, from innovation rates to customer price sensitivity to intensity of competitive rivalry and beyond. The market life cycle provides a useful framework for studying markets and their impact on the enterprise's value proposition.
4.4.
Model Element 3: Business Processes
In the enterprise, work gets done through a complex network of business processes. Work processes are the vehicles of business life. If properly configured and aligned, and if properly coordinated by an integrated set of goals and measures, they produce a constant flow of value creation. A process view of the business involves elements of structure, focus, measurement, ownership, and customers. A process is a set of activities designed to produce a specified output for a particular customer or market. It implies a strong emphasis on how work is done rather than on what is done. Thus, a process is a structured set of work activities with clearly defined inputs and outputs. Understanding the structural elements of the process is key to understanding workflow, measuring process performance, and recommending process improvements.
4.5.
Model Element 4: Alliances and Relationships
Financial pressures and time constraints continually squeeze managers who do not have the resources to fill resource gaps through internal development. Acquisitions have not always been the most effective way to fill these gaps: they have proved expensive and have brought not only the capabilities needed but also many that were not desired. As a result, an increasing number of global enterprises recognize that strategic alliances can provide growth at a fraction of the cost of going it alone. In addition to sharing risks and investment, a well-structured, well-managed approach to alliance formation can support other goals, such as quality and productivity improvement. Alliances provide a way for organizations to leverage resources. The rapid emergence of strategic collaborations as alternatives to the usual go-it-alone entrepreneurial ventures is evident everywhere, from the growing collaborative efforts of large multinationals to the continuing use of alliances to help maintain competitive advantage.
4.6.
Model Element 5: Core Products and Services
Figure 6 External/Internal Analysis: A Global Perspective. (Adapted from Bell et al. 1997) The figure relates the organizational environment (configuration, culture, competencies, process advantage, communication) to the external economic, technological, social, and competitive environments and to customer/supplier relationships.
the whole global economy. Prudent managers continually scan these environments for indications of the emergence of business opportunities and risks. They understand current trends and the relevant implications for their business. Many trends are affecting the way business will be conducted in the future, but the New Economy is taking shape at the intersection of three very significant long-term trends that will continue to gather momentum in the decades ahead: the globalization of business, the revolution in information technology, and the growth of knowledge work. These trends are undermining the old order, forcing businesses to restructure and dramatically change their business models. In the following paragraphs, we discuss these trends and other significant social, demographic, political, and business reporting trends.
5.1.1.
Globalization of Business
Simply put, capitalism is spreading around the world: if not full-blown capitalism, then at least the introduction of market forces, freer trade, and widespread deregulation. It's happening in the former Communist countries, in the developing world of Latin America and Asia, and even in the industrialized West, with economic union in Europe and the Free Trade agreement in North America. The number of foreign companies operating in the United States is growing rapidly, at about 2% per year. They now total more than 40,000 and account for about 7% of all corporate assets in the United States. These foreign firms bring to bear the financial and production resources of their home countries on almost any emerging market opportunity in the United States and can quickly take market share with products manufactured less expensively abroad. Foreign competition has arisen not just in large industrial enterprises (automobiles, trucks, aircraft, and computers) but even in some traditional natural monopolies, such as telecommunications and satellite broadcasting. International trade and investment will play a much larger role in the U.S. economy in the future. Exports and imports already constitute over 25% of the economy. Increasingly porous national borders, changing corporate cultures, and continuing labor shortages are contributing to the emergence of a global workforce. As regions develop into pockets of specific talent, more workers will relocate to them. In other cases, companies will go to workers, hiring them where they live. Technology will enable the efficient execution of tasks regardless of proximity to the home office. This fluid interchange of individuals and information will bring together people of disparate backgrounds. Individuals with dual nationalities will be commonplace.
5.1.2.
Revolution in Information Technology
The foundation of the New Economy is the revolutionary explosion of computer processing power. Computing power doubles every 18 months. In addition, we have seen a 22-fold increase in the speed of data transmission over ordinary telephone lines during the past two decades. This wave of innovation in data communications has promoted the extraordinary build-out of the world's computer networks. Over 60 million computers are connected via the Internet. The network phenomenon is just as remarkable as the explosion in computing power. As information technology reduces the trade-off between the depth and quality of information and its access, the economics that underlie industry and organizational structures will be transformed. As more individuals and businesses connect electronically, the economics of scope will change organizational relationships, competition, and make-vs.-buy decisions. Technology is likely to continue on its path of becoming smaller, faster, cheaper, and less visible in the everyday world. The intersection of computing and telecommunications will bring about a fundamental change in the perception of distance and time. At the same time, the traditional interface that has existed between humans and technology will rapidly disappear. Remote sensing, data collection systems, cameras, and adjuncts to sensing abilities are among the major new applications in this field. More significantly, information technology is transforming the global business environment. Housing and autos used to drive the U.S. economy. Now information technology accounts for a quarter to a third of economic growth.
5.1.3.
Growth of Knowledge Work
Increasingly, knowledge has become more valuable and more powerful than natural resources and physical facilities. Value propositions are based on information and ideas rather than on the mere physical attributes of products. This phenomenon is true both in service businesses and in businesses that have historically focused heavily on tangible products. Knowledge is being used to greatly enhance the value of all physical products. The result is that much of our economic growth in recent years has been intangible. As value creation shifts from the mere economics of physical products to the economics of information, managing information, knowledge, and innovation will become a business imperative.
One of the most important longer-term consequences of the aging baby boomers is that household formation is slowing down. For companies that sell products and services to new households (housing, home furnishings, new cars, insurance), the days of growing with the market are over. Opportunities will now have to come from understanding the composition of the household more than from sheer growth in numbers. More and more sales will be driven by increasing the provision of value-added goods and services to households.
5.1.5.4.
Families Continue to Change
The composition of the household is changing. The share of households made up of married couples with children declined from 40% in 1970 to 25% in 1995. That share will continue to go down. The households with the most dynamic growth rates will be married couples without children. Business will make more sales to smaller households, and the sales will demand more one-on-one interaction. This is because needs and tastes will no longer be driven by family imperatives, which tend to be similar to each other, but by those of adults, which tend to be more personal. This means much wider swings in the purchasing patterns of households.
5.1.5.5.
Income Mobility of Workers
Increasing immigration rates, the growing importance of education, and changing systems of compensation that reward high performers have contributed to a
disturbing trend in the United States: a growing inequality in income between the rich and the poor. In the last three decades, the number of households with incomes under $15,000 (in constant 1996 dollars) has grown from 14 million to 21 million, while the number with incomes over $75,000 (in constant 1996 dollars) has grown from 4 million to 17 million. The good news is that every year about one-third of adults of working age move out of their income quintile. In five years, almost half move. Consumers' purchasing behavior is driven by household resources. But access to credit means that consumer purchases may not always be limited by current earnings. Many household purchases are based on expected changes in future income. The fact that as many as 50% of adults can expect to find themselves in a different income quintile in the next five years suggests that payment flexibility will be a critical tool for enterprises trying to meet the ever-more elusive needs of the 21st-century consumer.
5.1.5.6.
The Virtual Workforce
Technology and changing organizational cultures are enabling more people to work wherever they choose to live. In the next five years, the number of telecommuters is expected to reach 20 million or more. Some predict that half the workforce will be telecommuting from home, office, or shared facilities within the next decade. At the same time, the number of temporary workers, freelancers, independent contractors, and the like exceeds 25 million by some estimates. Virtual partnerships and alliances between independent contractors are expected to flourish as sophisticated telecommunications capabilities enable people to link up with anyone, anywhere.
5.1.5.7.
Political and Regulatory Changes
The regulation of commerce and industry is being adapted to meet the needs of the new consumer. The United States spent almost a century building a regulatory network to protect citizens from the complex risks of a rich, urbanized, industrialized society. Now a more sophisticated consumer, new technologies, and a much more competitive global market are gradually creating an environment that is more self-regulating and open to consumer discretion, in which it is easier to spread throughout the marketplace the risk that the government formerly took on. As a result, regulatory barriers are coming down. Sophisticated consumers, one of the key drivers of the move toward deregulation, will become a roadblock if they feel that their fundamental birthrights are threatened: (a) affordable and accessible health care choices, (b) safety and security of the financial system, (c) the quality of life that environmental regulations protect, and (d) any issue that seems to lead to an unwanted invasion of privacy.
5.1.5.8.
Privatization
The transformation of state-owned or controlled enterprises into privately owned or managed enterprises is sweeping the world for the second decade. This phenomenon indicates expanding confidence in the benefits of market forces the world over. While privatization initiatives have been common all over the world, Europe has felt the largest impact. Almost 60% of the private money raised in the last few years has been in Europe. The flow of public enterprises into the private sector will continue in the 21st century, though probably at a slower rate. Privatization has increased competition and lowered prices, given consumers more choices, increased the rate of innovation, ended subsidies to state-run enterprises, provided new investment opportunities, and replaced monopolies with new players.
5.1.5.9.
Key Trends to Follow
The New Economy will be affected largely by future developments in three key areas:
1. Information technology, providing refinements of computer technologies to optimize the possibilities of electronic commerce
2. Biotechnology, where the manipulation of genes will allow new control over diseases and health possibilities
3. Nanotechnology, the development of miniaturized systems so that everyday instruments such as mobile phones and calculators can be used in extraordinary ways
5.2.
Customers
In the New Economy, the companies with the smartest customers win. The richness and reach of information created by the network economy has moved power downstream to the customer. With more timely, accurate, and relevant information, customers will be in the driver's seat as existing products are improved and new products are introduced. Their power in the New Economy cannot be overemphasized. The extent to which customers are integrated into the design, development, and improvement of products and services will determine competitive advantage. Knowledge is the driver of the New Economy, and customers generally have much more of it than the producers of products and services.
created through privatization, are entering the world's capital markets. But as companies tap capital sources outside their home countries, capital suppliers are likely to demand representation on their boards of directors.
5.10.
The Economy
Macroeconomic developments, such as interest rate fluctuations, the rate of inflation, and exchange rate variations, are extremely difficult to predict on a medium- or long-term basis. Unpredictable movements of these macroeconomic indicators can not only affect a company's reported quarterly earnings but even determine whether a company survives. There is general agreement that the financial environment, characterized by increased volatility in financial markets, is riskier today than in the past. Growing uncertainty about inflation has been followed quickly by uncertainty about foreign exchange rates, interest rates, and commodity prices. The increased economic uncertainty has altered the way financial markets function. Companies have discovered that their value is subject to various financial price risks in addition to the risk inherent in their core business. New risk management instruments and hybrid securities have proliferated in the market, enabling companies to manage financial risk actively rather than try to predict price movements.
6.
ELEMENT 2: MARKETS
6.1.
Market Defined
In simple terms, a market is a group of customers who have a specific unsatisfied need or want and are able to purchase a product or service to satisfy that need. For example, the market for automobiles consists of anyone older than the legal driving age with a need for transportation, access to roads, and enough money to purchase or make a payment on a car. Markets can be broken down in numerous ways as marketers try to find distinctive groups of consumers within the total market. Market segmentation allows marketers to allocate promotional expenses to the most profitable segments within the total market and to develop specific ad campaigns for each one. Segmentation produces a better match between what a marketer offers and what the consumer wants, so customers don't have to make compromises when they purchase a product.
6.2.
Market Domains Served
The served market is the portion of a market that the enterprise decides to pursue. For example, a company that manufactures video games defines the market as anyone who owns a television. The potential market is defined as households with children and a television. The available market is limited to households with children, a television, enough income to make the purchase, and a nearby store that carries the game. The served market consists of households with a television, access to a toy store, sufficient income to buy the product, and children within a specific age range.
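The successive narrowing from total market to served market in the video-game example can be sketched as a chain of filters. The household attributes and the income threshold below are illustrative assumptions; the chapter gives no figures.

```python
from dataclasses import dataclass

# A sketch of the market-tier filtering for the video-game example.
# The attributes modeled and the income threshold are illustrative
# assumptions, not drawn from the chapter.

@dataclass
class Household:
    owns_tv: bool
    has_children: bool
    income: int         # annual income in USD (illustrative)
    store_nearby: bool  # a store carrying the game is accessible

MIN_INCOME = 30_000  # assumed affordability threshold

def potential_market(households):
    # Potential market: households with children and a television.
    return [h for h in households if h.owns_tv and h.has_children]

def available_market(households):
    # Available market: the potential market narrowed to households
    # with enough income and a nearby store that carries the game.
    return [h for h in potential_market(households)
            if h.income >= MIN_INCOME and h.store_nearby]

households = [
    Household(True, True, 45_000, True),   # in both tiers
    Household(True, True, 20_000, True),   # potential only (income too low)
    Household(True, False, 80_000, True),  # no children: excluded entirely
]
# len(potential_market(households)) == 2
# len(available_market(households)) == 1
```

Each tier is a strict subset of the one before it, which is the structural point of the served-market concept: every narrowing criterion trades market size for fit with the enterprise's offering.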
7.
7.1.
Figure 8 Illustrative Strategic Management Process. (From Risk Management Partners, Inc.) The process comprises eight steps: 1.1 Monitor and Assess the External Environment; 1.2 Understand Customer Needs and Requirements; 1.3 Define Value Proposition; 1.4 Develop Mission Statement; 1.5 Formulate Business Strategy; 1.6 Design Organizational Structure and Processes; 1.7 Develop and Set Organizational Goals; 1.8 Continuously Monitor and Improve.
Strategic management principles in the New Economy must recognize the importance of knowledge and information to all value propositions and the need for structures that can adapt quickly and continuously improve in real time. Figure 9 provides an example of the important components of strategy in a connected world. In the Enterprise Business Model block in Figure 9:

Knowledge Networks means capturing and managing knowledge as a strategic asset. Process Excellence means designing and managing processes to achieve competitive advantage. Core Competencies means focusing on those things that the enterprise does best and using alliance partners to supplement those skills.

In particular, strategy involves continuously reconceiving the business and the role that business can play in the marketplace. The new business model is a real-time structure that can change continually and adapt more quickly and better than the competition. The traditional approach to strategy development relied upon a set of powerful analytic tools that allowed executives to make fact-based decisions about strategic alternatives. The goal of such analysis was to discuss and test alternative scenarios to find the most likely outcome and create a strategy based on it. This approach served companies well in relatively stable business environments; however, fact-based decisions in the rapidly changing New Economy will be largely replaced by imagination and vision. Rapidly changing business environments with ever-increasing uncertainty require new approaches to strategy development and deployment. Traditional approaches are at best marginally helpful and at worst very dangerous: misjudging uncertainty can lead to strategies that identify neither the business opportunities nor the business risks. Nevertheless, making systematically sound strategic decisions will continue to be important in the future. Even in the most uncertain environments, executives can generally identify a range of potential scenarios. The key will be to design business models that can
Figure 9 A New Economy Strategy. (From Risk Management Partners, Inc.) Components shown: Strategic Intent; Monitor & Respond; Business Strategy; IT Strategy (Connectivity, Open Systems, Real-time Information); Enterprise Business Model (Flexible, Adaptive, Innovative; Knowledge Networks, Process Excellence, Core Competencies).
… business processes who are externally focused on serving customers outside the organization. Without appropriate resources (information, people, and capital), the core processes cannot offer the value customers need and will cease to provide an effective source of competitive advantage.
7.2. Process Analysis Components
Figure 10 displays a framework for process analysis. The process analysis compon
ents in this framework are discussed in this section.
7.2.1. Process Objectives
Processes are established to serve specific customer needs. The customers may be internal, such as another process, or external to the enterprise. The process objectives define what value will be supplied to the customer. One can look at them as the whole purpose for which the organization has put together this set of resources and activities. Process objectives need to be specific, measurable, attainable, and realistic, and to have a sense of time. Business process objectives may differ significantly between enterprises within an industry or industry segment, being shaped by the organization's strategic objectives and related critical success factors. For example, the business objectives for the materials procurement process in a consumer products company might be as follows:
Figure 10 Framework for Process Analysis. The components are defined as follows:

Process Objectives: The objectives of the process are statements that define the key roles that the process plays in the achievement of the entity's business objectives.

Inputs: The inputs to a process represent the elements, materials, resources, or information needed to complete the activities in the process.

Activities: The activities are those actions or subprocesses that together produce the outputs of the process. For some processes, arrows are omitted due to the non-sequential nature of the activities.

Outputs: The outputs represent the end result of the process: the product, deliverable, information, or resource that is produced.

Systems: The systems are collections of resources designed to accomplish process objectives. Information systems produce reports containing feedback about the performance of operational, financial, and compliance objectives that make it possible to run and control the process.

Risks That Threaten Objectives: Process risks are the risks that may threaten the attainment of the process's objectives.

Controls Linked to Risks: Controls are the policies and procedures, which may or may not be implemented, that help provide assurance that the risks are reduced to a level acceptable to meet the process objectives.
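The component definitions above amount to a record structure that an engineer fills in once per process, with controls linked back to the risks they mitigate. A minimal data-model sketch, assuming nothing beyond the component names themselves; the sample procurement entries are invented:

```python
# A minimal sketch of the process analysis template described above.
# Component names follow the framework; sample values are invented.

from dataclasses import dataclass

@dataclass
class ProcessAnalysis:
    objectives: list   # key roles the process plays in business objectives
    inputs: list       # elements, materials, resources, information needed
    activities: list   # actions/subprocesses that produce the outputs
    outputs: list      # end results: products, deliverables, information
    systems: list      # resource collections used to run/control the process
    risks: list        # threats to attainment of the process objectives
    controls: list     # policies/procedures reducing risks to acceptable levels

    def uncontrolled_risks(self, links: dict) -> list:
        """Risks with no control linked to them (links maps risk -> controls)."""
        return [r for r in self.risks if not links.get(r)]

# Invented example: a materials procurement process.
procurement = ProcessAnalysis(
    objectives=["Ensure timely supply of materials"],
    inputs=["Production plan", "Supplier catalogs"],
    activities=["Select suppliers", "Issue purchase orders"],
    outputs=["Delivered materials", "Supplier performance data"],
    systems=["Purchasing system"],
    risks=["Supplier failure", "Price volatility"],
    controls=["Dual sourcing"],
)
links = {"Supplier failure": ["Dual sourcing"]}
print(procurement.uncontrolled_risks(links))  # → ['Price volatility']
```

Keeping the risk-to-control links as an explicit mapping makes gaps visible: any risk with no linked control is flagged for attention, which is the point of the framework's last two components.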
7.2.3. Activities
Almost everything that we do or are involved in is a process. Some processes are highly complex, involving thousands of people, and some are very simple, requiring only seconds of time. Therefore, a process hierarchy is necessary to understand processes and their key activities. From a macro view, processes are the key activities required to manage and/or run an organization. Any key business process (a strategic management process, a core business process, or a resource management process) can be subdivided into subprocesses that are logically related and contribute to the objectives of the key business process. For example, Figure 11 provides an example of the subprocess components of a new product planning process for a consulting firm. Every key business process or subprocess is made up of a number of activities. As the name implies, activities are the actions required to produce a particular result. Furthermore, each activity is made up of a number of tasks that are performed by an individual or by small teams. Taken together, tasks form a micro view of the process.
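The hierarchy just described (key business process, subprocesses, activities, tasks) maps naturally onto a nested structure. A hypothetical sketch, with all names invented:

```python
# Sketch of the process hierarchy: key business process -> subprocesses
# -> activities -> tasks. All example names are invented for illustration.

hierarchy = {
    "New product planning": {                  # key business process
        "Determine product attributes": {      # subprocess
            "Gather customer requirements": [  # activity
                "Schedule interviews",         # tasks (the micro view)
                "Summarize findings",
            ],
        },
    },
}

def count_tasks(node):
    """Recursively count leaf tasks beneath any level of the hierarchy."""
    if isinstance(node, list):      # a list of tasks is the leaf level
        return len(node)
    return sum(count_tasks(child) for child in node.values())

print(count_tasks(hierarchy))  # → 2
```

The same recursive walk works at any level, which reflects the text's point that a macro view and a micro view are two depths of the same structure.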
Figure 11 Example: New Product Planning Process: Consulting. (From Risk Management Partners, Inc.) Recoverable subprocesses include 6.1 Determine Product Attributes & Features; Propositions & Benefit Statements; Alternatives & Models; Market Research; Potential Buyers; Market Testing; Concept Descriptions; and 6.8 Develop Preliminary Business Case.
Figure 13 Illustrative Process Risks for Resource Management Processes. (From Risk Management Partners, Inc.) Recoverable risk categories and examples: Integrity Risk (management fraud, employee fraud, illegal acts, unauthorized use); Technology Risk (management information systems, dependence upon IT, reliability, external IT); Governance Risk (authority, leadership, performance incentives, limits, internal audit); Compliance Risk (taxation, environmental, health & safety, pension fund, regulatory); Inbound and Outbound Information Risk (relevance, completeness, accuracy, timeliness) surrounding the process flow from inputs through key macro processes, key subprocesses, and key tasks to outputs, with key control indicators; Financial Risk (budgeting & planning, accounting information, investment evaluation, financial reporting, cash flow, funding); Financial Management Risk (currency exchange, cash transfer/velocity, settlements, reinvestment/rollover, counterparty, interest rates, credit, collateral, derivatives); Human Resources Risk (HR management, recognition/retention, leadership development, performance management, compensation, recruitment, competencies).
and assure integrity of data must be recognized as only one aspect of the contemporary organization's control structure. Other components of the control structure include diagnostic control systems, belief systems, boundary systems, and incentives. Diagnostic control systems, for example, recognize that empowerment requires a change in what is controlled. Consistent with this, empowered individuals are being asked to take risks, and there must be commensurate rewards for the risk taking and achievement of superior performance. Such rewards, which can be either monetary or nonmonetary, are made on the basis of tangible performance consistent with the organization's mission. The evolving organizational controls structure consists of strategic controls, management controls, and business process controls. A brief description of these elements follows:

Strategic controls are designed to assess continuously the effect of changes in environment risks on the business, formulate business risk control strategies, and align the organization with those strategies. Management controls drive business risk assessment and control throughout the organization. Process controls are designed to assess continuously the risk that business processes do not achieve what they were designed to achieve. Embedded in process risk is information processing/technology risk, which arises when the information technologies used in the process are not operating as intended or are compromising the availability, security, integrity, relevance, and credibility of information produced. Figure 15 provides examples of these types of controls.
8. ELEMENT 4: ALLIANCES AND RELATIONSHIPS
8.1. Alliance Defined
Several types of alliances exist, each with a specific purpose. The following are three of the more common types:

Transactional alliances are established for a specific purpose, typically to improve each participant's ability to conduct its business. Cross-licensing in the pharmaceutical industry is an example; open-ended purchase orders for specific products would be another. Strategic sourcing involves a longer-term commitment. It is a partnership between buyer and seller that can reduce the cost and friction between supplier and buyer by sharing product development plans, jointly programming production, sharing confidential information, or otherwise working together much more closely than do typical suppliers and customers. Wal-Mart and Home Depot are examples of companies with substantial capabilities in strategic sourcing.
Figure 14 The Old and New Business Risk Control Paradigms. (From Risk Management Partners, Inc.) Recoverable entries from the Old Paradigm column include: ineffective people are the primary source of business risk. The New Paradigm column includes: risk assessment is a continuous process; business risk identification and control management are the responsibility of all members of the organization; business risk assessment and control are focused and coordinated with senior-level oversight; control is focused on the avoidance of unacceptable business risk, followed closely by management of other unavoidable business risks to reduce them to an acceptable level; a formal business risk controls policy is approved by management and board and communicated throughout the company; anticipate and prevent business risk, and monitor business risk controls continuously; ineffective processes are the primary source of business risk.
Strategic Controls: monitor external forces; assess business risk at the strategic level; define risk and control strategy/policies; allocate appropriate resources; monitor performance of the organization; support continuous improvement.

Management Controls: communicate business objectives, assumptions, and code of conduct; establish clear boundaries and limits; select and develop best practices; establish accountability; support effective communication and information sharing; implement effective processes for risk management.

Process Controls: define process control objectives; assess business risk at the process level; design and implement controls; measure the process performance; monitor the process control activities.

Figure 15 Illustrative Examples of Controls. (From Risk Management Partners, Inc.)
Strategic alliances involve two enterprises pulling together to share resources, funding, or even equity in a new enterprise on a long-term basis. For example, Motorola, Apple, and IBM pooled research programs and made financial commitments to develop a new-generation engine for personal computers: the Power PC.
8.2. Strategic Alliances in the Value / Supply Chain

The move toward strategic alliances is very strong because of increased global competition and industry convergence. For example, in banking, insurance, mutual funds, financial planning, and credit cards, industry boundaries are beginning to blur. Strategic alliances form when one enterprise alone can't fill the gap in serving the needs of the marketplace. Financial pressures and time constraints have squeezed managers without resources to fill the gaps through internal development. Also, acquisitions have proved expensive and have not always brought the needed capabilities. An increasing number of global enterprises recognize that strategic alliances can provide growth at a fraction of the cost of going it alone. In addition to sharing risks and investment, a well-structured, well-managed approach to alliance formation can support other goals, such as quality and productivity improvement. Alliances provide a way for organizations to leverage resources. In the future, many organizations will be nothing more than "boxes of contracts," with substantially all of the traditional value/supply chain components outsourced to business partners in the form of alliances or other strategic relationships. In addition to the more obvious reasons to focus on a company's core competencies, the new economics of information will dramatically dismantle traditional business structures and processes. New business structures will re-form based on the separate economics of information and physical products. Over the next few years, many relationships throughout the business world will change. The dismantling and reformulation of traditional business structures will include value chains, supply chains, and business models. This will result from the separation of the economics of information from the economics of physical products. In addition, the explosion of networks and content standards will allow informational value chains to separate from the physical chain. This will create enormous opportunities to use information innovatively and create new knowledge for competitive advantage. As value chains deconstruct to create new market opportunities, more and more alliances and new relationships will be formed to maximize opportunities and fill the gaps that will exist in traditional business processes.
8.3.
Performance Management
While the promise of alliances is very bright, managing a myriad of different an
d sometimes very complex relationships will be a signi cant challenge for most ent
erprises. Improper guidelines, poor
Figure 16 Core Products / Services Measurement Framework. (From Risk Management Partners, Inc.) Each key performance indicator is marked for short-term and/or long-term monitoring. Indicators include: Market Performance (customer acceptance, customer satisfaction, market share, unit sales growth, brand image); Process Performance (time-to-market, quality standards, unique benefits, technology enablers); Resource Performance (cross-functional teaming, performance against goals); Financial Performance (margin goals, profitability goals, return on investment, new product sales / total sales).
The measurement framework in Figure 16 provides an example of the product dimens
ions that should be continuously monitored and improved.
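One way to make such a measurement framework operational is to tag each indicator with the horizon over which it is monitored. A sketch of that idea follows; note that which indicator carries which horizon below is an assumption made for illustration, not the original table's markings:

```python
# Sketch of a Figure 16-style measurement framework: indicators grouped by
# category, each tagged short- and/or long-term. Horizon tags are invented.

KPIS = {
    "Market performance": {
        "Customer satisfaction": {"short", "long"},
        "Market share": {"long"},
        "Brand image": {"long"},
    },
    "Process performance": {
        "Time-to-market": {"short"},
        "Quality standards": {"short", "long"},
    },
    "Financial performance": {
        "Margin goals": {"short", "long"},
        "Return on investment": {"long"},
    },
}

def horizon(kpis, term):
    """List every indicator monitored over the given horizon, sorted by name."""
    return sorted(
        name
        for group in kpis.values()
        for name, terms in group.items()
        if term in terms
    )

print(horizon(KPIS, "short"))
# → ['Customer satisfaction', 'Margin goals', 'Quality standards', 'Time-to-market']
```

Slicing the framework by horizon makes it easy to build a short-term dashboard and a long-term review from the same underlying table.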
10. ELEMENT 6: CUSTOMERS
10.1. Customer Defined
Customers are the reason that organizations exist; they are the most valuable assets. They are consumers or other businesses that utilize the enterprise's products and services. Large customers and/or groups of customers can exert significant influence over an organization. The business model is a framework for analyzing the contributions of individual activities in a business to the overall level of customer value that an enterprise produces, and ultimately to its financial performance.
10.2. Categories of Customers

An organization's customer base is made up of customers with many different attributes. They may be categorized along many different dimensions, depending on the purpose for which the segmentation is performed. Possible segmentation criteria include size, market, profitability, geographic location, customer preferences, influence or bargaining power, and intellectual capital. Segmentation gets the enterprise closer to the customer and allows the enterprise to understand customer needs in a very deep way. This closeness gives the enterprise access to information that is critical to strategy formulation and implementation. In addition, the enterprise and/or engineer can utilize customer segmentation techniques for various improvement initiatives.
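Segmentation along any one of the criteria listed above is, mechanically, a grouping operation over the customer base. A minimal sketch with invented customers and an invented attribute set:

```python
# Minimal customer-segmentation sketch. Customers, attributes, and the
# chosen segmentation key are all invented for illustration.

from collections import defaultdict

customers = [
    {"name": "Acme", "size": "large", "region": "EMEA", "profit": 1.2e6},
    {"name": "Blyth", "size": "small", "region": "EMEA", "profit": 8.0e4},
    {"name": "Corex", "size": "large", "region": "APAC", "profit": 6.5e5},
]

def segment(customers, key):
    """Group customers into segments by any single attribute."""
    segments = defaultdict(list)
    for c in customers:
        segments[c[key]].append(c["name"])
    return dict(segments)

print(segment(customers, "size"))
# → {'large': ['Acme', 'Corex'], 'small': ['Blyth']}
```

Because the segmentation key is a parameter, the same routine serves any of the criteria the text lists: swap "size" for "region" or a profitability band, depending on the purpose of the analysis.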
10.3. Products and Services and Customer Linkages

Products and services and customers are inextricably linked. An enterprise's value proposition is expressed in the value (the products and services) it delivers to its customers. In the New Economy, these linkages will become more formalized as organizations innovate and produce new products with their customers. Customer capital will grow when the enterprise and its customers learn from each other. Collaborative innovation will be in everyone's best interest.
10.4. Relationship of Customers to Markets

Markets are made up of customers that have some common interests. Markets can be divided into finer and finer segments (customer groups), with each segment having its own issues. Market seg-
… management's planned responses to such business risks, and the business processes that management has implemented. The strategic analysis is also focused on the articulation between the business strategy and the supporting business processes, as well as the articulation between the identified business risks and management's responses or controls. During strategic analysis, the engineer may first obtain general industry information, including that which is available from trade associations, periodicals, and the like. Then he or she will consider obtaining information about the structure of the industry, including its segmentation, the dynamics among the various organizations that comprise the industry, the critical business issues facing entities in the industry, and significant industry risks. At the conclusion of the strategic analysis, the engineer will have learned the "directional course" the management has set in response to the environment, taking into consideration:

The relationship between the broad economic environment and the industry segment(s) in which the enterprise competes
The enterprise's position and role within its respective industry segment(s)
Threats to maintaining or improving the current position
The needs and wants of the enterprise's chosen market segment(s)
The total productive capacity of the enterprise and its competitors for each niche
Management's vision of how to satisfy the market needs better than its rivals
Management's specific strategies and plans for achieving that vision

Also, the engineer will have obtained an understanding of how and to what extent management steers the business and attains a fit between its strategy and the range of environmental forces acting on it. This will have been done through review of:

The enterprise's strategic management process
The formalized strategic plan
The enterprise's approach to environmental scanning to monitor emerging or changing external threats
Management's methods for communicating strategies throughout the organization, as well as the clarity of such communications
The methods and measures used to monitor entity-level performance in terms of the strategic goals

The strategic analysis will provide the engineer with in-depth knowledge of the enterprise's value proposition and insight into opportunities to improve business performance and mitigate the risks that threaten achievement of the established objectives.
11.2.2. Business Process Analysis
Business process analysis is designed to provide the engineer with an in-depth understanding of the key business processes identified earlier during strategic analysis. Through this analysis, the engineer learns how the organization creates value. Specifically, each core business process is studied in depth to discern significant process objectives, the business risks related to these objectives, the controls established to mitigate the risks, and the financial implications of the risks and controls. Likewise, each significant resource management process is examined with the same foci. Business process analysis adopts a "value chain" approach to analyzing the interconnected activities in the business, both domestically and globally. It is consistent with W. Edwards Deming's views of business processes and the role of total quality management in monitoring the value of these processes. Core business processes represent the main customer-facing activities of the business. It is the successful combination and execution of the core business processes that creates value in the eyes of customers and therefore results in profitable customer sales. During business process analysis, the engineer recognizes the cross-functional nature of activities in the enterprise's business, that not all activities within and across processes are sequential, and that important linkages exist between processes. Figure 5, above, provides the context for a process analysis example. Specifically, it depicts the four core business processes of a hypothetical retail company: brand and image delivery, product / service delivery, customer service delivery, and customer sales. Consider the brand and image delivery core business process, which might include the following subprocesses: format development and site selection, brand management, advertising and promotion, visual merchandising, and proprietary credit. Figure 17 presents an example of a completed process analysis template for the format development and site selection subprocess. Such a process analysis template can be used by the engineer to analyze his or her enterprise's core business processes and significant resource management processes. The template is a framework that guides the engineer's collection and integration of information about business processes, using the components described in Section 7.2: process objectives, inputs, activities, outputs, systems, risks that threaten objectives, and management controls linked to risks. In the retail company, the engineer would address each of the following objectives, which are typical for this process:

1. Provide an environment in which the customer's needs can be met.
2. Deliver a cost-effective and viable shop solution.
3. Inject freshness and maintain a competitive edge.
4. Use the store as a vehicle for differentiation.
5. Open the store on time and stay on budget.
6. Take maximum advantage of available financial incentives.
From a value-chain perspective, and focusing first on process inputs, among the key considerations are historical performance, technology capability, competitor formats, customer profile, and cost constraints. Continuing the value-chain perspective, the engineer will gather information about process activities. The engineer will also consider outputs like the following:

1. Store design
2. The macro space plan
3. Recruitment and training
4. Customer satisfaction measures
5. The roll-out implementation plan
The engineer also will recognize that various systems are germane to the format development and site selection subprocess. These systems include the customer database, space management, property development and construction, market research, project appraisal, and contract management systems. The engineer next considers the risks that threaten achievement of the process objectives and the controls that have been implemented to mitigate such risks. Continuing with the focus on the format development and site selection subprocess, such risks may include the possibility that competitors will develop better store formats, or an overemphasis on new stores relative to existing stores. Controls that could mitigate such risks are regular monitoring of competitors, in concert with contingency planning and usage of appropriate evaluation criteria. A similar approach is taken by the engineer for the significant resource management processes, which were identified for the retail company in Figure 5 to be financial / treasury management, information management, human resource management, property management, and regulatory management. Figure 18 presents an example of a completed process analysis template for a retail company's human resource management process. As shown in Figure 18, the following are among the process objectives of relevance: attract and retain a skilled and motivated workforce; control employee costs while maintaining morale and productivity; comply with regulatory / tax filing requirements; and adhere to the organization's code of conduct. Maintaining a value-chain perspective, the engineer next considers inputs to this process, including the organization's strategic plan, its operating plan, employee regulations, tax regulations, union contracts, industry statistics and market data, and training goals. Activities are then considered, such as developing and maintaining human resource policies and procedures; establishing and maintaining compensation and benefit policies and programs; identifying resource requirements; recruitment and hiring; training and development; performance reviews; compensation and benefit administration; monitoring of union contracts and grievances; and monitoring compliance with regulations. The engineer then will consider outputs, such as regulatory filings, personnel files, tax filings, and performance reviews. Of course, various systems will be recognized as keys to successful human resource management, such as those related to compensation and benefits, tax compliance, and regulatory compliance. Subsequently, the engineer considers risks related to the human resource management function, including high levels of staff turnover, noncompliance with regulations, and noncompetitive compensation packages. In turn, the engineer considers the controls that can mitigate the risks, such as implementing growth and opportunity plans for employees; regulatory monitoring; and benchmarking salary costs against industry and other norms. At the conclusion of business process analysis, the engineer will have updated his / her understanding of (a) how the enterprise creates value, (b) whether the enterprise has effectively aligned the business process activities with the business strategy, (c) what the significant process risks are that threaten the achievement of the enterprise's business objectives, and (d) how effective the processes are at controlling the significant strategic and process risks. This detailed and updated knowledge about the business provides a basis for the engineer's development of recommendations about improvement opportunities and risk management.
11.2.3. Business Performance Measurement
Information-age enterprises succeed by investing in and managing their intellectual assets as well as integrating functional specialization into customer-based business processes. As organizations acquire these new capabilities, their measure of success should not depend solely on a traditional, historical
[Figure 18, a process map, is not reproducible here. Its panels cover: process objectives (attract and retain a skilled and motivated workforce; control employee costs while maintaining morale and productivity; comply with regulatory / tax filing requirements; adherence to code of conduct); inputs (strategic plan, operating plan, resource requests, employee and tax regulations, union contracts, industry statistics and market data, training goals / requests, personnel feedback); activities (develop and maintain human resource policies and procedures; establish compensation and benefits policies and programs; identify resource requirements; recruitment and hiring; training and development; performance reviews and counseling; compensation and benefits administration; monitoring of union contracts, grievances, and regulatory compliance and required filings); outputs (regulatory and tax filings, compensation and benefits policies and administration, personnel files, human resource policies / procedures, training programs, performance reviews, payroll and benefits disbursements, staffing and cost data); systems (human resource management, compensation and benefits, tax, regulatory, cash disbursement / payables, and employee self-service systems); risks that threaten objectives (high level of staff turnover, poorly motivated staff, noncompliance with regulations (tax, labor, etc.), lack of personnel with needed skill sets, noncompetitive compensation packages); and controls linked to risks (employee surveys with follow-up on results; growth and opportunity plans for employees; incentive pay compared to performance; labor relations monitoring and employee grievance committees; regulatory monitoring; formal hiring criteria and effective training programs; salary costs compared to industry norms).]

Figure 18 Example Human Resource Management Process, Retail Company. (Adapted from Bell et al. 1997)
INDUSTRIAL ENGINEERING FUNCTION AND SKILLS
[Figure 20, a measurement map, is not reproducible here. It links vision and strategy to financial performance, market performance (e.g., recurring business ratio, marketing capacity, win-loss ratio, engagement size, penetration ratio, sole source ratio, new client ratio, client retention ratio, new product ratio, marketing breadth ratio, client satisfaction, backlog), and process performance (e.g., revenue growth, revenue per employee, net margin, client profitability, days outstanding, resource utilization, training evaluations, staff turnover, recruiting, new products / services, performance against goals, methodologies and techniques, technology enablers, knowledge bases), with cause-and-effect relationships among the measures.]

Figure 20 Sample Financial and Nonfinancial Measures, Consulting Firm. (From Risk Management Partners, Inc.)
objectives. KPIs at the process level typically focus on three dimensions of process performance: cycle time, process quality, and process cost. More specifically, management might monitor and control process performance using one or more of the following types of KPIs:

- Waste, rework, and other indicators of process inefficiency
- Backlog of work in process
- Customer response time
- Number of times work is recycled between subprocesses and departments
- Number of document errors
- Customer satisfaction ratings
- Number of routing errors
- Value-adding processing time
- Information processing errors
An integrated performance management system, with the appropriate KPIs, can prov
ide evidence to the engineer that the organization is maintaining the level of p
rocess quality required to sustain product demand.
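As a minimal sketch of how two of the KPIs listed above might be computed, consider the following. The structure and field names are illustrative assumptions, not part of the handbook's framework:

```cpp
#include <cassert>

// Illustrative sketch: two common process KPIs from the list above.
// ProcessKPI and its field names are hypothetical, not a standard API.
struct ProcessKPI {
    double cycle_time_hours;    // total elapsed time for the process
    double value_adding_hours;  // portion spent on value-adding work
    int units_completed;        // units finished in the period
    int units_reworked;         // units requiring rework (inefficiency)

    // Fraction of total cycle time that actually adds value.
    double value_adding_ratio() const {
        return value_adding_hours / cycle_time_hours;
    }
    // Rework rate: an indicator of process inefficiency.
    double rework_rate() const {
        return static_cast<double>(units_reworked) / units_completed;
    }
};
```

A process with 10 value-adding hours in a 40-hour cycle would report a value-adding ratio of 0.25, flagging 75% of elapsed time as a candidate for cycle-time reduction.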
11.2.4.
Risk Assessment
Risk assessment is a continuous process performed throughout the design and development of the business model. During strategic analysis and business process analysis, the engineer reviews the processes and procedures that the enterprise has established to identify and manage strategic and process risks. During the engineer's review of the enterprise's risk management activities, he or she develops an understanding of management's perceptions of business risk, both strategic risks and business process risks, and considers the reasonableness of the assumptions that underlie management's assessments of the potential impacts of these risks. These underlying assumptions may be viewed as a combination of assumptions about the probability of occurrence and assumptions about the magnitude of impact. The engineer also uses other information obtained during the strategic and business process analyses to make judgments about coverage (i.e., whether management has considered all significant business risks), and he or she uses this information to make judgments about the extent to which strategic and process risks remain uncontrolled (i.e., to determine the level of residual risk). Next, the engineer further integrates information about residual business risks by grouping risks based on the particular business model elements to which they relate. He or she will also consider possible interactions among these groups of risks and develop expectations about how they might be manifested in the performance of the business. This integrated knowledge, together with the appropriate business measurements, provides the engineer with a basis for performing a diagnosis of the
APPENDIX List of Generic Business Processes and Subprocesses
Strategic Management Processes
1.0 Understand Markets and Customers:
  1.1 Determine customer needs and wants.
  1.2 Monitor changes in market and customer expectations.
2.0 Develop Vision and Strategy:
  2.1 Monitor the external environment.
  2.2 Define value proposition and organizational strategy.
  2.3 Design organizational structure / processes / relationships.
  2.4 Develop and set organizational goals.
3.0 Manage Improvement and Change:
  3.1 Measure organizational performance.
  3.2 Conduct quality assessments.
  3.3 Benchmark performance.
  3.4 Improve processes and systems.
Core Business Processes
4.0 Design Products and Services:
  4.1 Develop new product / service concepts and plans.
  4.2 Design, build, and evaluate prototype products / services.
  4.3 Refine existing products / services.
  4.4 Test effectiveness of new / revised products or services.
  4.5 Prepare for production.
5.0 Market and Sell Products / Services:
  5.1 Market products / services to relevant customers.
  5.2 Process customer orders.
6.0 Produce and Deliver Goods:
  6.1 Plan for and acquire necessary resources.
  6.2 Convert resources or inputs into products.
  6.3 Deliver products.
  6.4 Manage production and delivery process.
7.0 Produce and Deliver Services:
  7.1 Plan for and acquire necessary resources.
  7.2 Develop human resource skills.
  7.3 Deliver service to the customer.
  7.4 Ensure quality of service.
8.0 Invoice and Service Customers:
  8.1 Bill the customer.
  8.2 Provide after-sales service.
  8.3 Respond to customer inquiries.
Resource Management Processes
9.0 Develop and Manage Human Resources:
  9.1 Create and manage human resource strategies.
  9.2 Perform work level analysis and planning.
  9.3 Manage deployment of personnel.
  9.4 Develop and train employees.
  9.5 Manage employee performance, reward, and recognition.
  9.6 Ensure employee well-being and satisfaction.
  9.7 Ensure employee involvement.
  9.8 Manage labor / management relationships.
  9.9 Develop human resource information systems.
10.0 Manage Information Resources:
  10.1 Plan for information resource management.
  10.2 Develop and deploy enterprise support systems.
  10.3 Implement systems security and controls.
  10.4 Manage information storage and retrieval.
  10.5 Manage facilities and network operations.
  10.6 Manage information resources.
  10.7 Facilitate information sharing and communication.
  10.8 Evaluate and audit information quality.
11.0 Manage Financial and Physical Resources:
  11.1 Manage financial resources.
  11.2 Process finance and accounting transactions.
  11.3 Report information.
  11.4 Conduct internal audits.
  11.5 Manage the tax function.
  11.6 Manage physical resources.
12.0 Execute Environmental Management Program:
  12.1 Formulate environmental management strategy.
  12.2 Ensure compliance with regulations.
  12.3 Train and educate employees.
  12.4 Implement pollution-prevention program.
  12.5 Manage remediation efforts.
  12.6 Implement emergency response program.
  12.7 Manage government agency and public relations.
  12.8 Manage acquisition / divestiture environmental issues.
  12.9 Develop / manage environmental information systems.
13.0 Manage External Relationships:
  13.1 Establish communication networks and requirements.
  13.2 Communicate with stakeholders.
  13.3 Manage government relationships.
SECTION II TECHNOLOGY
A. Information Technology
B. Manufacturing and Production Systems
C. Service Systems
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
II.A
Information Technology
building of information systems (e.g., database management software) and tools that support methodologies for system development.
1.1.
The Nature of Information Systems
The core function of an information system is record keeping. Record keeping represents and processes information about some domain of interest, such as a job shop, an inventory, or suppliers
It is not unusual for a system's users to want reports other than those in the predefined portfolio. The usefulness of these kinds of reports may have been overlooked in the original design of the system, or new reports may be needed because of changing conditions in a user's work environment. A query facility provides a nonprocedural way to state what should be in the desired report, allowing a user to specify what is desired without having to specify how to extract data from records in order to produce the report. An information system equipped with a query facility is a step in the direction of a decision support system. It gives the user a way to address some of the ad hoc knowledge needs that arise in the course of decision making.
1.2.
Classes of Information Systems
Having characterized information systems and distinguished them from decision support systems, we can now look at classes of ISs. One approach to classification is based on functional application domains: ISs for finance applications, manufacturing applications, human resource applications, accounting applications, sales applications, and so forth. Another classification approach is to differentiate ISs in terms of the underlying technologies used to build them: file management, database management, data warehouses, Visual Basic, C++, COBOL, HTML, XML, artificial intelligence, and so on. However, it is not uncommon for multiple technologies to be employed in a single IS. Here, we classify ISs in terms of their intended scopes. Scope can be thought of in two complementary ways: the scope of the records that are kept by the IS and the scope of IS usage. The IS classes form a progression from those of a local scope, to ISs with a functional scope, to enterprisewide ISs, to ISs that are transorganizational in scope. Some tools for building information systems are universal in that they can be used for any of these classes. Examples include common programming languages and database management tools. Others are more specialized, being oriented toward a particular class. Representative examples of both universal and specialized tools are presented in this chapter.
1.2.1.
Local Information Systems
Local ISs are in wide use today. This is due to such factors as the tremendous rise in computer literacy over the past two decades, the economical availability of increasingly powerful computing devices (desktop and handheld), and the appearance of more convenient, inexpensive development tools. Local ISs are used in both small and large organizations. They are also prominent outside the organizational setting, ranging from electronic calendars and address books to personal finance systems to genealogy systems.

A local IS does record keeping and reporting that relate to a specific task performed by an individual. For instance, a salesperson may have an IS for tracking sales leads. The IS would keep records of the salesperson's current, past, and prospective customers, including contact information, customer characteristics (e.g., size, needs, tastes), history of prior interactions, and schedules of planned interactions. The IS would allow such records to be updated as needed. Reports produced by this IS might include reports showing who to contact at a customer location and how to contact them, a list of all customers having a particular characteristic (e.g., having a need that will be addressed by a new product offering), a list of prospects in a given locale, and a schedule showing who to contact today.

Operation of a local IS tends to be under its user's personal control. The user typically operates the IS directly, assuming responsibility for creating and updating its records and requesting the production of reports. The behavior of a local IS, in terms of what records it can keep and what reports it can produce, may also be under the user's control. This is the case when the IS software is custom-built to meet the particular user's requirements. The user either personally builds the IS software or has it built by a professional IS developer according to his or her specifications. At the opposite extreme, the user may have little control over the behavior of the local IS's software. This is the case when the user obtains off-the-shelf, packaged software that has been designed by a vendor to suit most of the needs of a large class of IS users.

Thus, the user of a local IS has a choice between creating or obtaining custom-built IS software and acquiring ready-made software. Customized construction has the advantage of being tailor-made to suit the user's needs exactly, but it tends to be time-consuming and expensive. Ready-made software is quickly available and relatively inexpensive, but the behavior of an off-the-shelf software package may not match the IS user's needs or desires. Such mismatches range from crippling omissions that render a package unworkable for a particular user to situations where the behavior is adequate in light of cost and time savings. A middle ground between custom-made and ready-made software is packaged software that can be configured to behave in a certain way. In principle, this is packaged software that yields a variety of local ISs. The user selects the particular IS behavior that most closely suits his or her needs and, through an interactive process, configures the packaged software to exhibit this behavior. This approach may be somewhat more expensive and time-consuming than strictly ready-made packages, but it also permits some tailoring without the degree of effort required in the custom-made approach.
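The salesperson's lead-tracking example can be made concrete with a minimal sketch of such record keeping and reporting. The record fields and the reporting function below are illustrative assumptions, not part of any particular IS product:

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the salesperson's lead-tracking IS described
// above: records are kept in memory, and a "report" is a filtered list.
struct CustomerRecord {
    std::string name;
    std::string contact;        // who to reach and how
    std::string characteristic; // e.g., a need a new offering addresses
};

// Report: all customers having a particular characteristic.
std::vector<CustomerRecord> report_by_characteristic(
        const std::vector<CustomerRecord>& records,
        const std::string& characteristic) {
    std::vector<CustomerRecord> result;
    for (const auto& r : records)
        if (r.characteristic == characteristic)
            result.push_back(r);
    return result;
}
```

A real local IS would, of course, persist the records (e.g., in a file or desktop database) rather than hold them in memory, but the record-plus-report shape is the same.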
1.2.2.
Functional Information Systems
A functional IS is one that performs record keeping and reporting related to some function of an organization, such as production, finance, accounting, marketing, sales, purchasing, logistics, personnel, or research. Such systems include those that give reports on production status, inventory levels, financial positions, accounts (e.g., payable, receivable), sales levels, orders, shipments, fringe benefits, and so forth. These ISs are much broader in scope than local ISs and may be thought of as being more for departmental use than individual use. A functional IS tends to be larger and more complex in terms of the records it possesses and processes. Often there are many individuals who use a given functional IS. Functional ISs are typically administered by IS professionals rather than individual users. It would be even more unusual for an individual user to build his or her own functional IS, as the size and complexity of these systems usually calls for formal system development methodologies. The options for building a functional IS are using customized development, purchasing ready-made software, and
transorganizational ISs, which take two main forms: those that involve accepting
and reporting information from and to other businesses (B2B electronic commerce
) and those that involve accepting and reporting information from and to consume
rs (B2C electronic commerce). As companions to transorganizational ISs, many exa
mples of Web-oriented decision support systems exist (Holsapple et al. 2000).
1.3.
Overview of Information System Development and Tools
Given an application domain and a class of potential users, we are confronted with the problem of how to create a useful information system. The act of creation spans such activities as analysis, design, and implementation. System analysis is concerned with determining what the potential users want the system to do. System design involves transforming analysis results into a plan for achieving those results. Implementation consists of carrying out the plan, transforming the design into a working information system. IS implementation may involve software tools such as programming-language compilers and database management software, or it may involve configuring off-the-shelf software packages. The activity of design can be strongly influenced by what tools are to be used for implementation. Both the analysis and design activities can themselves be supported by tools such as dataflow diagrams, data dictionaries, HIPO (hierarchical input, process, output) charts, structure charts, and tools for computer-assisted software engineering (CASE).

In the early days of IS development, the principal software tool used by developers was a programming language with its attendant compiler or interpreter. This tool, together with text-editing software, was used to specify all aspects of an information system's behavior in terms of programs. When executed, these programs governed the overall flow of the information system's actions. They accomplished information storage and processing tasks. They also accomplished user interaction tasks, including the interpretation of user requests and production of reports for users. Section 2 provides an overview of programming, accompanied by highlights of two languages widely used for IS implementation: Visual Basic and C++.

Although programming is a valuable way for developers to specify the flow of control (i.e., what should happen and when) in an information system's behavior, its use for specifying the system's data representation and processing behaviors has steadily diminished in concert with the proliferation of database management tools. Today, database management systems are a cornerstone of information system development. Most popular among the various approaches to database management is the relational, which is described in Section 3. In addition to packaging these as separate tools from conventional (so-called third-generation) languages such as COBOL, C, and FORTRAN, various efforts have been made to integrate database management and programming facilities into a single facility. The resultant tools are examples of fourth-generation languages.

Programming and database management tools can be used in implementing ISs in any of the four classes discussed in Section 1.2. There is great variation in the off-the-shelf packages available for IS implementation, both within and across the IS classes. Section 2.5 considers prominent Web-based tools used for implementing transorganizational information systems. Section 3 provides a brief overview of database management, which forms the foundation for most information systems today. Section 4 focuses on tools for the class of enterprise ISs. Finally, Section 5 considers ancillary tools that can be valuable in the activity of developing information systems. These are examined in the context of the system development life cycle of analysis, design, implementation, and maintenance. We will focus on tools often used by IS professionals during the analysis and design phases.
2.
PROGRAMMING LANGUAGES
2.1.
Overview
This section describes how programming languages may be used to build information systems. First, a brief historical review of programming languages helps explain how programming tools have become more powerful and easier to use over the years. We characterize today's modern programming languages, such as C++ and Visual Basic, by considering the advantages and disadvantages of each. Then we review some basic programming concepts by showing some typical examples from these languages. Finally, we consider how program development is affected when the World Wide Web is targeted as a platform. This includes a comparison of Internet programming tools, such as HTML, Java, and CGI scripting.
2.2.
Historical Review
In the earliest days of program development, programmers worked directly with the computer's own machine language, using a sequence of binary digits. This process was tedious and error-prone, so programmers quickly started to develop tools to assist in making programming easier for the humans involved. The first innovation allowed programmers to use mnemonic codes instead of actual binary
when one considers the well-known Year 2000 problem encountered in many programs written with structured languages, such as COBOL. In many of these information systems, literally thousands of functions passed dates around as data, with only two digits reserved for storing the year. When the structure of the data had to be changed to use four digits for the year, each of these functions had to be changed as well. Of course, each function had to be identified as having a dependency on the date. In an OOP language, a single class would exist where the structure of the data representing a date is stored, along with the only functions that may manipulate that data directly. Encapsulation, then, makes it easier to isolate the changes required when the structure of data must be modified. It strengthens information hiding by making it difficult to create a data dependency within a function outside a class.

OOP also makes code reuse easier than structured programming allows. In a structured environment, if you need to perform a task that is similar but not identical to an existing function, you must create a new function from scratch. You might copy the code from the original function and use it as a foundation, but the functions will generally be independent. This approach is error-prone and makes long-term program maintenance difficult. In OOP, one may use a principle called inheritance
to simplify this process. With this approach, an existing class, called the base class or superclass, is used to create a new class, called the derived class or subclass. This new derived class is like its parent base class in all respects except what the programmer chooses to make different. Only the differences are programmed in the new class. Those aspects that remain the same need not be developed from scratch or even through copying and pasting the parent's code. This saves time and reduces errors.

The first OOP languages, such as Smalltalk and Eiffel, were rarely used in real-world projects, however, and were relegated to university research settings. As is often the case with easing system development and maintenance, program execution speed suffered. With the advent of C++, an OOP hybrid language, however, performance improved dramatically and OOP took off. Soon thereafter, new development tools appeared that simplified development further. Rapid Application Development (RAD) tools speed development further by incorporating a more visual development environment for creating graphical user interface (GUI) programs. These tools rely on wizards and code generators to create frameworks based on a programmer's screen layout, which may be easily modified. Examples include Borland's Delphi (a visual OOP language based on Pascal), Borland's C++Builder (a visual C++ language), and Microsoft's Visual Basic.
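The encapsulation argument above can be sketched in C++. The Date class below is a hypothetical illustration, not code from the chapter: because the year's representation is private to one class, widening it from two digits to four touches only this class, not the thousands of functions that pass dates around:

```cpp
// Hypothetical sketch of the Year 2000 discussion above: the date's
// storage format is hidden behind accessors, so a representation change
// (two-digit to four-digit years) is confined to this one class.
class Date {
public:
    Date(int year, int month, int day)
        : year_(year), month_(month), day_(day) {}
    int year() const { return year_; }   // callers never see the storage
    int month() const { return month_; }
    int day() const { return day_; }
private:
    int year_;   // was two digits pre-Y2K; widening it changes only Date
    int month_;
    int day_;
};
```

Any function holding a Date depends only on the public accessors, so the representation can change without a ripple through the rest of the system.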
2.3.
C++
The C++ language was originally developed by Bjarne Stroustrup (Stroustrup 1994) but is now controlled and standardized by the American National Standards Institute (ANSI). It is an extension (literally, an increment) of the C programming language. C is well known for the speed of its compiled executable code, and Stroustrup strove to make C++ into a similarly efficient object-oriented language (see Stroustrup 1994 for a description of the development of the language). Technically, C++ is a hybrid language in that it supports both structured (from C) and object-oriented development. In order to achieve the speed that earlier OOP languages could not, C++ makes some sacrifices to the purity of the OO model. The tradeoffs required, however, helped make OOP popular with real-world developers and sparked a revolution in modern program development.

Like C before it, C++ is a language with a rich abundance of operators and a large standard library of code for a programmer's use. In its current form, C++ includes the Standard Template Library, which offers most of the important data structures and algorithms required for program development, including stacks, queues, vectors, lists, sets, sorts, and searches (Stroustrup 1997). Programs written using C++'s standard components are easily ported from one platform to another. The language lacks, however, a standard library of graphics routines for creating a graphical user interface program under different operating systems. Instead, each compiler vendor tends to offer its own classes and functions for interacting with a specific operating system's windowing routines. For example, Microsoft offers the Microsoft Foundation Classes (MFC) for developing Windows applications with its compiler, while Borland offers its Object Windows Library (OWL) for the same purpose. This makes it difficult to port GUI programs from one vendor's compiler on one operating system to another vendor's compiler on another operating system.

A simple example (Main and Savitch 1997) is shown below that declares a basic Clock class. It is an abstraction of a real-world clock used for telling time.
class Clock
{
public:
    Clock();
    void set_time(int hour, int minute, bool morning);
    void advance(int minutes);
    int get_hour() const;
    int get_minute() const;
    bool is_morning() const;
private:
    int hour_24;    // Stores the current hour
    int minute_24;  // Stores the current minute
};
This class is typical of the structure of most classes in C++. It is divided into two sections, public and private. In keeping with information hiding, the data are normally kept private so that others outside of the class may not examine or modify the values stored. The private data are exclusively manipulated by the class functions, normally referred to as methods in OOP. The functions are declared in the public section of the class, so that others may use them. For example, one could ask a clock object its minute by using its get_minute() function. This might be written as in the following code fragment:
Clock c; // Creates an actual object, c
cout << "The minute is " << c.get_minute() << endl;
Of course, this simple class provides a method for modifying the time stored. The public method set_time() is used for this purpose. By forcing the use of this method, the class designer can ensure that the data are manipulated in accordance with appropriate rules. For example, the set_time() method would not allow the storage of an illegal time value. This could not be guaranteed if the data were public and directly available for modification. The implementation of this method is shown below:
void Clock::set_time(int hour, int minute, bool morning)
{
    assert((hour >= 1) && (hour <= 12));
    assert((minute >= 0) && (minute <= 59));
    minute_24 = minute;
    if ((morning) && (hour == 12))
        hour_24 = 0;
    else if ((!morning) && (hour < 12))
        hour_24 = hour + 12;
    else
        hour_24 = hour;
}
Note how the values passed to the method are tested via an assert() statement be
fore they are used. Only if the assertion proves true are the values used; other
wise a runtime error message is produced. To show an example of inheritance, con
sider extending the Clock class to create a new class, CuckooClock. The only dif
ference is that this type of clock has a bird that chirps on the hour. In all ot
her respects it is identical to a regular clock.
class CuckooClock : public Clock
{
public:
    bool is_cuckooing() const;
};

bool CuckooClock::is_cuckooing() const
{
    return (get_minute() == 0);
}
Note that we declare the new CuckooClock class in terms of the existing Clock class. The only methods we need to describe and implement are those that are new to or different from the base class. Here there is only one new function, called is_cuckooing(), which returns true on the hour. It is important to understand that a CuckooClock is a Clock. In important respects, an instance of CuckooClock can do anything that a Clock object can do. In fact, the compiler will allow us to send a CuckooClock object anywhere a Clock object is expected. For example, we might have a function written to compare two Clock objects to see if one is equal to the other.
bool operator==(const Clock& c1, const Clock& c2) {
   return ((c1.get_hour()   == c2.get_hour())   &&
           (c1.get_minute() == c2.get_minute()) &&
           (c1.is_morning() == c2.is_morning()));
}
We may confidently send a CuckooClock to this function even though it is written explicitly to expect Clock objects. Because of inheritance, we do not need to write a different version of the function for each class derived from Clock: a CuckooClock is a Clock. This simplifies things considerably for the programmer. Another interesting note about this code segment is that C++ allows us to provide definitions for most of its built-in operators in the context of a class. This is referred to as operator overloading, and C++ is rare among programming languages in allowing this kind of access to operators. Here we define a meaning for the equality operator (==) for Clock objects. In summary, C++ is a powerful and rich object-oriented programming language. Although widely recognized as difficult to work with, it offers efficient execution times to programmers who can master its ways. If you want portable code, you must stick to creating console applications. If this is not a concern, modern compilers assist in creating GUI applications through wizards, such as in Microsoft's Visual C++. This is still more difficult to accomplish using C++ than with a RAD tool such as Visual Basic, which will be discussed next.
2.4.
Visual Basic
Visual Basic (VB) is a programming language and development tool from Microsoft designed primarily for rapidly creating graphical user interface applications for the Microsoft Windows operating system. First introduced in 1991, the language is an extension of BASIC, the Beginner's All-Purpose Symbolic Instruction Code (Eliason and Malarkey 1999). BASIC has been in use since John Kemeny and Thomas Kurtz introduced it in 1965 (Brookshear 1999). Over the years, Visual Basic has evolved more and more toward an object-oriented programming model. In its current version, 6.0, VB provides programmers with the ability to create their own classes using encapsulation, but it does not yet support inheritance. It simplifies the creation of Windows applications by making the enormously complex Windows Application Program Interface (consisting of over 800 functions) available through easy-to-use objects such as forms, labels, command buttons, and menus (Eliason and Malarkey 1999). As its name suggests, Visual Basic is a highly visual tool. This implies not only that the development environment is GUI-based but also that the tool allows you to design a program's user interface by placing components directly onto windows and forms. This significantly reduces development time. Figure 1 provides a snapshot of the Visual Basic development environment.

Another feature that makes Visual Basic a Rapid Application Development tool is that it supports both interpreted and compiled execution. When VB is used in interpreted mode, the tool allows the programmer to quickly see the effects of code changes without a lengthy compilation directly to machine code. Of course, run-time performance is better when the code is actually compiled, and this can be done before distributing the application.

In Visual Basic, objects encapsulate properties, methods, and events. Properties are an object's data, such as a label's caption or a form's background color. Methods are an object's functions, as in C++ and other OOP languages. Events are typically user-initiated actions, such as clicking a form's button with the mouse or making a selection from a menu, that require a response from the object. Figure 1 shows some of the properties of the highlighted command button object, Command1. Here the button's text is set to "Press Me" via the Caption property. In the code window for the form, the Click event's code is displayed for this command button. This is where you would add the code to respond when the user clicks on the form's button. The tool only provides the outline for the code, as seen here. The programmer must add actual VB statements to perform the required action. Visual Basic does provide many wizards, however, to assist in creating different kinds of applications, including forms that are connected back to a database. These wizards can significantly reduce development time and improve reliability.

The fundamental objects that are important to understand in Visual Basic development are forms and controls. A form serves as a container for controls. It is the canvas upon which the programmer paints the application's user interface.

Figure 1  The Visual Basic Development Environment.

This is accomplished by placing controls onto the form. Controls are primarily GUI components such as command buttons, labels, text boxes, and images. In Figure 1, the standard toolbox of controls is docked on the far left of the screen. These controls are what the user interacts with when using the application. Each control has appropriate properties and events associated with it that affect its appearance and behavior. For example, consider the form shown in Figure 2. This form shows the ease with which you can create a database-driven application. There are three controls on this form. At the bottom of the form is a data control. Its properties allow the programmer to specify a database and table to connect to. In this example, it is attached to a simple customer database. At the top are two textbox controls, used for displaying and editing textual information. In this case, these controls are bound to the data control, indicating that their text will be coming from the associated database table. The data control allows the user to step through each record in the table by using the left and right arrows. Each time the user advances to another record from the table, the bound controls update their text to display that customer's name and address. In the code fragment below, you can see some of the important properties of these controls.
Begin VB.Data Data1
   Caption           = "Customer"
   Connect           = "Access"
   DatabaseName      = "C:\VB Exampl
   DefaultCursorType = 0   'DefaultCursor
   DefaultType       = 2   'UseODBC
   Exclusive         = 0   'False
   ReadOnly          = 0   'False
   RecordsetType     = 1   'Dynaset
   RecordSource      = "Customer"
End
Begin VB.TextBox Text1
   DataField         = "customer name"
   DataSource        = "Data1"
End
Begin VB.TextBox Text2
   DataField         = "street address"
   DataSource        = "Data1"
End
In the data control object, Data1, there are several properties to consider. The Connect property specifies the type of database being attached, here an Access database. The DatabaseName property specifies the name and location of the database. RecordSource specifies the table or stored query that will be used by the data control. In this example, we are connecting to the Customer table of the example database. For the bound text controls, the programmer need only specify the name of the data control and the field to be used for the text to be displayed. The properties that need to be set for this are DataSource and DataField, respectively.
Figure 2 Database Example.
As these examples demonstrate, Visual Basic makes it relatively easy to create complex GUI applications that run exclusively under the Windows operating system. When these programs are compiled into machine code, the performance of these applications is quite acceptable, although not yet as fast as that of typical C++ programs. The object-oriented model makes development easier, especially since most of the properties can be set visually, with the tool itself writing the necessary code. Unfortunately, VB still does not support inheritance, which limits a programmer's ability to reuse code. Recent additions to the language, however, make it easier to target the Internet as an application platform. By using ActiveX controls, which run under Windows and within Microsoft's Internet Explorer (IE) web browser, and VBScript (a subset of VB that runs in IE), you can create applications that may be ported to the World Wide Web. Your choice of controls is somewhat limited and the user must have Microsoft's own web browser, but in a corporate Intranet environment (where the use of IE can be ensured) this might be feasible. For other situations a more flexible solution is required. The next section will explore some of the tools for achieving this.
2.5.
Web-Based Programming
Developing applications where the user interface appears in a web browser is an important new skill for programmers. The tools that enable a programmer to accomplish this type of development include HTML, Java, CGI, Perl, ColdFusion, ASP, and more: a veritable cornucopia of new acronyms. This section explains these terms and shows how these new tools may be used to create web pages that behave more like traditional software applications.
2.5.1.
HTML
The HyperText Markup Language (HTML) is the current language of the World Wide Web. HTML is a markup language, not a programming language. Very simply, a document is "marked up" to define its appearance. This involves placing markup tags (or commands) around text, pictures, sound, and video in a document. The general syntax for HTML tags is:

<tagname [options]>your text here</tagname>

These tags indicate how to display portions of the document. Opening tags, along with available options, are enclosed in < and >, and closing tags include the / before the tagname. In some instances both opening and closing tags are required to indicate what parts of the document the tags should affect. In other instances no closing tag is required. The contents of the file (including HTML tags and web page contents) that generates the simple web page shown in Figure 3 are as follows:

<html>
<head>
<title>Tools for Building Information Systems</title>
</head>
<body><center>
<h1>Tools for Building Information Systems</h1>
by<p>
Robert L. Barker<br>
Brian L. Dos Santos<br>
Clyde H.
Figure 3  Web Page and HTML Tags.

2.5.2.
CGI
The Common Gateway Interface (CGI) provides a standard way for web pages to communicate with programs running on a Web server. A typical CGI transaction begins when a user submits a form that he or she has filled out on a web page. The user may be searching for a book by a certain author at an online bookstore and have just entered the author's name on a search page. The HTML used for displaying the form also specifies where to send the data when the user clicks on the submit button. In this case, the data are sent to a CGI program for processing. Now the bookstore's CGI program searches its database of authors and creates a list of books. In order to return this information to the user, the CGI program must create a response in HTML. Once the HTML is generated, control may once again be returned to the web server to deliver the dynamically created web page. CGI programs may be written in nearly any programming language supported by the operating system but are often written in specialized scripting languages like Perl.
CGI provides a flexible but inefficient mechanism for creating dynamic responses to Web queries. The major problem is that the web server must start a new program for each request, and that incurs significant overhead processing costs. Several more efficient alternatives are available, such as writing programs directly to the Web server's application program interface. These server APIs, such as Microsoft's ISAPI and Netscape's NSAPI, allow more efficient transactions by reducing overhead through the use of dynamic link libraries, integrated databases, and so on. Writing programs with the server APIs is not always an easy task, however. Other alternatives include using products like Allaire's ColdFusion or Microsoft's Active Server Pages (ASP).
2.5.3.
Java
A malicious Java applet could be created and placed on a web page for unsuspecting users. Luckily, the Java community has created a robust and ever-improving security model to prevent these situations. For example, web browsers generally restrict applets from accessing memory locations outside their own programs and prevent writing to the user's hard drive. For trusted applets, such as those being run from a company's Intranet, the security restrictions can be relaxed by the end user to allow more powerful functionality to be included.
2.5.4.
ColdFusion
ColdFusion is a tool for easily creating web pages that connect to a database. Developed by Allaire (http://www.allaire.com), ColdFusion is being used in thousands of Web applications by leading companies, including Reebok, DHL, Casio, and Siemens. The ColdFusion Markup Language (CFML) allows a user to create Web applications that are interactive and interface with databases. The syntax of CFML is similar to HTML, which is what makes it relatively easy to use; it makes ColdFusion appealing to Web designers who do not have a background in programming. ColdFusion encapsulates in a single tag what might take ten or a hundred lines of code in a CGI or ASP program. This allows for more rapid development of applications than competing methods provide. However, if a tag does
and diffused data ownership (McLeod 1998). In early data processing systems, the data files were often created with little thought as to how those files affected other systems. Perhaps much, or even all, of the data in a new file was already contained in an existing file (McLeod 1998). This resulted in significant data redundancy and several associated problems. Obviously, data duplication results in wasted space, but worse than this are synchronization problems. Multiple copies of data were often updated on different frequencies. One file would be updated daily, another weekly, and a third monthly. Reports based on one file could conflict with reports based on another, with the user being unaware of the differences (McLeod 1998). These early systems also exhibited a high degree of dependence between the data specifications and the programs written to manipulate the data. Every time the representation of the data or the file storage format changed, all of the programs that depended on that data would also need to be changed. Other problems resulted from a lack of agreement on data ownership and standardization. This situation of the 1950s and early 1960s may have been the reason why many of the first management information systems failed in the late 1960s. Information specialists appeared unable to deliver systems that provided consistent, accurate, and reliable information (McLeod 1998).

Database systems sought to address these issues by separating the logical organization of the data from the physical organization. The logical organization is how the user of the system might view the data. The physical organization, however, is how the computer views the data. A database, then, may be defined as an integrated collection of computer data, organized and stored in a manner that facilitates easy retrieval by the user (McLeod 1998). It seeks to minimize data redundancy and data dependence. A database is often considered to consist of a hierarchy of data: files, records, and fields. A field is the smallest organized element of data, consisting of a group of characters that has a specific meaning. For example, a person's last name and telephone number would be common fields. Related fields are collected and stored in a structure referred to as a record. A specific person's record might consist of his or her name, address, telephone number, and so on (all values associated with that individual). Related records are grouped into a file, such as a file of employees or customers. Keep in mind that the physical representation of the database will be shielded from the user and should not limit the user's ability to retrieve information from the system.

A database management system (DBMS) is a collection of programs that manages the database structure and controls access to the data stored in the database. The DBMS makes it possible to share the data in the database among multiple applications or users (Rob and Coronel 1997). In a sense, it sits between the user and the data, translating the user's requests into the necessary program code needed to accomplish the desired tasks. General Electric's Integrated Data Store (IDS) system, introduced in 1964, is considered by most to be the first generalized DBMS (Elmasri and Navathe 2000).

Database systems are often characterized by their method of data organization. Several popular models have been proposed over the years. Conceptual models focus on the logical nature of the data organization, while implementation models focus on how the data are represented in the database itself. Popular conceptual models include the entity-relationship (E-R) model and the object-oriented model discussed below. These models typically describe relationships or associations among data as one-to-one, one-to-many, or many-to-many. For example, in most organizations an employee is associated with one department only, but a department may have several employees. This is a one-to-many relationship between department and employee (one department has many employees). But the relationship between employee and skills would be many-to-many, as a single employee might possess several skills and a single skill might be possessed by several employees (many employees have many skills). Implementation models include the hierarchical model, network model, and relational model. Each model has advantages and disadvantages that made it popular in its day. The relational model is easily understood and has gained widespread acceptance and use. It is currently the dominant model in use by today's database products.
Employee

EmployeeID   EmpName         EmpPhone     DepartmentID
1234         Jane Smith      5025551234   11
2111         Robert Adams    8125551212   20
2525         Karen Johnson   6065553333   11
3222         Alan Shreve     5025558521   32
3536         Mary Roberts    8125554131   20

Department

DepartmentID   DeptName
11             Marketing
20             Engineering
32             Accounting
Pay special attention to how relationships between tables are established by sharing a common field value. In this example, there is a one-to-many relationship between the Employee and Department tables, established by the common DepartmentID field. Here, Robert Adams and Mary Roberts are in the Engineering department (DepartmentID 20). The tables represented here are independent of one another yet easily connected in this manner. It is also important to recognize that the table structure is only a logical structure. How the relational DBMS chooses to represent the data physically is of little concern to the user or database designer. They are shielded from these details by the DBMS.
3.1.2.
Processing
One of the main reasons for the relational model's wide acceptance and use is its powerful and flexible ad hoc query capability. Most relational DBMS products on the market today use the same query language, Structured Query Language (SQL). SQL is considered a fourth-generation language (4GL) that allows the user to specify what must be done without specifying how it must be done (Rob and Coronel 1997). The relational DBMS then translates the SQL request into whatever program actions are needed to fulfill the request. This means far less programming for the user than with other database models or earlier file systems. SQL statements exist for defining the structure of the database as well as manipulating its data. You may create and drop tables from the database, although many people use tools provided by the DBMS software itself for these tasks. SQL may also be used to insert, update, and delete data from existing tables. The most common task performed by users, however, is data retrieval. For this, the SELECT statement is used. The typical syntax for SELECT is as follows:
SELECT field(s) FROM table(s) [WHERE conditions];
For example, to produce a list of employee names and telephone numbers from the
Engineering department using the tables introduced earlier, you would write:
SELECT EmpName, EmpPhone FROM Employee WHERE DepartmentID = 20;
Of course, you may use the typical complement of logical operators, such as AND, OR, and NOT, to control the information returned by the query. The SELECT statement may also be used to join data from different tables based on their common field values. For example, in order to produce a list of employee names and their departments, you would write:
SELECT Employee.EmpName, Department.DeptName
FROM Employee, Department
WHERE Employee.DepartmentID = Department.DepartmentID;
In this example, the rows returned by the query are established by matching the
common values of DepartmentID stored in both the Employee and Department tables.
These joins may be performed across several tables at a time, creating meaningf
ul reports without requiring excessive data duplication.
It is also possible to sort the results returned by a query by adding the ORDER
BY clause to the SELECT statement. For example, to create an alphabetical listin
g of department names and IDs, you would write:
SELECT DeptName, DepartmentID FROM Department ORDER BY DeptName;
These examples demonstrate the ease with which SQL may be used to perform meanin
gful ad hoc queries of a relational database. Of course, SQL supports many more
operators and statements than those shown here, but these simple examples give a
hint of what is possible with the language.
3.2.
Object-Oriented Databases
The evolution of database models (from hierarchical to network to relational) has been driven by the need to represent and manipulate increasingly complex real-world data (Rob and Coronel 1997). Today's systems must deal with more complex applications that interact with multimedia data. The relational approach now faces a challenge from new systems based on an object-oriented data model (OODM). Just as OO concepts have influenced programming languages, so too are they gaining in popularity with database researchers and vendors. There is not yet a uniformly accepted definition of what an OODM should consist of. Different systems support different aspects of object orientation. This section will explore some of the similar characteristics shared by most of these new systems. Rob and Coronel (1997) suggest that, at the very least, an OODM:
1. Must support the representation of complex objects
2. Must be extensible, or capable of defining new data types and operations
3. Must support encapsulation (as described in Section 2)
4. Must support inheritance (as described in Section 2)
5. Must support the concept of object identity, which is similar to the notion of a primary key from the relational model that uniquely identifies an object. This object identity is used to relate objects and thus does not need to use JOINs for this purpose.
Consider an example involving data about a Person (from Rob and Coronel 1997). In a typical relational system, Person might become a table with fields for storing name, address, date of birth, and so on as strings. In an OODM, Person would be a class, describing how all instances of the class (Person objects) will be represented. Here, additional classes might be used to represent name, address, date of birth, and so on. The Address class might store city, state, street, and so on as separate attributes. Age might be an attribute of a Person object whose value is controlled by a method, or function of the class. This inclusion of methods makes objects an active component in the system. In traditional systems, data are considered passive components waiting to be acted upon. In an OODM, we can use inheritance to create specialized classes based on the common characteristics of an existing class, such as the Person class. For example, we could create a new class called Employee by declaring Person as its superclass. An Employee object might have additional attributes of salary and social security number. Because of inheritance, an Employee is a Person and, hence, would also gain the attributes that all Person objects share: address, date of birth, name, and so on. Employee could be specialized even further to create categories of employees, such as manager, secretary, cashier, and the like, each with its own additional data and methods for manipulating that data. Rob and Coronel (1997) summarize the
advantages and disadvantages of object-oriented database systems well. On the positive side, they state that OO systems allow the inclusion of more semantic information than a traditional database, providing a more natural representation of real-world objects. OO systems are better at representing complex data, such as that required in multimedia systems, making them especially popular in CAD / CAM applications.

Data mining is the process of making a discovery from large amounts of detailed data (Barry 1995) by drilling down through detailed data in the warehouse. Data mining tools sift through large volumes of data to find patterns or similarities in the data. Data mining and online analytical processing tools translate warehouse contents into business intelligence by means of a variety of statistical analyses and data visualization methods (Brown 1995; Fogarty 1994). Table 1 provides a list of some of the common terms used within the data warehouse community. A number of commercial products attempt to fulfill warehousing needs, including the SAS Institute's SAS / Warehouse Administrator, IBM's Visual Warehouse, Cognos's PowerPlay, Red Brick Systems' Red Brick Warehouse, and Information Builders' FOCUS. A discussion of these tools is beyond the scope of this chapter.
4.
ENTERPRISE TOOLS
Since the early days of business computing, system designers have envisioned information system architectures that would allow the seamless sharing of information throughout the corporation. Each major advance in information technology spawned a new generation of applications, such as CIM or MRP II, that claimed to integrate large parts of the typical manufacturing enterprise. With the recent
TABLE 1  Common Terms Used by the Data Warehousing Community

Aggregates: Precalculated and prestored summaries that are stored in the data warehouse to improve query performance. For example, for every VAR there might be a prestored summary detailing the total number of licenses purchased per VAR, each month.

Business Intelligence Tools: Those client products, which typically reside on a PC, that are the decision support systems (DSS) to the warehouse. These products provide the user with a method of looking at and manipulating the data.

Data Extraction, Acquisition: The process of copying data from a legacy or production system in order to load it into a warehouse.

Data Mart: Separate, smaller warehouses, typically defined along an organization's departmental needs. This selectivity of information results in greater query performance and manageability of data. A collection of data marts (functional warehouses) for each of the organization's business functions can be considered an enterprise warehousing solution.

Data Mining: A collection of powerful analysis techniques for making sense out of very large datasets.

Data Modeling: The process of changing the format of production data to make it usable for heuristic business reporting. It also serves as a roadmap for the integration of data sources into the data warehouse.

Data Staging: The data staging area is the data warehouse workbench. It is the place where raw data are brought in, cleaned, combined, archived, and eventually exported to one or more data marts.

Data Transformation: Performed when data are extracted from the operational systems, including integrating dissimilar data types and processing calculations.

Data Warehouse: Architecture for putting data within reach of business intelligence systems. These are data from a production system that now reside on a different machine, to be used strictly for business analysis and querying, allowing the production machine to handle mostly data input.

Database Gateway: Used to extract or pass data between dissimilar databases or systems. This middleware component is the front-end component prior to the transformation tools.

Drill Down: The process of navigating from a top-level view of overall sales down through the sales territories, down to the individual salesperson level. This is a more intuitive way to obtain information at the detail level.

DSS (Decision Support Systems): Business intelligence tools that utilize data to form the systems that support the business decision-making process of the organization.

EIS (Executive Information Systems): Business intelligence tools that are aimed at less sophisticated users, who want to look at complex information without the need to have complete manipulative control of the information presented.

Metadata: Data that define or describe the warehouse data. These are separate from the actual warehouse data and are used to maintain, manage, and support the actual warehouse data.

OLAP (Online Analytical Processing): Describes the systems used not for application delivery but for analyzing the business, e.g., sales forecasting, market trends analysis, etc. These systems are also more conducive to heuristic reporting and often involve multidimensional data analysis capabilities.

OLTP (Online Transactional Processing): Describes the activities and systems associated with a company's day-to-day operational processing and data (order entry, invoicing, general ledger, etc.).

Query Tools and Queries: Queries can either browse the contents of a single table or, using the database's SQL engine, perform join conditioned queries that produce result sets involving data from multiple tables that meet specific selection criteria.
a central database. MRP II systems extended the integrated manufacturing concept by linking the production schedule to other organizational systems in finance, marketing, and human resources. For instance, changes in customer demand would have a ripple effect on cash flow planning and personnel planning, in addition to purchasing. The goals again were to cut costs of production, reduce inventory, improve customer service, and better integrate production decisions with finance and marketing functions.
4.2.1.
The Appearance of Enterprise Resource Planning
While MRP II remained focused on shortening the ordering and production cycles in a manufacturing environment, several advances in technology in the late 1980s led to the development of ERP tools and applications. Beyond the manufacturing processes, ERP encompasses business planning, sales planning, forecasting, production planning, master scheduling, material planning, shop-floor control, purchasing and financial planning, routing and bill-of-material management, inventory control, and other functions. As with MRP II, a centralized relational database is key to the success of an ERP application. Other new technologies that enabled the spread of ERP tools include the rapidly maturing client/server distributed architecture and object-oriented programming (OOP) development practices. Both of these factors make an ERP system more scalable, both in terms of the hardware and in the modularity of the software. This scalability in turn lends itself to continuous expansion of functionality, as can be seen in the rise of new ERP modules that extend to supply chain and customer relationship concepts, generating tighter linkages with both customers' and suppliers' own planning systems. As the scope of these integrated applications has grown, so has their complexity. Horror stories about runaway ERP implementations appear frequently in the popular press. Two examples occurred in 1999 with the much-publicized problems of Hershey's ERP implementation and a similar set of problems
at Whirlpool. After more than $100 million had been spent on Hershey's ERP implementation, Hershey's inventory system did not work, and the company reportedly lost more than $100 million during the busy Halloween season (Stedman 1999b). ERP implementations for large multinational firms can easily take several years and cost several hundred million dollars. Corning Inc. reported in 1998 that it expected its ERP project to take between five and eight years to roll out to all 10 of its manufacturing divisions (Deutsch 1998). Common complaints about ERP products are that they are inflexible and that if anything special is desired, expensive consultants must be hired to design tack-on modules that often must be programmed in some obscure proprietary language. This one-size-fits-all mentality makes it difficult for smaller companies to justify the expense of an ERP system. It may be that ERP will not fit with certain types of organizations that require extensive flexibility. Experience has shown that ERP systems seem to work best in organizations with a strong top-down structure. Also, the very complexity of an ERP implementation frightens off many potential adopters because it may involve shifting to a client/server environment combined with an often painful process of reengineering. With all the problems and risks associated with ERP, why would anyone even want to consider an ERP system? Despite these problems, from 1995 to 1999 there was tremendous growth in corporate implementation of ERP systems. Over 70% of the Fortune 500 firms (including computer giants Microsoft, IBM, and Apple) have implemented an ERP system. Though market growth slowed a bit in 1999 due to corporate focus on the Y2K problem, during this same five-year period ERP vendors averaged about 35% annual growth
(Curran and Ladd 2000). Globally, the ERP market is projected to grow at a compound annual rate of 37% over the next few years, reaching $52 billion by 2002. The reality is that if a company today wants an enterprise-wide IS, there are few alternatives to ERP tools. Though the risks are very high, the potential benefits include Y2K and Euro compliance, standardization of systems and business processes, and improved ability to analyze data (due to improved data integration and consistency). A vice president at AlliedSignal Turbocharging Systems said the company was replacing 110 old applications with an ERP system and that this would help the $1 billion unit do a much better job of filling orders and meeting delivery commitments (Davenport 1998). Prior to the year 2000, the most common expected benefits of implementing an ERP solution cited in interviews of information technology executives included:

- Solved potential Y2K and Euro problems
- Helped standardize their systems across the enterprise
- Improved business processes
- Improved efficiency and lowered costs
- Needed to support growth and market demands
- Global system capabilities
- Ability to integrate data (Bancroft et al. 1998)
ow users to access their enterprise ISs from a standard web browser. Many venture capitalists reportedly refuse to fund an Internet start-up unless Oracle is a part of the business plan (Stroud 1999). PeopleSoft Inc. made its niche in the ERP market by being the company that disaffected SAP users could turn to as an alternative. When it was founded in 1987, it was a vendor of software for the human resource (HR) function. It has since focused on this application in the ERP market, while SAP has focused on inventory management and logistics processes. The quality of PeopleSoft's HR modules was such that some companies would adopt SAP for part of their processes and PeopleSoft specifically for their HR processes. Because it was more flexible in its implementations, PeopleSoft won over customers who just wanted a partial ERP installation that they could integrate more easily with their existing systems. PeopleSoft was perceived as being more flexible and cost-effective because it developed its whole ERP software suite by collaborating with firms like Hyperion, Inc. for data warehousing, data mining, and workflow management applications. Once PeopleSoft got into a company with its HR applications, it then began offering financial and other ERP software modules. This led to rapid growth: 1992 revenues of $33 million grew to $1.4 billion in 1998 (Kirkpatrick 1998). Along with the rest of the ERP software industry, PeopleSoft has been scrambling to extend its ERP offerings to the Internet. In 1999, it began development of eStore and eProcurement, and it acquired CRM vendor Vantive (Davis 1999).
Baan Software, a Dutch firm, was an early player in the ERP software market, shipping its first product in 1982. It was the first ERP vendor to focus on an open UNIX computing environment, in 1987. Its early strengths were in the areas of procurement and supply chain management. Through internal development and acquisitions, Baan offers one of the most complete suites of ERP tools, including accounting, financial, and HR applications in one comprehensive package or separately for functional IS development. Shifting toward an Internet focus, Baan has recently acquired a number of tool development firms and entered into an alliance with Sun for its Java-based software. It is also moving aggressively to integrate XML-based middleware into its next release of BaanERP (Stedman 1999a). J.D. Edwards began in 1977 as a CASE tool vendor but developed its own suite of ERP applications, primarily for the AS/400 platform. Its modules now include manufacturing, distribution, finance, and human resources. The company emphasizes the flexibility of its tools and has risen quickly to become the fourth-largest ERP vendor, with nearly $1 billion in 1998 sales (Kirkpatrick 1998). It has been an aggressive ERP vendor in moving to the ERP outsourcing model, where firms access the J.D. Edwards software over a secure network link to an offsite computer. This means firms pay a fixed monthly fee per user and don't have large up-front investments in expensive implementations and in software and skills that rapidly become out of date. This network-distribution-based outsourcing model for ERP vendors has yet to be validated, though, and presents many potential technical problems.
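The trade-off between a large up-front implementation and a fixed monthly per-user fee can be made concrete with a back-of-envelope break-even calculation; every figure below is hypothetical, chosen purely to illustrate the comparison.

```python
# Hypothetical cost comparison: owned ERP license vs. hosted per-user fee.
users = 200
upfront_license = 2_000_000       # one-time software + implementation cost
annual_support = 300_000          # yearly maintenance after go-live
fee_per_user_month = 400          # fixed monthly fee per user, no upfront cost

def cumulative_owned(years):
    """Total spent after `years` under the up-front license model."""
    return upfront_license + annual_support * years

def cumulative_hosted(years):
    """Total spent after `years` under the outsourced per-user model."""
    return fee_per_user_month * users * 12 * years

# First year at which owning becomes cheaper than renting.
breakeven = next(y for y in range(1, 20)
                 if cumulative_owned(y) < cumulative_hosted(y))
print(breakeven)  # 4
```

Under these invented numbers the hosted model is cheaper for the first few years, which is precisely its appeal to firms that want to avoid large up-front investments.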
4.3.
Basic Concepts of Enterprise Tools
As one reads the literature on ERP tools, it quickly becomes apparent that very little academic research has been performed in this rapidly emerging area. The field, for the most part, is defined by a few case studies, and research consists primarily of industry publications (Brown and Vessey 1999). Articles appearing in industry publications generate many new terms, which the authors seldom define, and they also tend to over-hype new technology. Various ERP-related terms have been put forth, including ERP tools, ERP integration tools, ERP network tools, ERP security tools, ERP software, and ERP system. Some vendors are also trying to move away from the term ERP to the term enterprise planning (EP) and even extended resource planning (XRP) systems (Jetly 1999). As mentioned earlier, enterprise resource planning (ERP) is a term used by the industry to refer to prepackaged software tools that can be configured and modified for a specific business to create an enterprise IS. Common enterprise-wide processes are integrated around a shared database. Although mainframe versions of ERP tools exist, for the most part ERP tools assume a client/server platform organized around a centralized database or data warehouse. The essential concepts that generally define ERP tools include integration of application modules, standard user interfaces, process view support, an integrated database with real-time access, and use of an open system architecture usually built around a client/server platform.
4.3.1.
ERP and the Process View
Another common feature of ERP tools and systems is that they usually involve the extensive application of business process reengineering (Curran and Ladd 2000). This process is often very painful because it may require an organization to reexamine all of its core processes and redesign whole departments. The ERP tool chosen actively supports this process view. In this sense, a process can be thought of as a specified series of activities, or a way of working, to accomplish an organizational
mers more closely, a company must move its focus away from a strictly intraorganizational one to an increasing awareness of transorganizational needs. Both of these needs are further complicated by the fact that each vendor defines the standard business processes in different ways, and hence the component architectures of their software reflect different sets of configurable components. Also, within the different vendor packages there are varying degrees of overlap in the functionality of each of the modules. This makes it very difficult to compare the functionality of specific modules from different vendors and how much of them can be integrated into existing systems. Because the functionality varies, it is difficult to do a proper comparison of the purchase cost and implementation cost of the modules. This problem is further complicated when one considers the different sets of industry solutions that ERP vendors push as ways of smoothing the implementation process. One way to get around some of the problems in comparing ERP tool functionality is to translate a particular company's needs into a generic set of core processes, such as order management, inventory management, and so forth. Companies may then go through several iterations of attempting to map each vendor's modules to those requirements until the desired level of functionality has been met. In this way, some of the gaps and overlaps will become more apparent. Further requirements, such as those related to technical issues, cost, support, and data integration, can be overlaid onto the matrix to refine the comparison further. This section examines the four standard core business processes that almost all ERP tools are designed to support: the sales and distribution processes, manufacturing and procurement, financial management, and
human resources. Each ERP package defines these processes slightly differently and has varying degrees of functionality, but all tend to support the basic processes presented here with various combinations of modules.
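The requirements-mapping exercise described above can be sketched as a small coverage-matrix computation; the vendor names, module names, and coverage data here are invented purely to show the gap-and-overlap idea.

```python
# Hypothetical mapping of generic core-process requirements to vendor
# modules, to surface gaps (uncovered requirements) and overlaps
# (requirements covered by more than one module).
requirements = ["order management", "inventory management",
                "financials", "human resources"]

vendor_modules = {
    "VendorA": {"SD": ["order management"],
                "MM": ["inventory management"],
                "FI": ["financials"]},
    "VendorB": {"CORE": ["order management", "inventory management",
                         "financials", "human resources"],
                "HR+": ["human resources"]},
}

def gaps_and_overlaps(modules):
    """Return (uncovered requirements, multiply covered requirements)."""
    covered = [proc for procs in modules.values() for proc in procs]
    gaps = [r for r in requirements if r not in covered]
    overlaps = sorted({p for p in covered if covered.count(p) > 1})
    return gaps, overlaps

for vendor, modules in vendor_modules.items():
    print(vendor, gaps_and_overlaps(modules))
```

Iterating this matrix as the requirement list is refined is exactly the comparison loop the text recommends; further criteria (cost, support, data integration) can be added as extra rows.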
4.4.1.
Sales and Distribution Applications
With the advent of newer, more customer-oriented business models, such as Dell's build-to-order computer manufacturing model, it is increasingly important for companies to take customer needs into account and to do so in a timely fashion. ERP tools for supporting sales and distribution are designed to give customers a tighter linkage to internal business operations and to help optimize the order management process. The processes most typically supported by the sales and distribution (sales logistics) ERP modules include order entry, sales support, shipping, quotation generation, contract management, pricing, delivery, and inquiry processing. Because the data and functional applications are integrated in a single real-time environment, all the downstream processes, such as inventory management and production planning, will also see similar improvements in timeliness and efficiency. When companies adopt a particular ERP vendor's package, they often begin with the sales and distribution module because this is the beginning of the business cycle. ERP customers are given a variety of different sales scenarios and can choose the ones that most closely mirror how they would like to run their sales processes. Then they can configure that module to reflect more accurately the way they want to support their sales processes. If they need to modify the module more substantially, they will have to employ programmers to customize the module and possibly link it to existing custom packages that they wish to keep. Whichever approach they choose, the generic sales processes chosen by the ERP customer can be used as a good starting point from which to conduct user requirements interview sessions with the end users within the company. One example of
a typical sales process that would be supported by an ERP sales and distribution module is the standard order processing scenario. In the standard scenario, the company must respond to inquiries, enter and modify orders, set up delivery, and generate a customer invoice (Curran and Ladd 2000). The system should also be able to help generate and manage quotes and contracts, do credit and inventory-availability checks, and plan shipping and generate optimal transportation routes. One area that does distinguish some of the ERP packages is their level of support for international commerce. For example, SAP is very good in its support for multiple currencies and international freight regulations. Its sales and distribution module will automatically perform a check of trade embargoes to see if any particular items are blocked or are under other trade limits for different countries. It will also maintain international calendars to check for national holidays when planning shipping schedules. This application is also where customer-related data are entered and corresponding records maintained. Pricing data specific to each customer's negotiated terms may also be part of this module in some ERP packages (Welti 1999). Other sales processes that might be considered part of the sales and distribution module (depending on the particular ERP product) include support for direct mailing campaigns, maintenance of customer contact information, and special order scenarios such as third-party and consignment orders. With the rapid growth of the newer CRM software packages, much of the support for these processes is now being moved to the Internet. E-mail and Web-based forms enabling customer input mean that ERP sales and distribution modules will have to evolve rapidly along CRM lines. Currently, the most advanced with respect to this type of Web integration is the ERP package from Oracle, but the other vendors are moving quickly to enhance their current sales and distribution module functionality (Stroud 1999).
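The standard order scenario above (credit check, inventory-availability check, then invoicing) can be sketched as a short sequence of checks; the customer data, stock levels, and limits here are all hypothetical.

```python
from dataclasses import dataclass

# Sketch of the "standard order" scenario: credit check, availability
# check, then invoice generation. All data below are hypothetical.
@dataclass
class Order:
    customer: str
    item: str
    qty: int
    unit_price: float

credit_limits = {"Acme": 10_000.0}
open_balance = {"Acme": 2_000.0}
stock = {"widget": 150}

def process_order(order):
    total = order.qty * order.unit_price
    # Credit check: would this order push the customer over its limit?
    if open_balance.get(order.customer, 0.0) + total > credit_limits.get(order.customer, 0.0):
        return ("rejected", "credit limit exceeded")
    # Inventory-availability check against current stock.
    if stock.get(order.item, 0) < order.qty:
        return ("backordered", "insufficient stock")
    stock[order.item] -= order.qty
    # Generate the invoice amount for the shipped quantity.
    return ("invoiced", round(total, 2))

result = process_order(Order("Acme", "widget", 100, 25.0))
print(result)  # ('invoiced', 2500.0)
```

In a real ERP module each check would be a configurable step against the shared database rather than an in-memory dictionary, but the control flow is the same.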
4.4.2.
Manufacturing and Procurement Applications
Certainly the most complex of the core business processes supported is the large area generally called manufacturing and procurement. Because ERP tools grew out of MRP II and its precursors, these modules have been around the longest and have been fine-tuned the most. The suite of ERP modules for this application is designed to support processes involved in everything from the procurement of goods used in production to the accounting for the costs of production. These modules are closely integrated with the financial modules for cash flow and asset management and with the sales and distribution modules for demand flow and inventory management. Key records maintained by these application modules include material and vendor information. Among the supply chain activities, the manufacturing and procurement applications handle material management, purchasing, inventory and warehouse management, vendor evaluation, and invoice verification (Bancroft et al. 1998). These activities are so complex that they are typically divided up among a whole suite of manufacturing modules, such as in SAP's R/3, which consists of materials management (MM), production planning (PP), quality management (QM), plant maintenance (PM), and project system (PS). These various modules can be configured in a wide variety of ways and can accommodate job shop, process-oriented, repetitive, and build-to-order manufacturing environments.
her analyzed and broken down by organizational unit and by specific product or project. Profitability analysis can likewise be conducted at various points in the process and be broken down by organizational unit and product or project. One must bear in mind that with a properly configured suite of ERP financials, the data collected and analyzed for these controlling reports are very timely, and generating the reports can be done much more efficiently than previously. Analysis can also be conducted across international divisions of the company and whole product lines.
4.4.4.
Human Resource Applications
Typically the last set of business processes to be converted to an ERP system, HR applications have recently become very popular as companies look to areas where they might be able to gain a competitive advantage within their respective industries. Because of this, there has been substantial improvement in the HR modules of the leading ERP vendors. This improvement in turn has led to a substantial second wave of ERP implementations among early adopters who may have adopted only the financial and manufacturing or sales modules in the first wave of ERP implementations (Caldwell and Stein 1998). In most organizations, HR has too often become a catchall for those functions that do not seem to fit elsewhere. As a result, HR typically houses such disparate functions as personnel management, recruiting, public relations, benefit plan management, training, and regulatory compliance. In ERP
applications, the HR modules house the employee data as well as data about the current organizational structure and the various standard job descriptions. The latest wave of ERP tools now supports most of these activities in a growing suite of HR modules and submodules. For example, personnel management includes the recruitment, training and development, and compensation of the corporate workforce. Typical recruitment activities include realizing the internal need for new personnel, generating a job description, posting advertisements, processing applicants, screening, and selecting the final choice for the position. Once the position has been filled, training assessments can be done and a benefit and development plan can be implemented. Other core processes in the HR module include payroll processing, benefits, and salary administration; sometimes they include time and travel management to help employees streamline reimbursement for travel expenses.
4.5.
Implementing an Enterprise System
Because of the large investment in time and money, the selection of the right enterprise tool has a tremendous impact on the firm's strategy for many years afterward. Choosing a particular ERP package means that a company is entrusting its most intimate core processes to that vendor's software. This is a decision that cannot be easily reversed after it has been made. Some firms have chosen to implement an ERP package without realizing the extent of the commitment involved and then had to stop the project in the middle (Stedman 1999b). This is often because these companies are unwilling to reexamine their core processes to take advantage of the integrated nature of an ERP package. What makes choosing an ERP package different from other types of organizational computing decisions? As discussed above, apart from the high-cost issue, the large scale and scope of an ERP system make it even more critical to the well-being of the enterprise than local or functional IS applications. It is also different in that it is oriented toward supporting business processes rather than separate islands of automation, so the typical implementation involves business experts more than technology experts, as opposed to traditional system implementations. The process-oriented nature of an ERP implementation also means that cross-functional teams are a large part of the design process. Because of the high potential impact on an organization due to the redesign of core business processes and the reduction of the labor force needed for these processes, one of the more subtle differences is that end users may fight the changes brought about by an ERP system. Therefore, project leaders must also be skilled in change management (Davenport 1998).
4.5.1.
Choosing an Enterprise Tool
Given that an ERP implementation is so risky to an enterprise, what should a firm consider when choosing an ERP tool? There are currently a handful of case studies in the academic literature and numerous anecdotal references in the trade literature. From these we can derive some factors that a firm should take into account. Probably the most important criterion is how well the software functionality fits the requirements of the firm. A firm could develop a matrix of its core business process requirements and then try to map the modules from the various ERP packages to the matrix to see where obvious gaps and overlaps might exist (Jetly 1999). A byproduct of this analysis will be a better understanding of the level of data and application integration that the package offers. Increasingly, the ERP package's support for electronic commerce functions will be a major factor in choosing the right package. Beyond an analysis of the ERP package functionality, a firm must also take technical factors into consideration. Each package should be evaluated for the flexibility of the architectures it supports and for its scalability. How well will this architecture fit with the existing corporate systems? How many concurrent users will it support, and what is the maximum number of transactions? Will the module require extensive modifications to fit the firm's business processes, and how difficult will it be to change them again if the business changes? Given the potential complexity of an ERP implementation, potential users should be concerned about the ease of use and the amount of training that will be required for end users to get the most out of the new system. ERP vendor-related questions should center on the level of support that is offered and how much of that support can be found locally. Firms should also look into the overall financial health and stability of their prospective ERP vendors and the scope and frequency of their new product releases and updates. With respect to cost, several factors should be kept in mind when choosing an ERP package. First is the cost of the software and hardware that will be required. Besides the required number of new database and application servers, hardware for backup and archiving may also be required. Software costs are typically on a per-module basis and may vary between modules, but a firm may also have to purchase submodules in order to get the desired functionality. If these modules need to be customized or special interfaces with existing systems need to be programmed, firms should consider the amount of customization required. This cost is exaggerated for some ERP packages, such as SAP's R/3, which require knowledge of the rather exotic proprietary language ABAP/4 in order to do extensive customization. Training and ongoing support for these packages will be a continuing expense that should also be considered.
The most commonly cited of these factors is the importance of management support for the success of the project. This factor has been widely documented for other types of business computing projects, but it is especially important in implementing an ERP system because this may involve a radical redesign of some core processes and resulting major changes in the organization. In a project as complex as an ERP system, problems will occur, and leaders must maintain a strong vision to persevere through all the changes. One study showed that nearly 75% of new implementations resulted in some organizational change, and these changes could be very severe (Boudreau and Robey 1999). Project teams should also be balanced with respect to business area experts and IS professionals. As the software becomes easier to configure and gains more complete functionality, it seems clear that the trend of shifting more responsibility away from IS professionals to business experts will continue (Bancroft et al. 1998). Because of the cross-functional nature of ERP-based ISs, it is also important that team members be drawn from multiple business units and assigned full-time to the project (Norris et al. 1998).
4.6.
Future of Enterprise Tools
The new generation of enterprise tools is beginning to enable large corporations to reap significant gains in efficiency and productivity, but the investment costs of implementing an ERP system can be a huge hurdle for smaller companies. Enterprise tools will continue to improve in overall functionality and ease of use as modules are developed and refined. The functionality will continue to be extended to the customer, supplier, and other business partners in what some call the extended resource planning (XRP) software generation (Jetly 1999). Most of these extensions are based on some form of e-business model using the Internet. A description of some of the principal newer enhancements to ERP systems follows. These are taking enterprise ISs into the transorganizational realm.
4.6.1.
Enterprise Tools and Supply Chain Management (SCM)
One logical outgrowth of the increased integration of enterprise-wide data and support for core business processes is the current interest in supply chain management. In the 1980s, much of the focus in manufacturing was on increasing the efficiency of manufacturing processes using JIT, TQM, Kanban, and other such techniques. Given the large investment in these improvements in manufacturing strategies, companies managed to lower the cost of manufacturing goods as far as was practical. These same companies are finding that they can take a systems view of the whole supply chain, from suppliers to customers, and leverage their investment in ERP systems to begin to optimize the efficiency of the whole logistics network. In this context, supply chain management is defined as follows:

A set of approaches utilized to efficiently integrate suppliers, manufacturers, warehouses, and stores, so that merchandise is produced and distributed at the right quantities, to the right locations, and at the right time, in order to minimize system-wide costs while satisfying service level requirements (Simchi-Levi et al. 2000).
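The "information trail" idea discussed in this section, where data are collected at defined points in the supply chain and used to estimate lead times, can be sketched as follows; the unit identifier, event names, and dates are all hypothetical.

```python
from datetime import datetime

# Sketch of a per-product information trail: timestamps collected at
# defined points in the supply chain. All data here are hypothetical.
trail = {
    "unit-001": {
        "ordered":   datetime(2000, 3, 1),
        "produced":  datetime(2000, 3, 9),
        "shipped":   datetime(2000, 3, 11),
        "delivered": datetime(2000, 3, 15),
    },
}

def lead_time_days(unit, start="ordered", end="delivered"):
    """Elapsed days between two collection points for one tracked unit."""
    events = trail[unit]
    return (events[end] - events[start]).days

print(lead_time_days("unit-001"))                         # 14
print(lead_time_days("unit-001", "produced", "shipped"))  # 2
```

Aggregating such spans across many units is what lets the integrated information flow feed planning and lead-time estimation in real time.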
Until recently, ERP vendors have focused their efforts on providing real-time support for all the manufacturing transactions that form the backbone of the supply chain and have left the more focused decision support applications to software companies such as i2 Technologies and Manugistics. These companies have produced whole families of decision support systems to help analyze the data produced by the ERP-based IS and make better decisions with respect to the supply chain. The interfaces between the standard ERP tools and the new generation of DSSs are where some of the most interesting and valuable research and development is being conducted. As companies improve their support of specific supply chain activities, they can expand the level of integration to include supporting activities from financial operations and human resources. When taking a supply chain view of its operations, a company must begin with the notion that there are three basic flows within the enterprise: materials, financials, and information. At defined points in the supply chain, data can be collected and stored. The goal of ERP-based ISs in this view is to enable the enterprise to collect and access this information in a timely fashion. This reduces inventory levels, reduces costs, and improves customer service and satisfaction, and each product produced by the enterprise will also have an accompanying information trail that can be tracked and used for planning and estimating lead times. It must be kept in mind that this integrated information flow is available in real time as a basis for decision making. Much of the recent interest in supply chain management and ERP stems from the e-business models that have been emerging. These models now call for tighter linkages between suppliers and customers, and the ERP tools must support these linkages. This extended supply chain management makes use of the Internet to link with suppliers and customers via standard browsers and TCP/IP protocols. This means that not only will customers be able to track their orders, but companies will be sharing inventory and BOM information with their suppliers. In fact, one of the major unresolved issues in ERP research is the question of inter-ERP linkage. Most vendors have their own standard
4.6.3.
Enterprise Tools and Electronic Commerce (EC)
As the ERP market has matured, ERP vendors have rushed to be the first to redesign their products to Web-enable their enterprise tools. The rapidly evolving model of ERP and e-commerce means that users anywhere in the world will be able to access their specific enterprise-wide ISs via a standard, browser-based interface. In this model, the ERP platform defines a new suite of e-business modules that are distributed to suppliers and clients using the standards of the Internet. The ERP tool can thus be used to build an e-business portal designed specifically for everything from small e-tailers to large multinational corporations. This portal will streamline links with customers, suppliers, employees, and business partners. It will extend the linkage between the enterprise and transorganizational classes of ISs. One way this model is emerging is as a turnkey supplier of e-business function support. For example, SAP's new Web portal model, MySAP.com, can be used to help a new e-tailer design and build its website. Using an innovative natural language interface, a businessperson can describe the features he or she wants for the company website, and a basic web page will be automatically generated. The website will be further linked with whatever functionality is desired from the back-office enterprise IS, via the Web-enabled suite of modules in MySAP.com. These modules might include processing customer orders, creating and maintaining a corporate catalog for customers to browse, processing bank transactions, monitoring customer satisfaction, and allowing customers to check the status of their orders. The same platform can be used to maintain the company's business-to-business applications, such as eProcurement and information sharing with business partners and clients. For a fee, MySAP.com users can rent back-office decision support software for forecasting, data mining, and inventory planning and management. These services are delivered over the Internet, and users pay on a per-transaction basis. This Web portal also serves as the platform for business-to-employee transactions, helping employees communicate better, monitor their benefit plans, plan for and receive training, and possibly providing more advanced knowledge management support.
5.
TOOLS FOR ANALYSIS AND DESIGN OF IS
Managing the development and implementation of new IS is vital to the long-term
viability of the organization. In this section, we present and discuss some of t
he tools and methodologies that have emerged over the last 40 years to help faci
litate and manage the development of information systems. The systems developmen
t life cycle (SDLC) is introduced, and the tools used within each step of the li
fe cycle are explained. It is important to realize that although professional sy
stems developers have employed these tools and methodologies for the last 20 yea
rs with some success, the tools do not guarantee a successful outcome. Utilizati
on of the SDLC will increase the likelihood of development success, but other fo
rces may affect the outcome. Careful attention must also be paid to the needs of
users of the system and to the business context within which the system will be
implemented. This attention will increase the likelihood of system acceptance a
nd viability.
5.1.
The Systems Development Life Cycle
98
TECHNOLOGY
5.2.1.
Feasibility Analysis
Information systems development projects are almost always done under tight budget and time constraints. This means that during the systems planning phase of the SDLC, a feasibility assessment of the project must be performed to ensure that the project is worth the resources invested and will generate positive returns for the organization. Typically, feasibility is assessed on several different dimensions, such as technical, operational, schedule, and economic.

Technical feasibility ascertains whether the organization has the technical skills to develop the proposed system. It assesses the development group's understanding of the possible target technologies, such as hardware, software, and operating environments. During this assessment, the size of the project is considered, along with the system's complexity and the group's experience with the required technology. From these three dimensions, the proposed project's risk of failure can be estimated. All systems development projects involve some degree of failure risk; some are riskier than others. If system risk is not assessed and managed, the organization may fail to receive anticipated benefits from the technology, fail to complete the development in a timely manner, fail to achieve desired system performance levels, or fail to integrate the new system properly with the existing information system architecture. Reducing the scope of the system, acquiring the necessary development skills from outside, or unbundling some of the system components to make the development project smaller can reduce these risks.

Operational feasibility examines the manner in which the system will achieve desired objectives. The system must solve some business problem or take advantage of opportunities. The system should make an impact on value chain activities in such a way that it supports the overall mission of the organization or addresses new regulations or laws. The system should be consistent with the operational environment of the organization and should, at worst, be minimally disruptive to organizational procedures and policies.

Another dimension of feasibility, projected schedule feasibility, relates to project duration. The purpose of this assessment is to determine whether the timelines and due dates can be met and whether meeting those dates will be sufficient to meet the objectives of the organization. For example, it could be that the system must be developed in time to meet a regulatory deadline or must be completed at a certain point in a business cycle, such as a holiday or a point in the year when new products are typically introduced to market.

Usually, the economic feasibility assessment is the one of overriding importance. In this case, the system must be able to generate sufficient benefits, in either cost reductions or revenue enhancements, to offset the cost of developing and operating the system. The benefits of the system could include reducing the occurrence of errors associated with a certain procedure, reducing the time involved in processing transactions, reducing the amount of human labor associated with a procedure, or providing a new sales opportunity. The benefits that can be directly measured in dollars are referred to as tangible benefits. The system may also have some benefits that, though not directly quantifiable, assist the organization in reaching its goals. These are referred to as intangible benefits. Only tangible benefits are used in the final economic feasibility assessment, however.

After benefits have been calculated, the tangible costs of the system are estimated. From a development perspective, these costs include hardware costs, labor costs, and operational costs such as site renovation and employee training. The costs can be divided into two classes: one-time (or sunk) costs, and recurring costs, those associated with the operation of the new system. The benefits and costs are determined for a prespecified period of time, usually the depreciable life of the system. This time period could be 1 year, 5 years, or 10 years, depending on the extent of the investment and the financial guidelines used within the organization. The tangible costs and benefits are then used to determine the viability of the project. Typically, this involves determining the net present value (NPV) of the system. Since the cost of developing the system is incurred before the system begins to generate value for the organization over the life of the system, it is necessary to discount the future revenue streams to determine whether the project is worth undertaking. The present value of an anticipated cost or benefit at a future point in time is determined by:

    PVn = Yn / (1 + i)^n

where PVn is the present value of the year-n cash flow, Yn is the monetary value ($) of the cost or benefit in year n, and i is the interest (discount) rate. The net present value (NPV) of the project over a horizon of T years is then the sum of the discounted cash flows:

    NPV = PV1 + PV2 + . . . + PVT
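The discounting arithmetic above is straightforward to automate. A minimal sketch in Python (the function names are illustrative, not from the chapter):

```python
def present_value(amount, rate, year):
    """PVn = Yn / (1 + i)^n: discount a year-n cash flow back to today."""
    return amount / (1 + rate) ** year

def net_present_value(cash_flows, rate):
    """NPV = sum of PVt for t = 1..T (cash_flows[0] is the year-1 flow)."""
    return sum(present_value(y, rate, t)
               for t, y in enumerate(cash_flows, start=1))

# A benefit stream of $75,000 per year for 5 years at a 10% discount rate:
benefits = [75_000] * 5
print(round(net_present_value(benefits, 0.10)))  # 284309
```

Costs are handled the same way, after which the overall NPV is simply the discounted benefits minus the discounted costs (including any year-0 sunk cost, which needs no discounting).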
DFDs allow the same system to be examined at many levels of detail, from the context diagram level through whatever level the development team chooses to stop at. The least detailed view in a DFD set is the context diagram, also called a "black box" diagram because it shows only the sources and sinks and their connection to the system through aggregate data flows. The system itself is represented by only one process. That central process is decomposed into many subsystems in the Level 0 diagram, which shows the system's major processes, data flows, and data stores at a higher level of detail. Figure 6 shows a Level 0 diagram for a simple payroll processing system. Each process is numbered, and each represents a major information-processing activity. When the diagram moves to Level 1, each subsystem from Level 0 is individually decomposed. Thus, process 1.0 on the DFD may be functionally subdivided into processes 1.1, 1.2, 1.3, and so on for as many steps as necessary. In Figure 7, we show a Level 1 diagram that decomposes the third process from the Level 0 diagram in Figure 6. As the diagrams are decomposed further, at Level 2 the steps in process 1.1 may be decomposed into subprocesses 1.1.1, 1.1.2, 1.1.3, etc. Each Level 2 DFD expands on a single process from Level 1. These diagrams can continue to be leveled down until the steps are so simple that they cannot be decomposed any further. Such a set of DFDs is referred to as primitive diagrams because no further detail can be gained by decomposing further. The power of this tool is that while it is conceptually simple, with only four symbols, it is able to convey a great degree of detail through the nesting of the decompositions. For systems analysis, this detail is invaluable for understanding existing system functionality and processes.
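The numbering convention behind this leveling (1.0 decomposes into 1.1, 1.2, . . . ; 1.1 into 1.1.1, 1.1.2, . . .) is mechanical enough to capture in a few lines. A sketch in Python; the helper is hypothetical, not part of any DFD tool described here:

```python
def child_ids(process_id, n):
    """Generate the DFD numbers for n subprocesses of a decomposed process.

    E.g. process "1.0" decomposes into "1.1", "1.2", ...;
    process "1.1" decomposes into "1.1.1", "1.1.2", ...
    """
    # By convention the Level 0 suffix ".0" is dropped before nesting deeper.
    base = process_id[:-2] if process_id.endswith(".0") else process_id
    return [f"{base}.{k}" for k in range(1, n + 1)]

print(child_ids("1.0", 3))  # ['1.1', '1.2', '1.3']
print(child_ids("1.1", 2))  # ['1.1.1', '1.1.2']
```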
5.2.3.
Structured English
The next step in process modeling, usually performed after the creation of the DFDs, is to model process logic with Structured English, a modified form of the English language used to express information system process procedures. This tool uses the numbering convention from the DFDs to designate the process under examination, then semantically describes the steps in that DFD process. Structured English is used to represent the logic of the process in a manner that is easy for a programmer to turn into computer code. In fact, similar tools were called "pseudocode" to indicate that they were closely related to the design of computer programs. At that point they were used
Figure 6 Data Flow Diagram for a Simple Payroll System.
READ Order File
WHILE NOT End Of File DO
BEGIN
    IF READ Order Address
        GENERATE Mailing Label
    END IF
END DO
Structured English has been found to be a good supplement to DFDs because, in addition to helping developers at the implementation stage, it is a useful way to communicate processing requirements to users and managers. It is also a good way to document user requirements for use in future SDLC activities.
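The mailing-label example above translates almost line for line into executable code, which is precisely the point of the notation. A sketch in Python, assuming a hypothetical order file with one address per line:

```python
def generate_mailing_labels(order_lines):
    """READ Order File; WHILE NOT End Of File: GENERATE Mailing Label."""
    labels = []
    for address in order_lines:   # WHILE NOT End Of File DO
        if address.strip():       # IF READ Order Address (a non-empty line)
            labels.append(f"MAIL TO: {address.strip()}")  # GENERATE Mailing Label
    return labels

orders = ["12 Oak St, Springfield", "", "9 Elm Ave, Shelbyville"]
print(generate_mailing_labels(orders))
# ['MAIL TO: 12 Oak St, Springfield', 'MAIL TO: 9 Elm Ave, Shelbyville']
```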
5.2.4.
ERD / Data Dictionaries
During the analysis phase of the SDLC, data are represented in the DFDs in the form of data flows and data stores. The data flows and data stores are depicted as DFD components in a processing context that will enable the system to do what is required. To produce a system, however, it is necessary to focus on the data, independent of the processes. Conceptual data modeling is used to examine the data "at rest" in the DFD to provide more precise knowledge about the data, such as definitions, structure, and relationships within the data. Without this knowledge, little is actually known about how the system will have to manipulate the data in the processes. Two conceptual data modeling tools are commonly used: entity relationship diagrams (ERDs) and data dictionaries (DDs). Each of these tools provides critical information to the development effort. ERDs and DDs also provide important documentation of the system.

An ERD is a graphical modeling tool that enables the team to depict the data and the relationships that exist among the data in the system under study. Entities are anything about which the organization stores data. Entities are not necessarily people; they can also be places, objects, events, or concepts. Examples of entities would be customers, regions, bills, orders, and divisions. Entity types are collections of entities that share a common property or characteristic, and entity instances are single occurrences of an entity type. For example, CUSTOMER is an entity type, while Bill Smith is an entity instance. For each entity type, one needs to list all the attributes of the entity that one needs to store in the system. An attribute is a named property or characteristic of an entity type. For example, the attributes for a CUSTOMER could be customer number, customer name, customer address, and customer phone number. One attribute, or a combination of attributes, uniquely identifies an instance of the entity. Such an attribute is referred to as a candidate key, or identifier. The candidate key must be isolated in the data: it is the means by which an entity instance is identified.

Associations between entity types are referred to as relationships. These are the links that exist between the data entities and tie together the system's data. An association usually means that an event has occurred or that some natural linkage exists between the entity types. For example, an order is associated with an invoice, or a customer is associated with accounts receivable instances. Usually the relationship is named with a meaningful verb phrase that describes the relationship between the entities. The relationship can then be read from one entity to the other across the relationship. Figure 8 shows a simple ERD for a system in a university environment.

Once an ERD has been drawn and the attributes defined, the diagram is refined with the degree of the relationships between the entities. A unary relationship is a one-to-one or one-to-many relationship within the entity type; for example, a person is married to another person, or an employee manages many other employees. A binary relationship is a one-to-one, one-to-many, or many-to-many relationship between two entity types; for example, an accounts receivable instance is associated with one customer, or one order contains many items. A ternary relationship is a simultaneous relationship between three entity types; for example, a store, a vendor, and a part may all share stock-on-hand. Once the degree of the relationships has been determined, the ERD can be used to begin the process of refining the data model via normalization of the data. Normalization is necessary to increase the stability of the data so that there are fewer data updating and concurrency anomalies. The normalization process is described in detail in every database management textbook (e.g., McFadden et al., 1999).

The ERD is an excellent tool to help understand and explore the conceptual data model. It is less useful, however, for documentation purposes. To address the documentation requirements, a data dictionary (DD) is used. The DD is a repository of all data definitions for all the data in the system. A DD entry is required for each data flow and data store on every DFD and for each entity in the ERDs. Developing a DD entry typically begins with naming the data structure being described, either by flow name, file name, or entity type. The list of attributes follows, with a description of each attribute, the attribute type, and the maximum size of the attribute, in characters. The attribute type is specified (e.g., integer, text, date, or binary). In the case of a data file, the average size of the file is indicated.
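These concepts (entity types, attributes, candidate keys, and dictionary entries) map directly into code. A minimal sketch in Python: the CUSTOMER attributes come from the example in the text, but the dataclass and the dictionary layout are illustrative assumptions, not a prescribed DD format:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    """Entity type CUSTOMER; customer_number is the candidate key (identifier)."""
    customer_number: int
    customer_name: str
    customer_address: str
    customer_phone: str

# A data dictionary entry for the entity: attribute -> (type, max size in characters)
dd_entry = {
    "structure": "CUSTOMER",
    "attributes": {
        "customer_number":  ("integer", 10),
        "customer_name":    ("text", 40),
        "customer_address": ("text", 80),
        "customer_phone":   ("text", 15),
    },
}

# An entity instance of the entity type:
bill = Customer(1001, "Bill Smith", "12 Oak St, Springfield", "555-0123")
print(bill.customer_number)  # the candidate key uniquely identifies this instance
```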
Figure 9 A Gantt Chart for a Simple Lawn-Maintenance Task.
example of this would be program code development and site preparation during system implementation. Figure 9 shows a Gantt chart for a lawn-maintenance job with two workers assigned to each activity (Wu and Wu 1994).

PERT diagrams serve the same function as Gantt charts but are more complex and more detailed. They are essential tools for managing the development process. PERT is an acronym for Program Evaluation and Review Technique; the technique was developed by the United States Navy in the 1950s to schedule the activities involved in the construction of naval vessels. Since then, PERT has found its way into production environments and structured system development because it allows one to show the causal relationships across activities, which a Gantt chart does not depict. PERT diagrams show the activities and then the relationships between the activities. They also show which activities must be completed before other activities can begin and which activities can be done in parallel. Figure 10 shows an example of a PERT diagram for the lawn-maintenance job (Wu and Wu 1994).

Which planning tool is chosen depends upon the complexity of the system being developed. In the case of a systems development effort that is relatively small and simple and where the design can be completed quickly, the Gantt chart is preferable. Where the design is large, complex, and involves many activities that must be carefully coordinated, the PERT diagram is best.
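The precedence information that a PERT diagram adds over a Gantt chart is exactly what is needed to compute a schedule. A sketch of the forward pass (earliest finish times) in Python; the task names and durations are hypothetical stand-ins for the lawn-maintenance example, not figures from the chapter:

```python
def earliest_finish(tasks):
    """Forward pass over a precedence network.

    tasks: {name: (duration, [predecessor names])};
    returns {name: earliest finish time}. Assumes tasks are listed in an
    order where every predecessor appears before its successors.
    """
    finish = {}
    for name, (duration, preds) in tasks.items():
        start = max((finish[p] for p in preds), default=0)
        finish[name] = start + duration
    return finish

# Hypothetical lawn job: mowing and edging can run in parallel,
# but cleanup cannot begin until both are done.
tasks = {
    "mow":     (40, []),
    "edge":    (25, []),
    "cleanup": (15, ["mow", "edge"]),
}
ef = earliest_finish(tasks)
print(ef["cleanup"])  # 55: project duration along the critical path (mow -> cleanup)
```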
5.2.6.
JAD / RAD
Rapid application development (RAD), joint application development (JAD), and prototyping are approaches used in system development that bypass parts of the SDLC in an effort to speed up the
Figure 10 A PERT Diagram for a Simple Lawn-Maintenance Task.
y across all of them. This, in turn, can increase the productivity of the SDLC team because a data store defined in a DFD can be automatically inserted as an entry in the data dictionary. It can also increase the consistency of the development effort across projects. The disadvantages of CASE mostly revolve around cost. CASE tools tend to be very expensive, both in monetary and in resource terms. They cost a lot to acquire, and learning to use them can be very time consuming because of the number of features and the functionality that they provide. They also place great demands on the hardware in terms of processing needs and storage requirements at the server and workstation level. Essentially, all they do is automate manual processes, so the decision to use CASE hinges on the traditional tradeoff between labor costs and automation costs.
6.
CONCLUSION
Over the past 35 years, tremendous strides have been made in management information systems, contributing greatly to organizations' abilities to grow, provide quality outputs, and operate efficiently. These strides have been fostered by continually improving hardware, advances in software tools for building ISs, formal methods for IS development, and progress in understanding the management of ISs. There is no sign that these trends are abating. Nor is there any sign that an organization's needs
TABLE 2  ABC Company Feasibility Analysis: Database Server Project

                          Year 0      Year 1      Year 2      Year 3      Year 4      Year 5       Totals
Net Benefits                   0      75,000      75,000      75,000      75,000      75,000
Discount Rate             1.0000  0.90909091  0.82644628   0.7513148  0.68301346  0.62092132
PV of Benefits                 0      68,182      61,983      56,349      51,226      46,569    $ 284,309
NPV of all Benefits            0      68,182     130,165     186,514     237,740     284,309

Sunk Costs           $ (100,000)
Net Costs                      0    (22,000)    (21,000)    (23,000)    (24,000)    (25,000)
Discount Rate             1.0000  0.90909091  0.82644628   0.7513148  0.68301346  0.62092132
PV of Costs                    0    (20,000)    (17,355)    (17,280)    (16,392)    (15,523)  $ (186,551)
NPV of all Costs               0    (20,000)    (37,355)    (54,636)    (71,028)    (86,551)

Overall NPV             $ 97,758
Overall ROI                  52%

Break-Even Analysis
Yearly NPV Cash flow   (100,000)      48,182      92,810     131,878     166,712     197,758
Overall NPV Cash flow  (100,000)    (51,818)      40,992     172,870     339,582     537,340

Break even is between year 1 and 2.
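The bottom-line figures in Table 2 can be reproduced in a few lines of Python; the cash flows and the 10% discount rate are read directly off the table:

```python
rate = 0.10
benefits = [75_000] * 5                            # years 1-5
costs = [22_000, 21_000, 23_000, 24_000, 25_000]   # recurring costs, years 1-5
sunk = 100_000                                     # one-time (year 0) cost

def pv(amount, t):
    """Discount a year-t cash flow at the given rate."""
    return amount / (1 + rate) ** t

pv_benefits = sum(pv(b, t) for t, b in enumerate(benefits, start=1))
pv_costs = sunk + sum(pv(c, t) for t, c in enumerate(costs, start=1))
overall_npv = pv_benefits - pv_costs

print(round(pv_benefits), round(pv_costs), round(overall_npv))
# 284309 186551 97758, matching the table's totals
```

The table's 52% ROI matches if ROI is computed as overall NPV divided by the NPV of all costs (97,758 / 186,551 is roughly 0.52); that interpretation is an inference from the figures, not stated in the table.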
omy, organizations (both traditional and virtual) strive to ensure that througho
ut their operations the right knowledge is available to the right processors (hu
man and computer-based) at the right times in the right presentation formats for
the right cost. Along with decision support systems, information systems built
with tools such as those described here have a major role to play in making this
happen.
REFERENCES
Bancroft, N., Seip, H., and Sprengel, A. (1998), Implementing SAP R/3: How to Introduce a Large System into a Large Organization, 2nd Ed., Manning, Greenwich, CT.
Barquín, R. C., and Edelstein, H. (1997), Building, Using, and Managing the Data Warehouse, Data Warehousing Institute Series, Prentice Hall PTR, Upper Saddle River, NJ.
Barry, M. (1995), "Getting a Grip on Data," Progressive Grocer, Vol. 74, No. 11, pp. 75–76.
Bonczek, R., Holsapple, C., and Whinston, A. (1980), "Future Directions for Decision Support Systems," Decision Sciences, Vol. 11, pp. 616–631.
Boudreau, M., and Robey, D. (1999), "Organizational Transition to Enterprise Resource Planning Systems: Theoretical Choices for Process Research," in Proceedings of the Twentieth International Conference on Information Systems (Charlotte, NC), pp. 291–300.
Bowen, T. (1999), "SAP Is Banking on the CRM Trend," Infoworld, November, pp. 74–76.
Brookshear, J. G. (1999), Computer Science: An Overview, Addison-Wesley, Reading, MA.
Brown, C., and Vessey, I. (1999), "ERP Implementation Approaches: Towards a Contingency Framework," in Proceedings of the Twentieth International Conference on Information Systems (Charlotte, NC), pp. 411–416.
Brown, S. (1995), "Try, Slice and Dice," Computing Canada, Vol. 21, No. 22, p. 44.
Caldwell, B., and Stein, T. (1998), "Beyond ERP: The New IT Agenda," Information Week, November 30, pp. 30–38.
Castro, E. (1999), HTML 4 for the World Wide Web: Visual QuickStart Guide, Peachpit Press, Berkeley, CA.
Ching, C., Holsapple, C., and Whinston, A. (1996), "Toward IT Support for Coordination in Network Organizations," Information and Management, Vol. 30, No. 4, pp. 179–199.
Curran, T., and Ladd, A. (2000), SAP R/3 Business Blueprint: Understanding Enterprise Supply Chain Management, 3rd Ed., Prentice Hall PTR, Upper Saddle River, NJ.
Davenport, T. (1998), "Putting the Enterprise into the Enterprise System," Harvard Business Review, Vol. 76, July/August, pp. 121–131.
Davis, B. (1999), "PeopleSoft Fills Out Analysis Suite," Information Week, Manhasset, NY, July 26.
Deutsch, C. (1998), "Software That Can Make a Grown Company Cry," The New York Times, November 8.
Doane, M. (1997), In the Path of the Whirlwind: An Apprentice Guide to the World of SAP, The Consulting Alliance, Sioux Falls, SD.
Eliason, A., and Malarkey, R. (1999), Visual Basic 6: Environment, Programming, and Applications, Que Education & Training, Indianapolis, IN.
Elmasri, R., and Navathe, S. B. (2000), Fundamentals of Database Systems, 3rd Ed., Addison-Wesley, Reading, MA.
Fogarty, K. (1994), "Data Mining Can Help to Extract Jewels of Data," Network World, Vol. 11, No. 23.
Francett, B. (1994), "Decisions, Decisions: Users Take Stock of Data Warehouse Shelves," Software Magazine, Vol. 14, No. 8, pp. 63–70.
Goldbaum, L. (1999), "Another Day, Another Deal in CRM," Forbes Magazine, October, pp. 53–57.
Hackathorn, R. D. (1995), Web Farming for the Data Warehouse, Morgan Kaufmann Series in Data Management Systems, Vol. 41, Morgan Kaufmann, San Francisco.
Hall, M. (1998), Core Web Programming, Prentice Hall PTR, Upper Saddle River, NJ.
Holsapple, C. (1995), "Knowledge Management in Decision Making and Decision Support," Knowledge and Policy, Vol. 8, No. 1, pp. 5–22.
Holsapple, C., and Sena, M. (1999), "Enterprise Systems for Organizational Decision Support," in Proceedings of the Americas Conference on Information Systems (Milwaukee, WI), pp. 216–218.
Holsapple, C., and Singh, M. (2000), "Electronic Commerce: Definitional Taxonomy, Integration, and a Knowledge Management Link," Journal of Organizational Computing and Electronic Commerce, Vol. 10, No. 3.
Holsapple, C., and Whinston, A. (1987), "Knowledge-Based Organizations," The Information Society, Vol. 5, No. 2, pp. 77–90.
Holsapple, C. W., and Whinston, A. B. (1996), Decision Support Systems: A Knowledge-Based Approach, West, Minneapolis/St. Paul.
Holsapple, C. W., Joshi, K. D., and Singh, M. (2000), "Decision Support Applications in Electronic Commerce," in Handbook on Electronic Commerce, M. Shaw, R. Blanning, T. Strader, and A. Whinston, Eds., Springer, Berlin, pp. 543–566.
Jetly, N. (1999), "ERP's Last Mile," Intelligent Enterprise, Vol. 2, No. 17, December, pp. 38–45.
Kirkpatrick, D. (1998), "The E-Ware War: Competition Comes to Enterprise Software," Fortune, December 7, pp. 102–112.
Main, M., and Savitch, W. (1997), Data Structures and Other Objects Using C++, Addison-Wesley, Chicago.
Marion, L. (1999a), "Big Bang's Return," Datamation, October, pp. 43–45.
an issue, and generating alternative courses of action that will resolve the needs and satisfy objectives. It should also provide support in enhancing the decision maker's abilities to assess the possible impacts of these alternatives on the needs to be fulfilled and the objectives to be satisfied. This analysis capability must be associated with a capability to enhance the ability of the decision maker to provide an interpretation of these impacts in terms of objectives. This interpretation capability will lead to evaluation of the alternatives and selection of a preferred alternative option. These three steps of formulation, analysis, and interpretation are fundamental for formal analysis of difficult issues. They are the fundamental steps of systems engineering and are discussed at some length in Sage (1991, 1992, 1995), from which much of this chapter is derived.

Figure 1 Organizational Information and Decision Flow.

It is very important to
note that the purpose of a decision support system is to support humans in the
performance of primarily cognitive information processing tasks that involve dec
isions, judgments, and choices. Thus, the enhancement of information processing
in systems and organizations (Sage 1990) is a major feature of a DSS. Even thoug
h there may be some human supervisory control of a physical system through use o
f these decisions (Sheridan 1992), the primary purpose of a DSS is support for c
ognitive activities that involve human information processing and associated jud
gment and choice. Associated with these three steps must be the ability to acqui
re, represent, and utilize information or knowledge and the ability to implement
the chosen alternative course of action. The extent to which a support system p
ossesses the capacity to assist a person or a group to formulate, analyze, and i
nterpret issues will depend upon whether the resulting system should be called a
management information system (MIS), a predictive management information system
(PMIS), or a decision support system. We can provide support to the decision ma
ker at any of these several levels, as suggested by Figure 2. Whether we have a
MIS, a PMIS, or a DSS depends upon the type of automated computer-based support
that is provided to the decision maker to assist in reaching the decision. Funda
mental to the notion of a decision support system is assistance provided in asse
ssing the situation, identifying alternative courses of action and formulating t
he decision situation, structuring and analyzing the decision situation, and the
n interpreting the results of analysis of the alternatives in terms of the value
system of the decision maker. In short, a decision support system provides a de
cision recommendation capability. A MIS or a PMIS does not, although the informa
tion provided may well support decisions. In a classical management information
system, the user inputs a request for a report concerning some question, and the
MIS supplies that report. When the user is able to pose a "what if?" type of question and the system is able to respond with an "if . . . then" type of response, then we have a
Figure 2 Conceptual Differences Between MIS, PMIS, and DSS.
requirements needed for a specific DSS and the systems management skills needed to design a support process. This suggests that the decision support generator is a potentially very useful tool, in fact a design level, for DSS system design. The DSS generator is a set of software products, similar to a very advanced generation system development language, which enables construction of a specific DSS without the need to formally use micro-level tools from computer science, operations research, and systems analysis in the initial construction of the specific DSS. These have, in effect, already been embedded in the DSS generator. A DSS generator contains an integrated set of features, such as inquiry capabilities, modeling language capabilities, financial and statistical (and perhaps other) analysis capabilities, and graphic display and report preparation capabilities. The major support provided by a DSS generator is that it allows the rapid construction of a prototype of the decision situation and allows the decision maker to experiment with this prototype and, on that basis, to refine the specific DSS such that it is more representative of the decision situation and more useful to the decision maker. This generally reduces, often to a considerable degree, the time required to engineer and implement a DSS for a specific application. This notion is not unlike that of software prototyping, one of the principal macro-enhancement software productivity tools (Sage and Palmer 1990), in which the process of constructing the prototype DSS through use of the DSS generator leads to a set of requirements specifications for a DSS that are then realized in efficient form using DSS tools directly.

The primary advantage of the DSS generator is that it is something that the DSS designer can use for direct interaction with the DSS user group. This eliminates, or at least minimizes, the need for DSS user interaction with the content specialists most familiar with micro-level tools of computer science, systems analysis, and operations research. Generally, a potential DSS user will seldom be able to identify or specify the requirements for a DSS initially. In such a situation, it is very advantageous to have a DSS generator that may be used by the DSS engineer, or developer, in order to obtain prototypes of the DSS. The user may then be encouraged to interact with the prototype in order to assist in identifying appropriate requirements specifications for the evolving DSS design.

The third level in this DSS design and development effort results from adding a decision support systems management capability. Often, this will take the form of the dialog generation and management subsystem (DGMS) referred to earlier, except perhaps at a more general level, since this is a DGMS for DSS design and engineering rather than a DGMS for a specific DSS. This DSS design approach is not unlike that advocated for the systems engineering of large-scale systems in general and DSS in particular.

There are many potential difficulties that affect the engineering of trustworthy systems. Among these are inconsistent, incomplete, and otherwise imperfect system requirements specifications; system requirements that do not provide for change as user needs evolve over time; and poorly defined management structures. The major difficulties associated with the production of trustworthy systems have more to do with the organization and management of complexity than with direct technological concerns. Thus, while it is necessary to have an appropriate set of quality technological methods and tools, it is also very important that they be used within a well-chosen set of life-cycle processes and a set of systems management strategies that guide the execution of these processes (Sage 1995; Sage and Rouse 1999a).

Because a decision support system is intended to be used by decision makers with varying experiential familiarity and expertise with respect to a particular task and decision situation, it is especially important that a DSS design consider the variety of issue representations, or frames, that decision makers may use to describe issues; the operations that may be performed on these representations to enable formulation, analysis, and interpretation of the decision situation; the automated memory aids that support retention of the various results of operations on the representations; and the control mechanisms that assist decision makers in using these representations, operations, and memory aids. A
exists elsewhere in the database because of data redundancy. With many people potentially using the same database, resource control is essential. The presence of these three features in a DBMS is the major factor that differentiates it from a file management system. The management system for a database is composed of the software that supports the creation, access, maintenance, and updating of a database. A database contains data that are of value to an individual or an organization and that the individual or organization desires to maintain. Maintenance should be such that the data survive even when the DBMS hardware and/or software fails. Maintenance of data is one of the important functions provided for in DBMS design.

There are many tasks that we desire to perform using a database. These five are especially important:

1. Capturing relevant data for use in the database
2. Selecting relevant data from the database
3. Aggregating data to form totals, averages, moments, and other items that support decision making
4. Estimating, forecasting, and predicting in order to obtain extrapolations of events into the future
5. Such other activities as optimizing, in order to enable selection of a best alternative

We note that these database tasks raise issues that are associated with the model base management system. In a similar way, the dialog generation and management system determines how data are viewed and is therefore also important for use of a DBMS.
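The selection and aggregation tasks in the list above are the everyday work of any DBMS. A sketch against a tiny in-memory table; the records and field names are made up for illustration:

```python
from statistics import mean

# A miniature "database": one record per order.
orders = [
    {"region": "East", "amount": 1200},
    {"region": "West", "amount": 800},
    {"region": "East", "amount": 400},
]

# Task 2: select relevant data from the database.
east = [o["amount"] for o in orders if o["region"] == "East"]

# Task 3: aggregate to form totals and averages that support decision making.
print(sum(east), mean(east))  # 1600 800
```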
2.1.
DBMS Design, Selection, and Systems Integration
As with any information systems engineering-based activity, DBMS design should start with an identification of the DBMS design situation and the user requirements. From this, identification of the logical or conceptual data requirements is followed by specification of a logical database structure in accordance with what is known as a data definition language (DDL). After this, the physical database structure is identified. This structure must take careful account of specific computer hardware characteristics and efficiency considerations. Given these design specifications, an operational DBMS is constructed and DBMS operation and maintenance efforts begin. The logical database design and physical database design efforts can each be disaggregated into a number of related activities. The typical database management system life cycle will generally follow one of the systems engineering development life cycles discussed in the literature (Sage 1992, 1995; Sage and Rouse 1999a). The three important DBMS requirements (data independence, redundancy reduction, and increased data resource control) are generally applicable both to logical data and to physical data. It is highly desirable, for example, to be able to change the structure of the physical database without affecting other portions of the DBMS. This is called physical data independence. In a similar way, logical data independence denotes the ability of software to function using a given applications-oriented perspective on the database even though changes in other parts of the logical structure have been made. The requirements specification, conceptual design, logical design, and physical design phases of the DBMS development life cycle are specifically concerned with satisfaction of these requirements. A number of questions need to be asked and answered successfully in order to design an effective DBMS. Among these are the following (Arden 1980): 1. Are there data models that are appropriate across a variety of applications? 2. What DBMS designs enable data models to support logical data independence? 3. What DBMS designs enable data models to support physical data independence, and what are the associated physical data transformations and manipulations? 4. What features of a data description language will enable a DBMS designer to control independently both the logical and physical properties of data? 5. What features need to be incorporated into a data description language in order to enable errors to be detected at the earliest possible time, such that users will not be affected by errors that occur prior to their personal use of the DBMS? 6. What are the relationships between data models and database security? 7. What are the relationships between data models and errors that may be caused by concurrent use of the database by many users? 8. What design principles will enable a DBMS to support a number of users having diverse and changing perspectives? 9. What are the appropriate design questions such that applications programmers, technical users, and DBMS operational users are each able to function effectively and efficiently?
118
TECHNOLOGY
Figure 6 Data Transforms and Model Mappings.
a data description language (DDL) and also describes the mapping that is desired
between source and target data. Figure 7 represents this general notion of data
transformation. This could, for example, represent target data in the form of a
table that is obtained from a source model composed of a set of lists. Figure 7
is a very simple illustration of the more general problem of mapping between va
rious schemas. Simply stated, a schema is an image used for comparison purposes.
A schema can also be described as a data structure for representing generic con
cepts. Thus, schemas represent knowledge about concepts and are structurally org
anized around some theme. In terms appropriate here, the user of a database must
interpret the real world that is outside of the database in terms of real-world
objects, or entities and relationships between entities, and activities that ex
ist and that involve these objects. The database user will interact with the dat
abase in order to obtain needed data for use or, alternatively, to store obtaine
d data for possible later use. But a DBMS cannot function in terms of real objec
ts and operations. Instead, a DBMS must use data objects, composed of data eleme
nts and relations between them, and operations on data objects. Thus, a DBMS use
r must perform some sort of mapping or transformation from perceived real objects and actions to the representations of those objects and actions that will be used in the physical database. The single-level data model, which is conceptually illustrated in Figure 7, represents the nature of the data objects and operations that t
he user understands when receiving data from the database. In this fashion the D
BMS user models the perceived real world. To model some action sequence, or the
impact of an action sequence on the world, the user maps these actions to a sequ
ence of operations that are allowed by the specific data model. It is the data mani
pulation language (DML) that provides the basis for the operations submitted to
the DBMS as a sequence of queries or programs. The development of these schemas,
which represent logical data, results in a DBMS architecture or DBMS framework,
which describes the types of schemas that are allowable and the way in which th
ese schemas are related through various mappings. We could, for example, have a
two-schema framework or a three-schema framework, which appears to be the most p
opular representation at this time. Here
Figure 7 Single-Level Model of Data Schema.
fields and records that are allowed in the database. Examples of data structures include lists, tables, hierarchies, and networks. 2. A set of operations that define the admissible manipulations that are applied to the fields and records that comprise the data structures. Examples of operations include retrieve, combine, subtract, add, and update. 3. A set of integrity rules that define or constrain allowable or legal states or changes of state for the data structures that must be prote
cted by the operations. There are three generically different types of data mode
l representations and several variants within these three representations. Each
of these is applicable as an internal, external, or conceptual model. We will be
primarily concerned with these three modeling representations as they affect th
e
external, or logical, data model. This model is the one with which the user of a
specific decision support system interfaces. For use in a decision support system
generator, the conceptual model is of importance because it is the model that influences the specific external model that various users of a DSS will interface with a
fter an operational DSS is realized. This does not mean that the internal data m
odel is unimportant. It is very important for the design of a DBMS. Through its
data structures, operations, and integrity constraints, the data model controls
the operation of the DBMS portion of the DSS. The three fundamental data models
for use in the external model portion of a DSS are record-based models, structurally based models, and expert system-based models. Each of these will now be briefly described.
2.2.1.
Record-Based Models
We are very accustomed to using forms and reports, often prepared in a standard
fashion for a particular application. Record-based models are computer implement
ations of these spreadsheet-like forms. Two types can be identified. The first of these, common in the early days of file processing systems (FPSs) or file management systems (FMSs), is the individual record model. This is little more than an electronic file drawer in which records are stored. It is useful for a great many applications. More sophisticated, however, is the relational database data model, in which mathematical relations are used to electronically cut and paste reports from a variety of files. Relational database systems have been developed to a considerable degree of sophistication, and many commercial products are available. Oracle and Informix are two leading providers. The individual record model is surely the oldest data model representation. While the simple single-record tables characteristic of this model may appear quite appealing, the logic operations and integrity constraints that need to be associated with the data structure are often undefined and are perhaps not easily defined. Here, the data structure is simply a set of records, with each record consisting of a set of fields. When there is more than a single type of record, one field contains a value that indicates what the other fields in the record are named. The relational model is a modification of the individual record model that limits its data structures and thereby provides a mathematical basis for operation on records. Data structures in a relational database may consist only of relations, or field sets that are related. Every relation may be considered as a table. Each row in the table is a record or tuple. Every column in each table or row is a field or attribute. Each field or attribute has a domain that defines the admissible values for that field. Often there is only a modest difference in structure between this relational model and the individual record model. One difference is that fields in the various records of the individual record model represent relationships, whereas relationships among fields or attributes in a relational model are denoted by the name of the relation. While the structural differences between the relational model and the individual record model are minimal, there are major differences in the way in which the integrity constraints and operations may affect the database. The operations in a relational database form a set of operations that are defined mathematically. The operations in a relational model must operate on entire relations rather than only on individual records, or tuples. The operations in a relational database are independent of the data structures, and therefore do not depend on the specific order of the records or the fields. There is often controversy about whether or not a DBMS is truly relational. While there are very formal definitions of a relational database, a rather informal one is sufficient here. A relational database is one described by the following statements. 1. Data are presented in tabular fashion without the need for navigation links, or pointer structures, between various tables. 2. A relational algebra exists
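A minimal sketch of this relational view follows, with relations as collections of records and set-defined operations (selection, projection, join) that act on whole relations rather than on individual records; the relation and attribute names are illustrative, not from the text:

```python
# Relations are collections of records (dicts). Each operation takes whole
# relations as operands and returns a new relation, independent of the
# order of records or fields, in the spirit of a relational algebra.

def select(relation, predicate):
    """Restriction: keep only the tuples satisfying a predicate."""
    return [row for row in relation if predicate(row)]

def project(relation, attrs):
    """Projection: keep only the named attributes, dropping duplicates."""
    seen, out = set(), []
    for row in relation:
        key = tuple(row[a] for a in attrs)
        if key not in seen:
            seen.add(key)
            out.append(dict(zip(attrs, key)))
    return out

def join(r, s, attr):
    """Natural join on a single shared attribute."""
    return [{**a, **b} for a in r for b in s if a[attr] == b[attr]]

employees = [{"emp": "lee", "dept": 1}, {"emp": "kim", "dept": 2}]
depts = [{"dept": 1, "name": "design"}, {"dept": 2, "name": "test"}]

print(select(employees, lambda r: r["dept"] == 2))  # [{'emp': 'kim', 'dept': 2}]
print(project(employees, ["dept"]))                 # [{'dept': 1}, {'dept': 2}]
print(join(employees, depts, "dept")[0])
```

Note that the join requires no navigation links or pointers between the two tables; it is defined purely by matching attribute values, which is the informal criterion stated above.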
2.2.2.
Structural Models
development of the data dictionary. The primary difficulties that may impede easy u
se of the ER method are in the need for selection of appropriate verbs for use a
s contextual relationships, elimination of redundancy of entities, and assurance
s that all of the needed ER have been determined and properly used. Often a cons
iderable amount of time may be needed to identify an appropriate set of entities
and relations and obtain an appropriate structural model in the form of an ER d
iagram.
2.2.3.
Expert and Object-Oriented Database Models
An expert database model, as a specialized expert database system, involves the
use of artificial intelligence technology. The goal in this is to provide for database functionality in more complex environments that require at least some limited form of intelligent capability. To allow this may require adaptive identification
of system characteristics, or learning over time as experience with an issue an
d the environment into which it is embedded increases. The principal approach th
at has been developed to date is made up of an object-oriented database (OODB) d
esign with any of the several knowledge representation approaches useful in expe
rt system development. The field is relatively new compared to other database efforts and is described much more fully in Kerschberg (1987, 1989), Mylopoulos and
Brodie (1989), Parsaye et al. (1989), Debenham (1998), Poe et al. (1998), and Da
rwen and Date (1998). The efforts to date in the object-oriented area concern wh
at might be more appropriately named object-oriented database management systems
design (OODBMS). The objective in OODBMS design is to cope with database comple
xity, potentially for large distributed databases, through a combination of obje
ct-oriented programming and expert systems technology. Object-oriented design ap
proaches generally involve notions of separating internal computer representatio
ns of elements from the external realities that lead to the elements. A primary
reason for using object-oriented language is that it naturally enables semantic
representations of knowledge through use of 10 very useful characteristics of ob
ject-oriented approaches (Parsaye et al. 1989): information hiding, data abstrac
tion, dynamic binding and object identity, inheritance, message handling, object-oriented graphical interfaces, transaction management, reusability, partitionin
g or dividing or disaggregating an issue into parts, and projecting or describin
g a system from multiple perspectives or viewpoints. These could include, for ex
ample, social, political, legal, and technoeconomic perspectives (Sage 1992). Th
ere are at least two approaches that we might use in modeling a complex large-sc
ale system: (1) functional decomposition and structuring and (2) purposeful or o
bject decomposition and structuring. Both approaches are appropriate and may pot
entially result in useful models. Most models of real-world phenomena tend to be
purposeful. Most conventional high-level programming languages are functionally
, or procedurally, oriented. To use them, we must write statements that correspo
nd to the functions that we wish to provide in order to solve the problem that h
as been posed. An advantage of object decomposition and structuring is that it e
nables us to relate more easily the structure of a database model to the structu
re of the real system. This is the case if we accomplish our decomposition and s
tructuring such that each module in the system or issue model represents an obje
ct or a class of objects in the real issue or problem space. Objects in object-o
riented methodology are not unlike elements or nodes in graph theory and structu
ral modeling. It is possible to use one or more contextual relations to relate e
lements together in a structural model. An object may be defined as a collection of
information and those operations that can be performed upon it. We request an o
bject to perform one of its allowable operations by instructing it with a messag
e. Figure 9 illustrates the major conceptual difference between using a conventi
design techniques provide a natural interface to expert system-based techniques
for DBMS design. This is so because objects, in object-oriented design, are prov
ided with the ability to store information such that learning over time can occu
r; process information by initiating actions in response to messages; compute an
d communicate by sending messages between objects; and generate new information
as a result of communications and computations. Object-oriented design also lend
s itself to parallel processing and distributed environments. Object-oriented de
sign and analysis is a major subject in contemporary computer and information sy
stem subject areas and in many application domains as well (Firesmith 1993; Sull
o 1994; Jacobson et al. 1995; Zeigler 1997).
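The notion above of an object as a collection of information together with the operations that may be performed on it, invoked by sending messages, can be sketched as follows; the class and operation names are illustrative assumptions, not from the text:

```python
# Sketch: an object bundles information with its allowable operations, and
# callers interact only by sending messages naming an operation. The
# inventory-item example is illustrative.

class InventoryItem:
    def __init__(self, name, quantity):
        self._name = name          # information hiding: state is internal
        self._quantity = quantity

    # Allowable operations on the object's information.
    def issue(self, n):
        if n > self._quantity:
            raise ValueError("insufficient stock")
        self._quantity -= n

    def receive(self, n):
        self._quantity += n

    def on_hand(self):
        return self._quantity

    def handle(self, message, *args):
        """Message handling: dispatch to the named operation."""
        return getattr(self, message)(*args)

item = InventoryItem("bolt", 100)
item.handle("issue", 30)
item.handle("receive", 5)
print(item.handle("on_hand"))  # 75
```

The caller never manipulates the stored quantity directly; every change passes through an operation the object itself defines, which is the information-hiding property cited above.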
2.3.
Distributed and Cooperative Databases
We have identified the three major objectives for a DBMS as data independence, data
redundancy reduction, and data resource control. These objectives are important
for a single database. When there are multiple databases potentially located in
a distributed geographic fashion, and potentially many users of one or more dat
abases, additional objectives arise, including: 1. Location independence or tran
sparency to enable DBMS users to access applications across distributed informat
ion bases without the need to be explicitly concerned with where specific data is l
ocated 2. Advanced data models, to enable DBMS users to access potentially nonco
nventional forms of data such as multidimensional data, graphic data, spatial da
ta, and imprecise data 3. Extensible data models, which will allow new data type
s to be added to the DBMS, perhaps in an interactive real-time manner, as requir
ed by specific applications. An important feature of a distributed database management system (DDBMS) is that provisions for database management are themselves distributed. Only then can we obtain the needed fail-safe reliability and availability even when a portion of the system breaks down. There are a number of reasons why distributed databases may be desirable; these are primarily related to distributed users and to cost savings. Some additional costs are also involved, and these must be justified. A distributed database management system will generally look much
like replicated versions of a more conventional single-location database managem
ent system. We can thus imagine replicated versions of Figure 7. For simplicity,
we show only the physical database, the database access interface mechanism, an
d the data dictionary for each database. Figure 10 indicates one possible concep
tual
Figure 10 Prototypical Structure of Model Management System.
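The location-transparency objective noted above can be sketched as a thin client layer that consults a placement policy hidden from the user, so that no query ever names the node holding the data; the node structure and keys here are illustrative assumptions:

```python
# Sketch of location transparency in a distributed DBMS: the client asks
# the system for a key, and the system privately decides which node holds
# it (here, simple hash partitioning). Names are illustrative.

class Node:
    def __init__(self):
        self.store = {}

class DistributedDB:
    def __init__(self, n_nodes):
        self.nodes = [Node() for _ in range(n_nodes)]

    def _locate(self, key):
        # Placement policy is hidden from the user.
        return self.nodes[hash(key) % len(self.nodes)]

    def put(self, key, value):
        self._locate(key).store[key] = value

    def get(self, key):
        return self._locate(key).store[key]

db = DistributedDB(n_nodes=3)
db.put("part-42", {"qty": 7})
print(db.get("part-42"))  # {'qty': 7} -- the caller never names a node
```

Changing the placement policy (for example, to replicate each record on two nodes for the fail-safe reliability discussed above) would not change the client-facing put/get interface at all.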
1986). A DSS user would interact directly with a decision processing MBMS, where
as the model processing MBMS would be more concerned with provision of consisten
cy, security, currency, and other technical modeling issues. Each of these suppo
rts the notion of appropriate formal use of models that support relevant aspects
of human judgment and choice. Several ingredients are necessary for understandi
ng MBMS concepts and methods. The first of these is a study of formal analytical me
thods of operations research and systems engineering that support the constructi
on of models that are useful in issue formulation, analysis, and interpretation.
Because presenting even a small fraction of the analytical methods and associat
ed models in current use would be a mammoth undertaking, we will discuss models
in a somewhat general context. Many discussions of decision relevant models can
be found elsewhere in this Handbook, most notably in Chapters 83 through 102. Th
ere are also many texts in this area, as well as two recent handbooks (Sage and
Rouse 1999a; Gass and Harris 2000).
3.1.
Models and Modeling
In this section, we present a brief description of a number of models and method
s that can be used as part of a systems engineering-based approach to problem so
lution or issue resolution. Systems engineering (Sage 1992, 1995; Sage and Rouse
1999a; Sage and Armstrong 2000) involves the application of a general set of gu
idelines and methods useful to assist clients in the resolution of issues and pr
oblems, often through the definition, development, and deployment of trustworthy sy
stems. These may be product systems or service systems, and users may often depl
oy systems to support process-related efforts. Three fundamental steps may be di
stinguished in a formal systems-based approach that is associated with each of th
ese three basic phases of system engineering or problem solving: 1. Problem or i
ssue formulation 2. Problem or issue analysis 3. Interpretation of analysis resu
lts, including evaluation and selection of alternatives, and implementation of t
he chosen alternatives These steps are conducted at a number of phases throughou
t a systems life cycle. As we indicated earlier, this life cycle begins with definition of requirements for a system, through a phase where the system is developed, to a final phase where deployment, or installation, and ultimate system maintenance and retrofit occur. Practically useful life cycle processes generally involve many
more than three phases, although this larger number can generally be aggregated
into the three basic phases of definition, development, and deployment. The actual engineering of a DSS follows these three phases of definition of user needs, devel
opment of the DSS, and deployment in an operational setting. Within each of thes
e three phases, we exercise the basic steps of formulation, analysis, and interp
retation. Figure 11 illustrates these three steps and phases and much more detai
l is presented in Sage (1992, 1995), Sage and Rouse (1999a), Sage and Armstrong
(2000), and in references contained therein.
3.1.1.
Issue, or Problem, Formulation Models
The first part of a systems effort for problem or issue resolution is typically concerned with problem or issue formulation, including identification of problem elements and characteristics. The first step in issue formulation is generally that of definition of the problem or issue to be resolved. Problem
Figure 11 Basic Phases and Steps in Systems Engineering.
m observation of an existing and evolving system or from study of plans and othe
r prescriptive documents. When these three approaches fail, it may be necessary
to construct a trial system and determine issue formulation elements through experim
entation and iteration with the trial system. These four methods (asking, study
of an existing system, study of a normative system, experimentation with a prot
otype system) are each very useful for information and system requirements deter
mination (Davis 1982).
3.1.2.
Models for Issue Analysis
The analysis portion of a DSS effort typically consists of two steps. First, the
options or alternatives defined in issue formulation are analyzed to assess their expected impacts on needs and objectives. This is often called impact assessment or impact analysis. Second, a refinement or optimization effort
is often desirable. This is directed towards refinement or fine-tuning a potentially v
iable alternative through adjustment of the parameters within that alternative s
o as to obtain maximum performance in terms of needs satisfaction, subject to th
e given constraints. Simulation and modeling methods are based on the conceptual
ization and use of an abstraction, or model, that ideally behaves in a way similar to the real system. Impacts of alternative courses of action are studied th
rough use of the model, something that often cannot easily be done through exper
imentation with the real system. Models are, of necessity, dependent on the valu
e system and the purpose behind utilization of a model. We want to be able to de
termine the correctness of predictions based on usage of a model and thus be abl
e to validate the model. There are three essential steps in constructing a model
: 1. Determine those issue formulation elements that are most relevant to a part
icular problem. 2. Determine the structural relationships among these elements.
3. Determine parametric coefficients within the structure. We should interpret the
word model here as an abstract generalization of an object or system. Any set of rul
es and relationships that describes something is a model of that thing. The MBMS
of a DSS will typically contain formal models that have been stored into the mo
del base of the support system. Much has been published in the area of simulatio
n realization of models, including a recent handbook (Banks 1998). Gaming method
s are basically modeling methods in which the real system is simulated by people
who take on the roles of real-world actors. The approach may be very appropriat
e for studying situations in which the reactions of people to each other's action
s are of great importance, such as competition between individuals or groups for
limited resources. It is also a very appropriate learning method. Conflict analysi
s (Fraser and Hipel 1984; Fang et al. 1993) is an interesting and appropriate ga
me theory-based approach that may result in models that are particularly suitabl
e for inclusion into the model base of an MBMS. A wealth of literature exists concerning formal approaches to mathematical games (Dutta 1999). Trend extrapolation or ti
me series forecasting models, or methods, are particularly useful when sufficient d
ata about past and present developments are available, but there is little theor
y about underlying mechanisms causing change. The method is based on the identification of a mathematical description or structure that will be capable of reproduc
ing the data. Then this description is used to extend the data series into the f
uture, typically over the short to medium term. The primary concern is with input-output matching of observed input data and results of model use. Often little attention is devoted to assuring process realism, and this may create difficulties af
fecting model validity. While such models may be functionally valid, they may no
t be purposefully or structurally valid. Continuous-time dynamic-simulation mode
ls, or methods, are generally based on postulation and qualification of a causal structure underlying change over time. A computer is used to explore long-range beh
avior as it follows from the postulated causal structure. The method can be very
useful as a learning and qualitative forecasting device. Often it is expensive
and time consuming to create realistic dynamic simulation models. Continuous-tim
e dynamic models are quite common in the physical sciences and in much of engine
ering (Sage and Armstrong 2000). Input-output analysis models are especially des
igned for study of equilibrium situations and requirements in economic systems i
n which many industries are interdependent. Many economic data formats are direc
tly suited for the method. It is relatively simple conceptually, and can cope wi
th many details. Input-output models are often very large. Econometric or macroec
onomic models are primarily applied to economic description and forecasting prob
lems. They are based on both theory and data. Emphasis is placed on specification of structural relations, based upon economic theory, and the identification of unkno
wn parameters, using available data, in the behavioral equations. The method req
uires expertise in economics, statistics, and computer use. It can be quite expe
nsive and time consuming. Macroeconomic models have been widely used for short- to medium-term economic analysis and forecasting. Queuing theory and discrete-ev
ent simulation models are often used to study, analyze, and forecast the behavio
r of systems in which probabilistic phenomena, such as waiting lines, are of imp
ortance. Queuing theory is a mathematical approach, while discrete-event simulat
ion generally refers to computer simulation of queuing theory type models. The t
wo methods are widely used in the analysis and design of systems such as toll bo
oths, communication networks, service facilities, shipping terminals, and schedu
ling. Regression analysis models and estimation theory models are very useful fo
r the identification of mathematical relations and parameter values in these relati
ons from sets of data or measurements. Regression and estimation methods are use
d frequently in conjunction with mathematical modeling, in particular with trend
extrapolation and time series forecasting, and with econometrics. These methods
are often also used to validate models. Often these approaches are called system identification.
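As a sketch of the trend extrapolation idea described above, an ordinary least-squares line can be fit to a short series of past observations and extended one period ahead; the data values are illustrative, not from the text:

```python
# Fit a straight-line trend y = a + b*t by least squares and extrapolate
# one period into the future. Real use would also validate the fitted
# model against held-out data, as the text cautions.

def fit_line(ts, ys):
    n = len(ts)
    t_bar = sum(ts) / n
    y_bar = sum(ys) / n
    slope = (sum((t - t_bar) * (y - y_bar) for t, y in zip(ts, ys))
             / sum((t - t_bar) ** 2 for t in ts))
    intercept = y_bar - slope * t_bar
    return slope, intercept

ts = [1, 2, 3, 4, 5]                  # past periods
ys = [10.0, 12.1, 13.9, 16.2, 18.0]   # observed values
b, a = fit_line(ts, ys)
forecast = a + b * 6                  # extrapolate one period ahead
print(round(b, 2), round(forecast, 1))  # 2.01 20.1
```

This illustrates the text's point about input-output matching: the fit reproduces the observed data well, but nothing in the procedure guarantees that the underlying mechanism of change is captured.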
A number of capabilities should be provided by an integrated and shared MBMS of a DSS (Barbosa and Herko 1980; Liang 1985): model construction, model maintenance, model storage, model manipulation, and model access (Applegate et al. 1986). These i
nvolve control, flexibility, feedback, interface, redundancy reduction, and increas
ed consistency: 1. Control: The DSS user should be provided with a spectrum of c
ontrol. The system should support both fully automated and manual selection of m
odels that seem most useful to the user for an intended application. This will e
nable the user to proceed at the problem-solving pace that is most comfortable g
iven the user's experiential familiarity with the task at hand. It should be possi
ble for the user to introduce subjective information and not have to provide ful
l information. Also, the control mechanism should be such that the DSS user can
obtain a recommendation for action with this partial information at essentially
any point in the problem-solving process. 2. Flexibility: The DSS user should be
able to develop part of the solution to the task at hand using one approach and
then be able to switch to another modeling approach if this appears preferable.
Any change or modification in the model base will be made available to all DSS users. 3. Feedback: The MBMS of the DSS should provide sufficient feedback to enable th
e user to be aware of the state of the problem-solving process at any point in t
ime. 4. Interface: The DSS user should feel comfortable with the specific model fro
m the MBMS that is in use at any given time. The user should not have to supply
inputs laboriously when he or she does not wish to do this. 5. Redundancy reduct
ion: This should occur through use of shared models and associated elimination o
f redundant storage that would otherwise be needed. 6. Increased consistency: Th
is should result from the ability of multiple decision makers to use the same mo
del and the associated reduction of inconsistency that would have resulted from
use of different data or different versions of a model. In order to provide thes
e capabilities, it appears that a MBMS design must allow the DSS user to: 1. Acc
ess and retrieve existing models 2. Exercise and manipulate existing models, inc
luding model instantiation, model selection, and model synthesis, and the provis
ion of appropriate model outputs 3. Store existing models, including model repre
sentation, model abstraction, and physical and logical model storage 4. Maintain
existing models as appropriate for changing conditions 5. Construct new models
with reasonable effort when they are needed, usually by building new models by u
sing existing models as building blocks A number of auxiliary requirements must
be achieved in order to provide these five capabilities. For example, there must be
appropriate communication and data changes among models that have been combined
. It must also be possible to locate appropriate data from the DBMS and transmit
it to the models that will use it. It must also be possible to analyze and inte
rpret the results obtained from using a model. This can be accomplished in a num
ber of ways. In this section, we will examine two of them: relational MBMS and e
xpert system control of an MBMS. The objective is to provide an appropriate set
of models for the model base and appropriate software to manage the models in th
e model base; integration of the MBMS with the DBMS; and integration of the MBMS
with the DGMS. We can expand further on each of these needs. Many of the techni
cal capabilities needed for a MBMS will be analogous to those needed for a DBMS.
These include model generators that will allow rapid building of specific models, model modification tools that will enable a model to be restructured easily on the
basis of changes in the task to be accomplished, update capability that will ena
ble changes in data to be input to the model, and report generators that will en
able rapid preparation of results from using the system in a form appropriate fo
r human use. Like a relational view of data, a relational view of models is base
d on a mathematical theory of relations. Thus, a model is viewed as a virtual file or virtual relation: it is a subset of the Cartesian product of the domain sets that correspond to its input and output attributes. This virtual file is created,
ideally, through exercising the model with a wide spectrum of inputs. These val
ues of inputs and the associated outputs become records in the virtual file. The in
put data become key attributes and the model output data become content attribut
es. Model base structuring and organization is very important for appropriate re
lational model management. Records in the virtual file of a model base are not indi
vidually updated, however, as they
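The relational view of models just described can be sketched as follows: a model is exercised over a spectrum of inputs, and each input-output pair becomes a record in a virtual file keyed by the inputs. The revenue model and its attribute names are illustrative assumptions, not from the text:

```python
# Sketch: exercising a model over a grid of inputs populates a "virtual
# file" whose key attributes are the inputs and whose content attribute
# is the model output. The model itself is an illustrative stand-in.

def model(price, volume):
    return price * volume * 0.9   # illustrative revenue model, 10% cost

virtual_file = {}
for price in (10, 20):            # key attribute 1
    for volume in (100, 200):     # key attribute 2
        virtual_file[(price, volume)] = model(price, volume)  # content

# The MBMS can now answer model queries as simple record lookups:
print(virtual_file[(20, 100)])   # 1800.0
```

Once the virtual file exists, querying the model base looks just like querying a relational database, which is the point of treating a model as a virtual relation.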
Bennett refers to these elements as the presentation language, the knowledge bas
e, and the action language. It is generally felt that there are three types of l
anguages or modes of human communications: words, mathematics, and graphics. The
presentation and action languages, and the knowledge base in general, may cont
ain any or all of these three language types. The mix of these that is appropria
te for any given DSS task will be a function of the task itself, the environment
into which the task is embedded, and the nature of the experiential familiarity
of the person performing the task with the task and with the environment. This
is often called the contingency task structure. The DSS, when one is used, becom
es a fourth ingredient, although it is really much more of a vehicle supporting
effective use of words, mathematics, and graphics than it is a separate fourth i
ngredient. Notions of DGMS design are relatively new, especially as a separately
identi ed portion of the overall design effort. To be sure, user interface design
is not at all new. However, the usual practice has been to assign the task of u
ser interface design to the design engineers responsible for the entire system.
In the past, user interfaces were not given special attention. They were merely
viewed as another hardware and software component in the system. System designer
s were often not particularly familiar with, and perhaps not even especially int
erested in, the user-oriented design perspectives necessary to produce a success
ful interface design. As a result, many user interface designs have provided mor
e what the designer wanted than what the user wanted and needed. Notions of dial
og generation and dialog management extend far beyond interface issues, although
the interface is a central concern in dialog generation and dialog management.
A number of discussions of user interface issues are contained in Part IIIB of t
his Handbook. Figure 14 illustrates an attribute tree for interface design based
on the work of Smith and Mosier (1986). This attribute tree can be used, in con
junction with the evaluation methods of decision analysis, to evaluate the effec
tiveness of interface designs. There are other interface descriptions, some of w
hich are less capable of instrumental measurement than these. On the basis of a
thorough study of much of the humancomputer interface and dialog design literatur
e, Schneiderman (1987) has identi ed eight primary objectives, often called the gold
en rules for dialog design: 1. Strive for consistency of terminology, menus, promp
ts, commands, and help screens. 2. Enable frequent users to use shortcuts that t
ake advantage of their experiential familiarity with the computer system. 3. Off
er informative feedback for every operator action that is proportional to the si
gni cance of the action. 4. Design dialogs to yield closure such that the system u
ser is aware that speci c actions have been concluded and that planning for the ne
xt set of activities may now take place. 5. Offer simple error handling such tha
t, to the extent possible, the user is unable to make a mistake. Even when mista
kes are made, the user should not have to, for example, retype an entire command
entry line. Instead, the user should be able to just edit the portion that is i
ncorrect. 6. Permit easy reversal of action such that the user is able to interr
upt and then cancel wrong commands rather than having to wait for them to be ful
ly executed.
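Rule 6, easy reversal of action, is commonly realized with a command history in which each action records how to undo itself. The minimal sketch below is an illustration of that design idea, not an example drawn from the Handbook itself.

```python
class ReversibleEditor:
    """Minimal command history: each action pushes its own undo step."""
    def __init__(self):
        self.text = ""
        self.history = []  # stack of undo actions

    def insert(self, s):
        n = len(s)
        self.text += s
        # Record an undo action that removes the inserted characters.
        self.history.append(lambda: setattr(self, "text", self.text[:-n]))

    def undo(self):
        if self.history:  # reversal is always available, never an error
            self.history.pop()()

ed = ReversibleEditor()
ed.insert("hello ")
ed.insert("world")
ed.undo()        # cancels only the last action
print(ed.text)   # prints "hello "
```

The essential property is that the user can interrupt and cancel the most recent action at any time without losing earlier work.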
Figure 13 Dialog Generation and Management System Architecture.
Two of Grudin's observations relevant to interface consistency are that ease of learning can conflict with ease of use, especially as experiential familiarity with the interface grows, and that consistency can work against both ease of use and learning. On the basis of some experiments illustrating these hypotheses, he establishes the three appropriate dimensions for consistency above. A number of characteristics are desirable for user interfaces. Roberts and Moran (1982) identify the most important attributes of text editors as functionality of the editor, learning time required, time required to perform tasks, and errors committed. To this might be added the cost of evaluation. Harrison and Hix (1989) identify usability, completeness, extensibility, escapability, integration, locality of definition, structured guidance, and direct manipulation as well in their more general study of user interfaces. They also note a number of tools useful for interface development, as does Lee (1990). Ravden and Johnson (1989) evaluate usability of human-computer interfaces. They identify nine top-level attributes: visual clarity, consistency, compatibility, informative feedback, explicitness, appropriate functionality, flexibility and control, error prevention and correction, and user guidance and support. They disaggregate each into a number of more measurable attributes. These attributes can be used as part of a standard multiple-attribute evaluation. A goal in DGMS design is to define an abstract user interface that can be implemented on specific operating systems in different ways. The purpose of this is to allow for device independence such that, for example, switching from a command line interface to a mouse-driven pull-down-menu interface can be easily accomplished. Separating the application from the user interface should do much towards ensuring portability across operating systems and hardware platforms without modifying the MBMS and the DBMS, which together comprise the applications software portions of the DSS.
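The standard multiple-attribute evaluation mentioned above can be sketched as a weighted sum over attribute scores. The attribute names below follow Ravden and Johnson's nine top-level attributes; the weights and scores themselves are purely illustrative assumptions.

```python
# Weighted-sum multiple-attribute evaluation of candidate interface designs.
ATTRIBUTES = ["visual clarity", "consistency", "compatibility",
              "informative feedback", "explicitness",
              "appropriate functionality", "flexibility and control",
              "error prevention and correction", "user guidance and support"]

def evaluate(weights, scores):
    """weights: attribute -> importance (summing to 1); scores: attribute -> 0..10."""
    return sum(weights[a] * scores[a] for a in ATTRIBUTES)

weights = {a: 1 / len(ATTRIBUTES) for a in ATTRIBUTES}   # equal importance
design_a = {a: 7 for a in ATTRIBUTES}
design_b = dict(design_a, **{"error prevention and correction": 9})

print(round(evaluate(weights, design_a), 2))  # 7.0
print(evaluate(weights, design_b) > evaluate(weights, design_a))  # True
```

In a fuller decision-analytic evaluation, the weights would be elicited from users and each top-level attribute would itself be scored from its more measurable subattributes.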
5.
GROUP AND ORGANIZATIONAL DECISION SUPPORT SYSTEMS
A group decision support system (GDSS) is an information technology-based support system designed to provide decision making support to groups and / or organizations. This could refer to a group meeting at one physical location at which judgments and decisions are made that affect an organization or group. Alternatively, it could refer to a spectrum of meetings of one or more individuals, distributed in location, time, or both. GDSSs are often called organizational decision support systems, and other terms are often used, including executive support systems (ESSs), which are information technology-based systems designed to support executives and managers, and command and control systems, a term often used in the military for a decision support system. We will generally use GDSS to describe all of these. Managers and other knowledge workers spend much time in meetings. Much research into meeting effectiveness suggests that it is low, and proposals have been made to increase it through information technology support (Johansen 1988). Specific components of this information technology-based support might include computer hardware and software, audio and video technology, and communications media. There are three fundamental ingredients in this support concept: technological support facilities, the support processes provided, and the environment in which they are embedded. Kraemer and King (1988) provide a noteworthy commentary on the need for group efforts in their overview of GDSS efforts. They suggest that group activities are economically necessary, efficient as a means of production, and reinforcing of democratic values. There are a number of predecessors for group decision support technology. Decision rooms, or situation rooms, where managers and boards meet to select from alternative plans or courses of action, are very common. The first computer-based decision support facility for group use is attributed to Douglas C. Engelbart, the inventor of the (computer) mouse, at Stanford in the 1960s. A discussion of this and other early support facilities is contained in Johansen (1988). Engelbart's electronic boardroom-type design is acknowledged to be the first type of information technology-based GDSS. The electronic format was, however, preceded by a number of nonelectronic formats. The Cabinet War Room of Winston Churchill is perhaps the most famous of these. Maps placed on the wall and tables for military decision makers were the primary ingredients of this room. The early 1970s saw the introduction of a number of simple computer-based support aids into situation rooms. The first system that resembles the GDSS in use today is often attributed to Gerald Wagner, the Chief Executive Officer of Execucom, who implemented a planning laboratory made up of a U-shaped table around which people sat, a projection TV system for use as a public viewing screen, individual small terminals and keyboards available to participants, and a minicomputer to which the terminals and keyboards were connected. This enabled participants to vote and to conduct simple spreadsheet-like exercises. Figure 15 illustrates the essential features of this concept. Most present-day GDSS centralized facilities look much like the conceptual illustration of a support room, or situation room, shown in this figure. As with a single-user DSS, appropriate questions for a GDSS that have major implications for design concern the perceptions and insights that the group obtains through use of the GDSS and the
lead directly to that in which expert skills (holistic reasoning), rules (heuristics), or formal reasoning (holistic analysis) are normatively used for judgment. Generally, operational performance decisions are much more likely than strategic planning decisions to be prestructured. This gives rise to a number of questions concerning efficiency and effectiveness tradeoffs between training and aiding (Rouse 1991) that occur at these levels. There are a number of human abilities that a GDSS should augment.
1. It should help the decision maker to formulate, frame, or assess the decision situation. This includes identifying the salient features of the environment, recognizing needs, identifying appropriate objectives by which we are able to measure successful resolution of an issue, and generating alternative courses of action that will resolve the needs and satisfy objectives.
2. It should provide support in enhancing the abilities of the decision maker to obtain and analyze the possible impacts of the alternative courses of action.
3. It should have the capability to enhance the decision maker's ability to interpret these impacts in terms of objectives. This interpretation capability will lead to evaluation of the alternatives and selection of a preferred alternative option.
Associated with each of these three formal steps of formulation, analysis, and interpretation must be the ability to acquire, represent, and utilize information and associated knowledge and the ability to implement the chosen alternative course of action. Many attributes will affect the quality and usefulness of the information that is obtained, or should be obtained, relative to any given decision situation. These variables are very clearly contingency task dependent. Among these attributes are the following (Keen and Scott Morton 1978):
Inherent and required accuracy of available information: Operational control and performance situations will often deal with information that is relatively accurate. The information in strategic planning and management control situations is often inaccurate.
Inherent precision of available information: Generally, information available for operational control and operational performance decisions is very imprecise.
Inherent relevancy of available information: Operational control and performance situations will often deal with information that is fairly relevant to the task at hand because it has been prepared that way by management. The information in strategic planning and management control situations is often obtained from the external environment and may be irrelevant to the strategic tasks at hand, although it may not initially appear this way.
Inherent and required completeness of available information: Operational control and performance situations will often deal with information that is relatively complete and sufficient for operational performance. The information in strategic planning and management control situations is often very incomplete and insufficient to enable great confidence in strategic planning and management control.
Inherent and required verifiability of available information: Operational control and performance situations will often deal with information that is relatively verifiable to determine correctness for the intended purpose. The information in strategic planning and management control situations is often unverifiable, or relatively so, and this gives rise to a potential lack of confidence in strategic planning and management control.
Inherent and requ
implementation. The model is sufficiently general that it can be used to represent logical reasoning in a number of application areas. Toulmin assumes that whenever we make a claim, there must be some ground on which to base our conclusion. He states that our thoughts are generally directed, in an inductive manner, from the grounds to the claim, each of which is a statement that may be used to express both facts and values. As a means of explaining observed patterns of stating a claim, there must be some reason that can be identified with which to connect the grounds and the claim. This connection, called the warrant, gives the grounds-claim connection its logical validity. We say that the grounds support the claim on the basis of the existence of a warrant that explains the connection between the grounds and the claim. It is easy to relate the structure of these basic elements with the process of inference, whether inductive or deductive, in classical logic. The warrants are the set of rules of inference, and the grounds and claim are the set of well-defined propositions or statements. It will be only the sequence and procedures, as used to formulate the three basic elements and their structure in a logical fashion, that will determine the type of inference that is used. Sometimes, in the course of reasoning about an issue, it is not enough that the warrant be the absolute reason to believe the claim on the basis of the grounds. For that, there is a need for further backing to support the warrant. It is this backing that provides for reliability, in terms of truth, associated with the use of a warrant. The relationship here is analogous to the way in which the grounds support the claim. An argument will be valid and will give the claim solid support only if the warrant is relied upon and is relevant to the particular case under examination. The concept of logical validity of an argument seems to imply that we can make a claim only when both the warrant and the grounds are certain. However, imprecision and uncertainty in the form of exceptions to the rules or low degrees of certainty in both the grounds and the warrant do not prevent us on occasion from making a hedge or, in other words, a vague claim. Often we must arrive at conclusions on the basis of something less than perfect evidence, and we put those claims forward not with absolute and irrefutable truth but with some doubt or degree of speculation. To allow for these cases, modal qualifiers and possible rebuttals may be added to this framework for logical reasoning. Modal qualifiers refer to the strength or weakness with which a claim is made. In essence, every argument has a certain modality. Its place in the structure presented so far must reflect the generality of the warrants in connecting the grounds to the claim. Possible rebuttals, on the other hand, are exceptions to the rules. Although modal qualifiers serve the purpose of weakening or strengthening the validity of a claim, there may still be conditions that invalidate either the grounds or the warrants, and this will result in deactivating the link between the claim and the grounds. These cases are represented by the possible rebuttals. The resulting structure of logical reasoning provides a very useful framework for the study of human information processing activities. The order in which the six elements of logical reasoning have been presented serves only the purpose of illustrating their function and interdependence in the structure of an argument about a specific issue. It does not represent any normative pattern of argument formation. In fact, due to the dynamic nature of human reasoning, the concept formation and framing that result in a particular structure may occur in different ways. The six-element model of logical reasoning is shown in Figure 16. Computer-based implementations of Figure 16 may assume a Bayesian inferential framework for processing information. Frameworks for Bayesian inference require probability values as primary inputs. Because most events of interest are unique or little is known about their relative frequencies of occurrence, the assessment of probability values usually requires human judgment. Substantial
Figure 16 Representation of Toulmin Logic Structure.
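The six-element Toulmin structure lends itself to a simple data structure, and, as the text notes, a Bayesian update can supply the modal qualifier. In the sketch below, the example argument and all probability values are hypothetical, chosen only to illustrate the mechanics.

```python
from dataclasses import dataclass, field

@dataclass
class ToulminArgument:
    grounds: str                 # evidence statement
    claim: str                   # conclusion being advanced
    warrant: str                 # reason connecting grounds to claim
    backing: str = ""            # support for the warrant itself
    rebuttals: list = field(default_factory=list)  # exceptions to the rule
    qualifier: float = 0.5       # modal qualifier, here a probability

    def bayes_update(self, prior, p_grounds_given_claim, p_grounds):
        """Treat the grounds as evidence: P(C|G) = P(G|C) P(C) / P(G)."""
        self.qualifier = p_grounds_given_claim * prior / p_grounds
        return self.qualifier

arg = ToulminArgument(
    grounds="Inventory turnover fell two quarters in a row",
    claim="Demand for the product line is declining",
    warrant="Sustained turnover drops usually indicate declining demand",
    backing="Historical sales records for similar product lines",
    rebuttals=["A supplier shortage, not demand, reduced turnover"],
)
arg.bayes_update(prior=0.3, p_grounds_given_claim=0.8, p_grounds=0.4)
print(round(arg.qualifier, 3))  # 0.6
```

A rebuttal that is found to hold would deactivate the grounds-claim link, which here would correspond to discarding or drastically lowering the qualifier rather than updating it.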
1. Omissions in surveying alternative courses of action
2. Omissions in surveying objectives
3. Failure to examine major costs and risks of the selected course of action (COA)
4. Poor information search, resulting in imperfect information
5. Selective bias in processing available information
6. Failure to reconsider alternatives initially rejected, potentially by discounting favorable information and overweighing unfavorable information
7. Failure to work out detailed implementation, monitoring, and contingency plans
The central thrust of this study is that the relationship between the quality of the decision making process and the quality of the outcome is difficult to establish. This strongly suggests the usefulness of the contingency task structural model construct and the need for approaches that evaluate the quality of processes, as well as decisions and outcomes, and that consider the inherent embedding of outcomes and decisions within processes that lead to these. Organizational ambiguity is a major reason why much of the observed "bounded rationality" behavior is so pervasive. March (1983) and March and Wessinger-Baylon (1986) show that this is very often the case, even in situations when formal rational thought or "vigilant information processing" (Janis and Mann 1977) might be thought to be a preferred decision style. March (1983) indicates that there are at least four kinds of opaqueness or equivocality in organizations: ambiguity of intention, ambiguity of understanding, ambiguity of history, and ambiguity of human participation. These four ambiguities relate to an organization's structure, function, and purpose, as well as to the perception of these by decision making agents in an organization. They influence the information that is communicated in an organization and generally introduce one or more forms of information imperfection. The notions of organizational management and organizational information processing are indeed inseparable. In the context of human information processing, it would not be incorrect to define the central purpose of management as development of a consensual grammar to ameliorate the effects of equivocality or ambiguity. This is the perspective taken by Karl Weick (1979, 1985) in his noteworthy efforts concerning organizations. Starbuck (1985) notes that much direct action is a form of deliberation. He indicates that action should often be introduced earlier in the process of deliberation than it usually is and that action and thought should be integrated and interspersed with one another. The basis of support for this argument is that probative actions generate information and tangible results that modify potential thoughts. Of course, any approach that involves "act now, think later" behavior should be applied with considerable caution. Much of the discussion to be found in the judgment, choice, and decision literature concentrates on what may be called formal reasoning and decision selection efforts that involve the issue resolution efforts that follow as part of the problem solving efforts of issue formulation, analysis, and interpretation that we have discussed here. There are other decision making activities, or decision-associated activities, as well. Very important among these are activities that allow perception, framing, editing, and interpretation of the effects of actions upon the internal and external environments of a decision situation. These might be called information selection activities. There will also exist information retention activities that allow admission, rejection, and modification of the set of selected information or knowledge such as to result in short-term learning and long-term learning. Short-term learning results from reduction of incongruities, and long-term learning results from acquisition of new information that reflects enhanced understanding of an issue. Although the basic GDSS design effort may well be concerned with the short-term effects of various problem solving, decision making, and information presentation formats, the actual knowledge that a person brings to bear on a given problem is a function of the accumulated experience that the person possesses, and thus long-term effects need to be considered, at least as a matter of secondary importance. It was remarked above that a major purpose of a GDSS is to enhance the value of information and, through this, to enhance group and organizational decision making. Three attributes of information appear dominant in the discussion thus far relative to value for problem solving purposes and in the literature in general:
1. Task relevance: Information must be relevant to the task at hand. It must allow the decision maker to know what needs to be known in order to make an effective and efficient decision. This is not as trivial a statement as might initially be suspected. Relevance varies considerably across individuals, as a function of the contingency task structure, and in time as well.
2. Representational appropriateness: In addition to the need that information be relevant to the task at hand, the person who needs the information must receive it in a form that is appropriate for use.
Thus, the number of possible types of GDSS may be relatively extensive. Johansen (1988) has identified no less than 17 approaches for computer support in groups in his discussion of groupware.
1. Face-to-face meeting facilitation services: This is little more than office automation support in the preparation of reports, overheads, videos, and the like that will be used in a group meeting. The person making the presentation is called a facilitator or chauffeur.
2. Group decision support systems: By this, Johansen essentially infers the GDSS structure shown in Figure 15, with the exception that there is but a single video monitor under the control of a facilitator or chauffeur.
3. Computer-based extensions of telephony for use by work groups: This involves use of either commercial telephone services or private branch exchanges (PBXs). These services exist now, and there are several present examples of conference calling services.
4. Presentation support software: This approach is not unlike that of approach 1, except that computer software is used to enable the presentation to be contained within a computer. Often those who will present it prepare the presentation material, and this may be done in an interactive manner to the group receiving the presentation.
5. Project management software: This is software that is receptive to presentation team input over time and that has capabilities to organize and structure the tasks associated with the group, often in the form of a Gantt chart. This is very specialized software and would be potentially useful for a team interested primarily in obtaining typical project management results in terms of PERT charts and the like.
6. Calendar management for groups: Often individuals in a group need to coordinate times with one another. They indicate times that are available, potentially with weights to indicate schedule adjustment flexibility in the event that it is not possible to determine an acceptable meeting time.
7. Group authoring software: This allows members of a group to suggest changes in a document stored in the system without changing the original. A lead person can then make document revisions. It is also possible for the group to view alternative revisions to drafts. The overall objective is to encourage, and improve the quality and efficiency of, group writing. It seems very clear that there needs to be overall structuring and format guidance, which, while possibly group determined, must be agreed upon prior to filling out the structure with report details.
8. Computer-supported face-to-face meetings: Here, individual members of the group work directly with a workstation and monitor, rather than having just a single computer system and monitor. A large screen video may, however, be included. This is the sort of DSS envisioned in Figure 14. Although there are a number of such systems in existence, the Colab system at Xerox Palo Alto Research Center (Stefik et al. 1987) was one of the earliest and most sophisticated. A simple sketch of a generic facility for this purpose might appear somewhat as shown in Figure 17. Generally, both public and private information are contained in these systems. The public information is shared, and the private information, or a portion of it, may be converted to public programs. The private screens normally start with a menu screen from which participants can select activities in which they engage, potentially under the direction of a facilitator.
9. Screen sharing software: This software enables one member of a group to selectively share screens with other group members. There are clearly advantages and pitfalls in this. The primary advantage to this approach is that information can be shared with those who have a reason to know specific information without having to bother others who do not need it. The disadvantage is just this also, and it may lead to a feeling of ganging up by one subgroup on another subgroup.
10. Computer conferencing systems: This is the group version of electronic mail. Basically, what we have is a collection of DSSs with some means of communication among the individuals that comprise the group. This form of communication might be regarded as a product hierarchy in which people communicate.
11. Text filtering software: This allows system users to search normal or semistructured text through the specification of search criteria that are used by the filtering software to select relevant portions of text. The original system to accomplish this was Electronic Mail Filter (Malone et al. 1987). A variety of alternative approaches are also being emphasized now.
12. Computer-supported audio or video conferences: This is simply the standard telephone or video conferencing, as augmented by each participant having access to a computer and appropriate software.
13. Conversational structuring: This involves identification and use of a structure for conversations that is presumably in close relationship to the task, environment, and experiential fa-
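Approach 6 above, calendar management for groups, reduces to scoring candidate times by summed availability weights, where a weight encodes a participant's schedule-adjustment flexibility. The names, slots, and weights in this sketch are invented for illustration.

```python
def best_meeting_time(slots, availability):
    """availability: person -> {slot: weight}; weight 0 (or absent)
    means unavailable, larger weights mean more flexibility."""
    def score(slot):
        weights = [availability[p].get(slot, 0) for p in availability]
        # A slot is feasible only if every participant can attend.
        return sum(weights) if all(w > 0 for w in weights) else 0
    return max(slots, key=score)

slots = ["Mon 9:00", "Mon 14:00", "Tue 10:00"]
availability = {
    "ann": {"Mon 9:00": 1, "Mon 14:00": 3, "Tue 10:00": 2},
    "bob": {"Mon 14:00": 2, "Tue 10:00": 3},   # unavailable Mon 9:00
    "sue": {"Mon 9:00": 3, "Mon 14:00": 1, "Tue 10:00": 2},
}
print(best_meeting_time(slots, availability))  # Tue 10:00
```

When no slot is feasible for everyone, the weights give the group a principled basis for asking the most flexible participants to adjust.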
2. GDSS use posed an intellectual challenge to the group and made accomplishment of the purpose of their decision making activity more difficult than for groups without the GDSS.
3. The groups using the GDSS became more process oriented and less specific-issue oriented than the groups not using the GDSS.
Support may be accomplished at any or all of the three levels for group decision support identified by DeSanctis and Gallupe (1987) in their definitive study of GDSS. A GDSS provides a mechanism for group interaction. It may impose any of various structured processes on individuals in the group, such as a particular voting scheme. A GDSS may impose any of several management control processes on the individuals in the group, such as imposing or removing the effects of a dominant personality. The design of the GDSS and the way in which it is used are the primary determinants of these. DeSanctis and Gallupe have developed a taxonomy of GDSS. A Level I GDSS would simply be a medium for enhanced information interchange that might lead ultimately to a decision. Electronic mail, large video screen displays that can be viewed by a group, or a decision room that contains these features could represent a Level I GDSS. A Level I GDSS provides only a mechanism for group interaction. It might contain such facilities as a group scratchpad, support for meeting agenda development, and idea generation and voting software. A Level II GDSS would provide various decision structuring and other analytic tools that could act to reduce information imperfection. A decision room that contained software that could be used for problem solution would represent a Level II GDSS. Thus, spreadsheets would primarily represent a Level II DSS. A Level II GDSS would also have to have some means of enabling group communication. Figure 17 represents a Level II GDSS. It is simply a communications medium that has been augmented with some tools for problem structuring and solution with no prescribed management control of the use of these tools. A Level III GDSS also includes the notion of management control of the decision process. Thus, a notion of facilitation of the process is present, either through the direct intervention of a human in the process or through some rule-based specification of the management control process that is inherent in a Level III GDSS. Clearly, there is no sharp transition line between one level and the next, and it may not always be easy to identify at what level a GDSS is operating. The DSS generator, as discussed in our preceding section, would generally appear to produce a form of Level III DSS. In fact, most of the DSSs that we have been discussing in this book are either Level II or Level III DSSs or GDSSs. The GDSS of Figure 17, for example, becomes a Level III GDSS if it is supported by a facilitator. DeSanctis and Gallupe (1987) identify four recommended approaches:
1. Decision room for small group face-to-face meetings
2. Legislative sessions for large group face-to-face meetings
3. Local area decision networks for small dispersed groups
4. Computer-mediated conferencing for large groups that are dispersed
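A structured voting scheme is one of the processes a GDSS may impose on a group. As one common choice, a Borda count tallies ranked ballots; the alternatives and ballots below are invented for illustration.

```python
from collections import defaultdict

def borda_count(ballots):
    """Each ballot ranks alternatives best-first; an alternative ranked
    i-th out of n earns n - 1 - i points. Returns the overall winner."""
    scores = defaultdict(int)
    for ballot in ballots:
        n = len(ballot)
        for i, alt in enumerate(ballot):
            scores[alt] += n - 1 - i
    return max(scores, key=scores.get)

ballots = [
    ["expand plant", "outsource", "defer"],
    ["outsource", "expand plant", "defer"],
    ["expand plant", "defer", "outsource"],
]
print(borda_count(ballots))  # expand plant
```

Which scheme the GDSS imposes (plurality, Borda, approval, and so on) is itself a design decision, since different schemes can select different winners from the same ballots.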
DeSanctis and Gallupe discuss the design of facilities to enable this, as well as techniques whereby the quality of efforts such as generation of ideas and actions, choosing from among alternative courses of action, and negotiating conflicts may be enhanced. On the basis of this, they recommend six areas as very promising for additional study: GDSS design methodologies; patterns of information exchange; mediation of the effects of participation; effects of (the presence or absence of) physical proximity, interpersonal attraction, and group cohesion; effects on power and influence; and performance / satisfaction trade-offs. Each of these supports the purpose of computerized aids to planning, problem solving, and decision making: removing a number of common communication barriers; providing techniques for structuring decisions; and systematically directing group discussion and associated problem solving and decision making in terms of the patterns, timing, and content of the information that influences these actions. We have mentioned the need for a GDSS model base management system (MBMS). There is a great variety of MBMS tools. Some of the least understood are group tools that aid in the issue formulation effort. Because these may represent an integral part of a GDSS effort, it is of interest to describe GDSS issue formation here as one component of an MBMS. Other relevant efforts and interest areas involving GDSS include the group processes in computer-mediated communications study; the computer support for collaboration and problem solving in meetings study of Stefik et al. (1987); the organizational planning study of Applegate et al. (1987); and the knowledge management and intelligent information sharing systems study of Malone et al. (1987). Particularly interesting current issues surround the extent to which cognitive science and engineering studies that involve potential human information processing flaws can be effectively dealt with, in the sense of design of debiasing aids, in GDSS design. In a very major way, the purpose of a GDSS is to support enhanced organizational communications. This is a very important contem-
aged and structured, and deal with their products and services. In particular, it creates a number of opportunities and challenges that affect the way in which data is converted into information and then into knowledge. It poses many opportunities for management of the environment for these transfers, such as enhancing the productivity of individuals and organizations. Decision support is much needed in these endeavors, and so is knowledge management. It is fitting that we conclude our discussions of decision support systems with a discussion of knowledge management and the emergence in the 21st century of integrated systems to enhance knowledge management and decision support. Major growth in the power of computing and communicating and associated networking is fundamental to the emergence of these integrated systems and has changed relationships among people, organizations, and technology. These capabilities allow us to study much more complex issues than
146
TECHNOLOGY
was formerly possible. They provide a foundation for dramatic increases in learning and both individual and organizational effectiveness. This is due in large part to the networking capability that enables enhanced coordination and communications among humans in organizations. It is also due to the vastly increased potential availability of knowledge to support individuals and organizations in their efforts, including decision support efforts. However, information technologies need to be appropriately integrated within organizational frameworks if they are to be broadly useful. This poses a transdisciplinary challenge (Somerville and Rapport 2000) of unprecedented magnitude if we are to move from high-performance information technologies and high-performance decision support systems to high-performance organizations. In years past, broadly available capabilities never seemed to match the visions proffered, especially in terms of the time frame of their availability. Consequently, despite these compelling predictions, traditional methods of information access and utilization continued their dominance. As a result, comments such as "computers are appearing everywhere except in the productivity statistics" have often been made (Brynjolfsson and Yang 1996). In just the past few years, the pace has quickened quite substantially, and the need to integrate information technology issues with organizational issues has led to the creation of related fields of study that have as their objectives:
- Capturing human information and knowledge needs in the form of system requirements and specifications
- Developing and deploying systems that satisfy these requirements
- Supporting the role of cross-functional teams in work
- Overcoming behavioral and social impediments to the introduction of information technology systems in organizations
- Enhancing human communication and coordination for effective and efficient workflow through knowledge management
Because of the importance of information and knowledge to a
n organization, two related areas of study have arisen. The first is concerned with technologies associated with the effective and efficient acquisition, transmission, and use of information, or information technology. When associated with organizational use, this is sometimes called organizational intelligence or organizational informatics. The second area, known as knowledge management, refers to an organization's capacity to gather information, generate knowledge, and act effectively and in an innovative manner on the basis of that knowledge. This provides the capacity for success in the rapidly changing or highly competitive environments of knowledge organizations. Developing and leveraging organizational knowledge is a key competency and, as noted, it requires information technology as well as many other supporting capabilities. Information technology is necessary for enabling this, but it is not sufficient in itself. Organizational productivity is not necessarily enhanced unless attention is paid to the human side of developing and managing technological innovation (Katz 1997) to ensure that systems are designed for human interaction. The human side of knowledge management is very important. Knowledge capital is sometimes used to describe the intellectual wealth of employees and is a real, demonstrable asset. Sage (1998) has used the term systems ecology to suggest managing organizational change to create a knowledge organization and to enhance and support the resulting intellectual property for the production of sustainable products and services. Managing information and knowledge effectively to facilitate a smooth transition into the Information Age calls for this systems ecology, a body of methods for systems engineering and management (Sage 1995; Sage and Rouse 1999a,b) that is based on analogous models of natural ecologies. Such a systems ecology would enable the modeling, simulation, and
these. Davenport and Prusak (1998) note that when organizations interact with environments, they absorb information and turn it into knowledge. Then they make decisions and take actions. They suggest five modes of knowledge generation:
1. Acquisition of knowledge that is new to the organization and perhaps represents newly created knowledge. Knowledge-centric organizations need to have appropriate knowledge available when it is needed. They may buy this knowledge, potentially through acquisition of another company, or generate it themselves. Knowledge can be leased or rented from a knowledge source, such as by hiring a consultant. Generally, knowledge leases or rentals are associated with knowledge transfer.
2. Dedicated knowledge resource groups may be established. Because time is required for the financial returns on research to be realized, the focus of many organizations on short-term profit may create pressures to reduce costs by reducing such expenditures. Matheson and Matheson (1998) describe a number of approaches that knowledge organizations use to create value through strategic research and development.
3. Knowledge fusion is an alternative approach to knowledge generation that brings together people with different perspectives to resolve an issue and determine a joint response. Nonaka and Takeuchi (1995) describe efforts of this sort. The result of knowledge fusion efforts may be creative chaos and a rethinking of old presumptions and methods of working. Significant time and effort are often required to enable group members to acquire sufficient shared knowledge, work effectively together, and avoid confrontational behavior.
4. Adaptation through providing internal resources and capabilities that can be utilized in new ways and being open to change in the established ways of doing business. Knowledge workers who can acquire new knowledge and skills easily are the most suitable to this approach, and those with broad knowledge are often the most appropriate for adaptation assignments.
5. Knowledge networks may act as critical conduits for innovative reasoning. Informal networks can generate knowledge provided by a diversity of participants. This requires appropriate allocation of time and space for knowledge acquisition and creation.
In each of these efforts, as well as in much more general situations, it is critical to regard technology as a potential enabler of human effort, not as a substitute for it. There are, of course, major feedbacks here because the enabled human efforts create incentives and capabilities that lead to further enhanced technology evolution. Knowledge management (Nonaka and Takeuchi 1995; Cortada and Woods 1999; Bukowitz and Williams 1999) refers to management of the environment for knowledge creation, transfer, and sharing throughout the organization. It is vital in fulfilling contemporary needs in decision support. Appropriate knowledge management considers knowledge as the major organizational resource and growth through enhanced knowledge as a major organizational objective. While knowledge management is dependent to some extent upon the presence of information technology as an enabler, information technology alone cannot deliver knowledge management. This point is made by McDermott (1999), who also suggests that the major ingredient in knowledge management, leveraging knowledge, is dramatically more dependent upon the communities of people who own and use it than upon the knowledge itself. Knowledge management is one of the major organizational efforts brought about by the realization that knowledge-enabled organizations are best poised to continue in a high state of competitive advantage. Such terms as new organizational wealth (Sveiby 1997), intellectual capital (Brooking 1996; Edvinsson and Malone 1997; Klein 1998; Roos et al. 1998), the infinite resource (Halal 1998), and knowledge assets (Boisot 1998) are used to describe the knowledge networking (Skyrme 1999) and working knowledge (Davenport and Prusak 1998) in knowledge-enabled organizations (Liebowitz and Beckman 1998; Tobin 1998). Major objectives of knowledge management include supporting organizations in turning information into knowledge (Devlin 1999) and, subsequent to this, through a strategy formation and implementation process (Zack 1999), turning knowledge into action (Pfeffer and Sutton 2000). This latter accomplishment is vital because a knowledge advantage is brought to fruition only through an action advantage.
Atzeni, P., Ceri, S., Paraboschi, S., and Torlone, R. (2000), Database Systems: Concepts, Languages and Architectures, McGraw-Hill, New York.
Banks, J., Ed. (1998), Handbook of Simulation, John Wiley & Sons, New York.
Barbosa, L. C., and Herko, R. G. (1980), "Integration of Algorithmic Aids into Decision Support Systems," MIS Quarterly, Vol. 4, No. 3, pp. 1-12.
Bennett, J. L., Ed. (1983), Building Decision Support Systems, Addison-Wesley, Reading, MA.
Blanning, R. W., and King, D. R. (1993), Current Research in Decision Support Technology, IEEE Computer Society Press, Los Altos, CA.
Boisot, M. H. (1998), Knowledge Assets: Securing Competitive Advantage in the Information Economy, Oxford University Press, New York.
Brooking, A. (1996), Intellectual Capital: Core Asset for the Third Millennium Enterprise, Thompson Business Press, London.
Brynjolfsson, E., and Yang, S. (1996), "Information Technology and Productivity: A Review of the Literature," in Advances in Computers, Vol. 43, pp. 179-214.
Bukowitz, W. R., and Williams, R. L. (1999), The Knowledge Management Fieldbook, Financial Times Prentice Hall, London.
Card, S. K., Moran, T. P., and Newell, A. (1993), The Psychology of Human Computer Interaction, Lawrence Erlbaum, Hillsdale, NJ.
Chaffey, D. (1998), Groupware, Workflow and Intranets: Reengineering the Enterprise with Collaborative Software, Digital Press, Boston.
Chen, P. P. S. (1976), "The Entity-Relationship Model: Towards a Unified View of Data," ACM Transactions on Database Systems, Vol. 1, pp. 9-36.
Choo, C. W. (1998), The Knowing Organization: How Organizations Use Information to Construct Meaning, Create Knowledge, and Make Decisions, Oxford University Press, New York.
Coad, P., and Yourdon, E. (1990), Object-Oriented Analysis, Prentice Hall, Englewood Cliffs, NJ.
Cortada, J. W., and Woods, J. A., Eds. (1999), The Knowledge Management Yearbook 1999-2000, Butterworth-Heinemann, Woburn, MA.
Darwen, H., and Date, C. J. (1998), Foundation for Object/Relational Databases: The Third Manifesto, Addison-Wesley, Reading, MA.
Date, C. J. (1983), Database: A Primer, Addison-Wesley, Reading, MA.
Date, C. J. (1999), An Introduction to Database Systems, 7th Ed., Addison-Wesley, Reading, MA.
Davenport, T. H., and Prusak, L. (1998), Working Knowledge: How Organizations Manage What They Know, Harvard Business School Press, Boston.
Davis, G. B. (1982), "Strategies for Information Requirements Determination," IBM Systems Journal, Vol. 21, No. 1, pp. 4-30.
Debenham, J. (1998), Knowledge Engineering: Unifying Knowledge Base and Database Design, Springer-Verlag, Berlin.
DeSanctis, G., and Gallupe, R. B. (1987), "A Foundation for the Study of Group Decision Support Systems," Management Science, Vol. 33, pp. 547-588.
DeSanctis, G., and Monge, P., Eds. (1999), Special Issue: "Communication Processes for Virtual Organizations," Organization Science, Vol. 10, No. 6, November-December.
Devlin, K. (1999), Infosense: Turning Information into Knowledge, W. H. Freeman & Co., New York.
Dolk, D. R. (1986), "Data as Models: An Approach to Implementing Model Management," Decision Support Systems, Vol. 2, No. 1, pp. 73-80.
Dutta, P. K. (1999), Strategies and Games: Theory and Practice, MIT Press, Cambridge, MA.
Edvinsson, L., and Malone, M. S. (1997), Intellectual Capital: Realizing Your Company's True Value by Finding Its Hidden Brainpower, HarperCollins, New York.
Fang, L., Hipel, K. W., and Kilgour, D. M. (1993), Interactive Decision Making: The Graph Model for Conflict Resolution, John Wiley & Sons, New York.
Firesmith, D. G. (1993), Object Oriented Requirements and Logical Design, John Wiley & Sons, New York.
Fraser, N. M., and Hipel, K. W. (1984), Conflict Analysis: Models and Resolution, North Holland, New York.
Gass, S. I., and Harris, C. M., Eds. (2000), Encyclopedia of Operations Research and Management Science, Kluwer Academic Publishers, Boston.
Gray, P., and Olfman, L. (1989), "The User Interface in Group Decision Support Systems," Decision Support Systems, Vol. 5, No. 2, pp. 119-137.
Klein, G. A. (1990), "Information Requirements for Recognitional Decision Making," in Concise Encyclopedia of Information Processing in Systems and Organizations, A. P. Sage, Ed., Pergamon Press, Oxford, pp. 414-418.
Klein, G. A. (1998), Sources of Power: How People Make Decisions, MIT Press, Cambridge, MA.
Kraemer, K. L., and King, J. L. (1988), "Computer Based Systems for Cooperative Work and Group Decision Making," ACM Computing Surveys, Vol. 20, No. 2, pp. 115-146.
Kroenke, D. M. (1998), Database Processing: Fundamentals, Design, and Implementation, Prentice Hall, Englewood Cliffs, NJ.
Lee, E. (1990), "User-Interface Development Tools," IEEE Software, Vol. 7, No. 3, pp. 31-36.
Liang, T. P. (1988), "Model Management for Group Decision Support," MIS Quarterly, Vol. 12, pp. 667-680.
Liang, T. P. (1985), "Integrating Model Management with Data Management in Decision Support Systems," Decision Support Systems, Vol. 1, No. 3, pp. 221-232.
Liebowitz, J., and Beckman, T. (1998), Knowledge Organizations: What Every Manager Should Know, CRC Press, Boca Raton, FL.
Liebowitz, J., and Wilcox, L. C., Eds. (1997), Knowledge Management and Its Integrative Elements, CRC Press, Boca Raton, FL.
Malone, T. W., Grant, K. R., Turbak, F. A., Brobst, S. A., and Cohen, M. D. (1987), "Intelligent Information Sharing Systems," Communications of the ACM, Vol. 30, pp. 390-402.
Marakas, G. M., Decision Support Systems in the 21st Century, Prentice Hall, Englewood Cliffs, NJ.
March, J. G. (1978), "Bounded Rationality, Ambiguity, and the Engineering of Choice," Bell Journal of Economics, Vol. 9, pp. 587-608.
March, J., and Wessinger-Baylon, T., Eds. (1986), Ambiguity and Command: Organizational Perspectives on Military Decisionmaking, Pitman, Boston.
Matheson, D., and Matheson, J. (1998), The Smart Organization: Creating Value Through Strategic R&D, Harvard Business School Press, Boston.
McDermott, R. (1999), "Why Information Technology Inspired but Cannot Deliver Knowledge Management," California Management Review, Vol. 41, No. 1, pp. 103-117.
McGrath, J. E. (1984), Groups: Interaction and Performance, Prentice Hall, Englewood Cliffs, NJ.
Miller, W. L., and Morris, L. (1999), Fourth Generation R&D: Managing Knowledge, Technology and Innovation, John Wiley & Sons, New York.
Mintzberg, H. (1973), The Nature of Managerial Work, Harper & Row, New York.
Murphy, F. H., and Stohr, E. A. (1986), "An Intelligent System for Formulating Linear Programs," Decision Support Systems, Vol. 2, No. 1, pp. 39-47.
Myers, P. S. (1997), Knowledge Management and Organizational Design, Butterworth-Heinemann, Boston.
Mylopoulos, J., and Brodie, M. L., Eds. (1998), Artificial Intelligence and Databases, Morgan Kaufmann, San Mateo, CA.
Nielsen, J. (1989), Hypertext and Hypermedia, Academic Press, San Diego.
Nonaka, I., and Takeuchi, H. (1995), The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, New York.
Parsaei, H. R., Kolli, S., and Handley, T. R., Eds. (1997), Manufacturing Decision Support Systems, Chapman & Hall, New York.
Parsaye, K., Chignell, M., Khoshafian, S., and Wong, H. (1989), Intelligent Databases: Object-Oriented, Deductive, Hypermedia Technologies, John Wiley & Sons, New York.
Pfeffer, J., and Sutton, R. I. (2000), The Knowing-Doing Gap: How Smart Companies Turn Knowledge into Action, Harvard Business School Press, Boston.
Poe, V., Klauer, P., and Brobst, S. (1998), Building a Data Warehouse for Decision Support, 2nd Ed., Prentice Hall, Englewood Cliffs, NJ.
Pool, R. (1997), Beyond Engineering: How Society Shapes Technology, Oxford University Press, New York.
Prusak, L., Ed. (1997), Knowledge in Organizations, Butterworth-Heinemann, Woburn, MA.
Purba, S., Ed. (1999), Data Management Handbook, 3rd Ed., CRC Press, Boca Raton, FL.
Raiffa, H. (1968), Decision Analysis, Addison-Wesley, Reading, MA.
Rasmussen, J., Pejtersen, A. M., and Goodstein, L. P. (1995), Cognitive Systems Engineering, John Wiley & Sons, New York.
Stacey, R. D. (1996), Complexity and Creativity in Organizations, Berrett-Koehler, San Francisco.
Starbuck, W. E. (1985), "Acting First and Thinking Later," in Organizational Strategy and Change, Jossey-Bass, San Francisco.
Stefik, M., Foster, G., Bobrow, D. G., Kahn, K., Lanning, S., and Suchman, L. (1987), "Beyond the Chalkboard: Computer Support for Collaboration and Problem Solving in Meetings," Communications of the ACM, Vol. 30, No. 1, pp. 32-47.
Stewart, T. A. (1997), Intellectual Capital: The New Wealth of Organizations, Currency Doubleday, New York.
Sullo, G. C. (1994), Object Oriented Engineering: Designing Large-Scale Object-Oriented Systems, John Wiley & Sons, New York.
Sveiby, K. E. (1997), The New Organizational Wealth: Managing and Measuring Knowledge Based Assets, Berrett-Koehler, San Francisco.
Tobin, D. R. (1998), The Knowledge Enabled Organization: Moving from Training to Learning to Meet Business Goals, AMACOM, New York.
Toulmin, S., Rieke, R., and Janik, A. (1979), An Introduction to Reasoning, Macmillan, New York.
Ulrich, D. (1998), "Intellectual Capital = Competence x Commitment," Sloan Management Review, Vol. 39, No. 2, pp. 15-26.
Utterback, J. M. (1994), Mastering the Dynamics of Innovation: How Companies Can Seize Opportunities in the Face of Technological Change, Harvard Business School Press, Boston.
Volkema, R. J. (1990), "Problem Formulation," in Concise Encyclopedia of Information Processing in Systems and Organizations, A. P. Sage, Ed., Pergamon Press, Oxford.
Watson, R. T. (1998), Database Management: An Organizational Perspective, John Wiley & Sons, New York.
Watson, R. T., DeSanctis, G., and Poole, M. S. (1988), "Using a GDSS to Facilitate Group Consensus: Some Intended and Unintended Consequences," MIS Quarterly, Vol. 12, pp. 463-477.
Weick, K. E. (1979), The Social Psychology of Organizing, Addison-Wesley, Reading, MA.
Weick, K. E. (1985), "Cosmos vs. Chaos: Sense and Nonsense in Electronic Context," Organizational Dynamics, Vol. 14, pp. 50-64.
Welch, D. A. (1989), "Group Decision Making Reconsidered," Journal of Conflict Resolution, Vol. 33, No. 3, pp. 430-445.
Will, H. J. (1975), "Model Management Systems," in Information Systems and Organizational Structure, E. Grochla and N. Szyperski, Eds., de Gruyter, Berlin, pp. 468-482.
Winograd, T., and Flores, F. (1986), Understanding Computers and Cognition, Ablex Press, Los Altos, CA.
Zack, M. H., Ed. (1999), Knowledge and Strategy, Butterworth-Heinemann, Boston.
Zeigler, B. P. (1997), Objects and Systems, Springer, New York.
2. Decisions: Powerful computation allows many simulation trials to find a better solution in decision making. For example, an optimal material handling equipment selection can be obtained through repeated simulation runs.
3. Sensing: Input devices (e.g., sensors, bar code readers) can gather and communicate environmental information to computers or humans. A decision may be made by computer systems based on the input information. The decision may also trigger output devices (e.g., robot arms, monitors) to realize the decisions.
4. Recovery: Computer systems may apply techniques of artificial intelligence (AI) (e.g., fuzzy rules, knowledge-based logic, neural networks) to improve the quality of activities. For example, a robot system may be recovered automatically from error conditions through decisions made by AI programs.
5. Collaboration: Distributed designers can work together on a common design project through a computer-supported collaborative work (CSCW) software system.
6. Partners: A computer system in an organization may automatically find cooperative partners (e.g., vendors, suppliers, and subcontractors) on the Internet to fulfill a special customer order without any increase in the organization's capacity.
7. Logistics: Logistics flows of products and packages are monitored and maintained by networked computers.
Although these seven categories reflect the impact of computer and communication technologies, they are driven by four automation technologies: physical automation systems, automatic control systems, artificial intelligence systems, and integration technology. Physical automation systems and automatic control systems represent two early and ongoing achievements in automation technology. Through automatic control theories, most systems can be controlled to the set points defined by users. With the support of both automatic control theories and modern digital control equipment, such as the programmable logic controller (PLC), physical automation systems that consist of processing machines (e.g., CNCs), transportation equipment (e.g., robots and AGVs), and sensing equipment (e.g., bar code readers) can be synchronized and integrated. Artificial intelligence systems and integration technology are two relatively recent technologies. Many AI techniques, such as artificial neural networks, knowledge-based systems, and genetic algorithms, have been applied to automate the complex decision-making processes in the design, planning, and managerial activities of enterprises. Additionally, integration techniques, such as electronic data interchange (EDI), client-server systems, and Internet-based transactions, have automated business processes even when the participants are at remote sites. In this chapter, we will discuss the above four technologies to give readers comprehensive knowledge of automation technology. Section 2 addresses physical automation technologies that are applied in processing, transportation, and inspection activities. Section 3 introduces classical automatic control theory; the purpose is to review the traditional methods and explain how a system can be automatically adjusted to the set point given by users. Section 4 addresses artificial intelligence techniques and introduces basic application approaches. Section 5 introduces integration technology, which is based mostly on today's information technology. Section 6 introduces emerging trends in automation technologies, which include virtual machines, tool perspective environments, and autonomous agents. Section 7 makes some concluding remarks.
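The simulation-driven decision making described earlier (selecting material handling equipment through repeated simulation runs) can be sketched in a few lines. This is a minimal illustration only, not a method from the chapter; the candidate equipment names, cycle times, and failure probabilities are hypothetical assumptions:

```python
import random

def simulate_throughput(mean_cycle_time, failure_rate, hours=8.0):
    """One simulation trial: jobs completed in a shift, with random
    cycle times and occasional breakdowns (hypothetical model)."""
    t, jobs = 0.0, 0
    while t < hours * 3600:
        t += random.expovariate(1.0 / mean_cycle_time)  # random cycle time (s)
        if random.random() < failure_rate:
            t += 600  # breakdown: 10-minute repair delay, job lost
        else:
            jobs += 1
    return jobs

def select_equipment(candidates, trials=500):
    """Average many trials per candidate and return the best performer."""
    scores = {}
    for name, (cycle, fail) in candidates.items():
        scores[name] = sum(simulate_throughput(cycle, fail)
                           for _ in range(trials)) / trials
    return max(scores, key=scores.get), scores

random.seed(42)
candidates = {            # hypothetical options: (mean cycle time s, failure prob)
    "conveyor": (45.0, 0.02),
    "AGV":      (60.0, 0.01),
    "forklift": (50.0, 0.05),
}
best, scores = select_equipment(candidates)
print(best, scores)
```

Averaging over many trials is what makes the comparison meaningful: a single run of a stochastic model can easily rank the options incorrectly.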
2. PHYSICAL AUTOMATION TECHNOLOGY
Since MIT demonstrated the first numerically controlled machine tool in 1952, information technologies have revolutionized and automated manufacturing processes. See Chapter 12 for physical automation techniques such as robots. In general, physical automation technology can be applied in three areas: processing, transportation/storage, and inspection. Representative examples of automated equipment are:
1. Automated processing equipment: CNC machine tools, computer-controll
AUTOMATION TECHNOLOGY
157
that it achieves desired objectives within its planned specifications and subject to cost and safety considerations. A well-planned system can perform effectively without any control only as long as no variations are encountered in its own operation and its environment. In reality, however, many changes occur over time. Machine breakdown, human error, variable material properties, and faulty information are a few examples of why a system must be controlled. When a system is more complex and there are more potential sources of dynamic variations, more complicated control is required. Particularly in automatic systems, where human operators are replaced by machines and computers, a thorough design of control responsibilities and procedures is necessary. Control activities include automatic control of individual machines, material handling equipment, manufacturing processes, and production systems, as well as control of operations, inventory, quality, labor performance, and cost. Careful design of correct and adequate controls that continually identify and trace variations and disturbances, evaluate alternative responses, and result in timely and appropriate actions is therefore vital to the successful operation of a system.
3.1. Fundamentals of Control
Automatic control, as the term is commonly used, is self-correcting, or feedback, control; that is, some control instrument is continuously monitoring certain output variables of a controlled process and is comparing this output with some preestablished desired value. The instrument then compares the actual and desired values of the output variable. Any resulting error obtained from this comparison is used to compute the required correction to the control setting of the equipment being controlled. As a result, the value of the output variable will be adjusted to its desired level and maintained there. This type of control is known as a servomechanism. The design and use of a servomechanism control system requires a knowledge of every element of the control loop. For example, in Figure 1 the engineer must know the dynamic response, or complete operating characteristics, of each pictured device:
1. The indicator or sampler, which senses and measures the actual output
2. The controller, including both the error detector and the correction computer, which contains the decision-making logic
3. The control valve and the transmission characteristics of the connecting lines, which communicate and activate the necessary adjustment
4. The operating characteristics of the plant, which is the process or system being controlled
Dynamic response, or operating characteristics, refers to a mathematical expression, for example, differential equations, for the transient behavior of the process or its actions during periods of change in operating conditions. From it one can develop the transfer function of the process or prepare an experimental or empirical representation of the same effects. Because of time lags due to the long communication line (typically pneumatic or hydraulic) from sensor to controller and other delays in the process, some time will elapse before knowledge of changes in an output process variable reaches the controller. When the controller notes a change, it must compare it with the variable value it desires, compute how much and in what direction the control valve must be repositioned, and then activate this correction in the valve opening. Some time is required, of course, to make these decisions and correct the valve position.
Figure 1 Block Diagram of a Typical Simple, Single Control Loop of a Process Control System. (From Williams and Nof 1992)
Some time will also elapse before the effect of the valve correction on the output variable value can reach the output itself and thus be sensed. Only then will the controller be able to know whether its first correction was too small or too large. At that time it makes a further correction, which will, after a time, cause another output change. The results of this second correction will be observed, a third correction will be made, and so on. This series of measuring, comparing, computing, and correcting actions will go around and around through the controller and through the process in a closed chain of actions until the actual process variable is finally balanced again at the level desired by the operator. Because from time to time there are disturbances and modifications in the desired level of the output, the series of control actions never ceases. This type of control is aptly termed feedback control. Figure 1 shows the direction and path of this closed series of control actions. The closed-loop concept is fundamental to a full understanding of automatic control. Although the preceding example illustrates the basic principles involved, the actual attainment of automatic control of almost any industrial process or other complicated device will usually be much more difficult because of the speed of response, multivariable interaction, nonlinearity, response limitations, or other difficulties that may be present, as well as the much higher accuracy or degree of control that is usually desired beyond that required for the simple process just mentioned. As defined here, automatic process control always implies the use of feedback. This means that the control instrument is continuously monitoring certain output variables of the controlled process, such as a temperature, a pressure, or a composition, and is also comparing this output with some preestablished desired value, which is considered a reference, or a set point, of the controlled variable. An error that is indicated by the comparison is used by the instrument to compute a correction to the setting of the process control valve or other final control element in order to adjust the value of the output variable to its desired level and maintain it there. If the set point is altered, the response of the control system to bring the process to the new operating level is termed that of a servomechanism or self-correcting device. The action of holding the process at a previously established level of operation in the face of external disturbances operating on the process is termed that of a regulator.
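The iterative measure-compare-correct cycle just described can be sketched as a short simulation. This is an illustrative toy, not the process of Figure 1: the pure-integrator process model and the gain value are assumptions chosen only to show the successive corrections shrinking as the output settles at the set point.

```python
# A minimal closed loop: the controller repeatedly measures the output,
# compares it with the set point, and nudges the valve accordingly.
def run_loop(set_point=50.0, steps=80, gain=0.3):
    output = 0.0                        # controlled output variable
    trace = []
    for _ in range(steps):
        error = set_point - output      # measure and compare
        output += gain * error          # correct the valve opening
        trace.append(output)
    return trace

trace = run_loop()
# Each correction is smaller than the last; the output converges to 50.
```

With a proportional gain of 0.3 the error shrinks geometrically (by a factor of 0.7 per cycle in this toy), which mirrors the text's picture of a never-ending chain of progressively finer corrections.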
3.2. Instrumentation of an Automatic Control System
The large number of variables of a typical industrial plant constitute a wide variety of flows, levels, temperatures, compositions, positions, and other parameters to be measured by the sensor elements of the control system. Such devices sense some physical, electrical, or other informational property of the variable under consideration and use it to develop an electrical, mechanical, or pneumatic signal representative of the magnitude of the variable in question. The signal is then acted upon by a transducer to convert it to one of the standard signal levels used in industrial plants (3-15 psi for pneumatic systems and 1-4, 4-20, or 10-50 mA or 0-5 V for electrical systems). Signals may also be digitized at this point if the control system is to be digital. The signals developed by many types of sensors are continuous representations of the sensed variables and as such are called analog signals. When analog signals have been operated upon by an analog-to-digital converter, they become a series of bits, or on-off signals, and are then called digital signals. Several bits must always be considered together in order to represent the converted analog signal properly (typically, 10-12 bits). As stated previously, the resulting sensed variable signal is compared at the controller to a desired level, or set point, for that variable. The set point is established by the plant operator or by an upper-level control system. Any error (difference) between these values is used by the controller to compute the correction to the controller output, which is transmitted to the valve or other actuator of the system's parameters. A typical algorithm by which the controller computes its correction is as follows (Morriss 1995). Suppose a system includes components that convert inputs to outputs according to relationships, called gains, of three types: proportional, derivative, and integral gains. Then the controller output is
Output = KP e(t) + KI ∫ e(t) dt + KD d(e(t)) / dt

where KP, KD, and KI are the proportional, derivative, and integral gains, respectively, of the controller. The error at time t, e(t), is calculated as

e(t) = set point - feedback signal
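As a concrete illustration, the control law above can be sketched in a few lines of code. This is a minimal sketch, not a production controller; the discrete approximations of the integral and derivative terms, and any gain or set-point values shown, are assumptions for illustration only.

```python
class PIDController:
    """Sketch of the PID control law: Output = KP e(t) + KI int(e) dt + KD de/dt."""

    def __init__(self, kp, ki, kd, set_point):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0      # running approximation of the integral term
        self.prev_error = None   # previous e(t), for the derivative term

    def update(self, feedback, dt):
        # e(t) = set point - feedback signal
        error = self.set_point - feedback
        self.integral += error * dt  # rectangular approximation of the integral
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # Output = KP e(t) + KI int(e) dt + KD d(e(t))/dt
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

At each sampling interval, the controller would be called with the latest sensor reading, and its return value would drive the valve or other final control element.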
AUTOMATION TECHNOLOGY
159
3.3. Basic Control Models
3.3.1. Control Modeling
Five types of modeling methodologies have been employed to represent physical components and relationships in the study of control systems:
1. Mathematical equations, in particular differential equations, which are the basis of classical control theory (transfer functions are a common form of these equations)
2. Mathematical equations that are used on state variables of multivariable systems and associated with modern control theory
3. Block diagrams
4. Signal flow graphs
5. Functional analysis representations (data flow diagrams and entity relationships)
Mathematical models are employed when detailed relationships are necessary. To simplify the analysis of mathematical equations, we usually approximate them by linear, ordinary differential equations. For instance, a characteristic differential equation of a control loop model may have the form

α d²x/dt² + β dx/dt + 2x = f(t)

with initial conditions of the system given as

x(0) = X0, x′(0) = V0

where x(t) is a time function of the controlled output variable, its first and second derivatives over time specify the temporal nature of the system, α and β are parameters of the system properties, f(t) specifies the input function, and X0 and V0 are specified constants. Mathematical equations such as this example are developed to describe the performance of a given system. Usually an equation or a transfer function is determined for each system component. Then a model is formulated by appropriately combining the individual components. This process is often simplified by applying Laplace and Fourier transforms. A graph representation by block diagrams (see Figure 2) is usually applied to define the connections between components. Once a mathematical model is formulated, the control system characteristics can be analytically or empirically determined. The basic characteristics that are the object of control system design are:
1. Response time
2. Relative stability
3. Control accuracy
Figure 2 Block Diagram of a Feedback Loop. (From Williams and Nof 1992)
160
TECHNOLOGY
They can be expressed either as functions of frequency, called frequency domain specifications, or as functions of time, called time domain specifications. To develop the specifications, the mathematical equations have to be solved. Modern computer software, such as MATLAB (e.g., Kuo 1995), provides convenient tools for solving the equations.
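In the same spirit, the second-order loop equation above can be solved numerically in only a few lines. The sketch below uses simple explicit Euler integration (dedicated solvers are more accurate), and the values α = 1, β = 2, a unit-step input f(t) = 1, and zero initial conditions X0 = V0 = 0 are illustrative assumptions, not values from the text:

```python
def simulate_loop(alpha=1.0, beta=2.0, f=lambda t: 1.0,
                  x0=0.0, v0=0.0, dt=1e-3, t_end=20.0):
    """Euler integration of alpha*x'' + beta*x' + 2x = f(t), x(0)=x0, x'(0)=v0."""
    x, v, t = x0, v0, 0.0
    while t < t_end:
        a = (f(t) - beta * v - 2.0 * x) / alpha  # solve the equation for x''
        x += v * dt                              # integrate x' = v
        v += a * dt                              # integrate v' = x''
        t += dt
    return x
```

For these assumed values the step response settles at the static gain f/2 = 0.5, which is one of the time-domain characteristics (response time, stability, accuracy) a designer would read off such a solution.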
3.3.2. Control Models
Unlike open-loop control, which basically provides a transfer function from the input signals to the actuators, feedback control systems receive feedback signals from sensors and then compare the signals with the set point. The controller can then drive the plant to the desired set point according to the feedback signal. There are five basic feedback control models (Morriss 1995):
1. On / off control: In on / off control, if e(t) is smaller than 0, the controller may activate the plant; otherwise the controller stays still. Most household temperature thermostats follow this model.
2. Proportional (PE) control: In PE control, the output is proportional to the e(t) value, i.e., output = KP e(t). In PE, the plant responds as soon as the error signal is nonzero. The output will not stop exactly at the set point: as it approaches the set point, e(t) becomes smaller, and eventually the output is too small to overcome the opposing force (e.g., friction). Attempts to reduce this small residual e(t), also called steady state error, by increasing KP only cause more overshoot error.
3. Proportional-integral (PI) control: PI control tries to solve the problem of steady state error. In PI, output = KP e(t) + KI ∫ e(t) dt. As long as a steady state error exists, the integral of the error signal grows and continues to grow. The plant can thus be driven to close the steady state error.
4. Proportional-derivative (PD) control: PD control modifies the rate of response of the feedback control system in order to prevent overshoot. In PD, output = KP e(t) + KD d(e(t)) / dt. When e(t) gets smaller, a negative derivative results; therefore, overshoot is prevented.
5. Proportional-integral-derivative (PID) control: PID control takes advantage of PE, PI, and PD controls by finding the gains (KP, KI, and KD) that balance the proportional response, steady state reset ability, and rate-of-response control, so the plant can be well controlled.
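The difference between models 2 and 3 can be demonstrated with a short simulation. The first-order plant, the constant opposing load standing in for friction, and the gain values below are hypothetical choices for illustration:

```python
def steady_state_error(kp, ki, load=0.5, set_point=1.0, dt=0.01, steps=5000):
    """Simulate a plant dx/dt = u - load under PE (ki=0) or PI control."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        e = set_point - x
        integral += e * dt
        u = kp * e + ki * integral  # PE or PI control output
        x += (u - load) * dt        # plant integrates drive minus opposing load
    return set_point - x            # error remaining after the transient
```

With proportional control alone the error settles at load/KP (here 0.5/2 = 0.25), matching the steady state error described in model 2; adding the integral term drives the remaining error to essentially zero, as model 3 predicts.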
3.4. Advanced Control Models
Based on the control models introduced in Section 3.3, researchers have developed various advanced control models for special needs. Table 1 shows the application domains and examples of the models rather than their complicated theoretical control diagrams. Interested readers can refer to Morriss (1995).
4. TECHNOLOGIES OF ARTIFICIAL INTELLIGENCE
Automation technologies have been endowed with intelligence by the invention of computers and the evolution of artificial intelligence theories. Because of the introduction of artificial intelligence technologies, automated systems can, from the perspective of control, intelligently plan, actuate, and control their operations within a reasonable time limit by handling / sensing much more environmental input information (see the horizontal axis in Figure 3). Meanwhile, artificial intell
Figure 4 Block Diagram of a Knowledge-Based Control System. (From Williams and Nof 1992)
The numerous industrial engineering applications of control models in computer information systems can be classified into two types: (1) development of decision support systems, information systems that provide the information and decisions to control operations, and (2) maintenance of internal control over the quality and security of the information itself. Because information systems are usually complex, graphic models are typically used. Any of the control models can essentially incorporate an information system, as indicated in some of the examples given. The purpose of an information system is to provide useful, high-quality information; therefore, it can be used for sound planning of operations and preparation of realistic standards of performance. Gathering, classifying, sorting, and analyzing large amounts of data can provide timely and accurate measurement of actual performance. This can be compared to reference information and standards that are also stored in the information system in order to immediately establish discrepancies and initiate corrective actions. Thus, an information system can improve the control operation in all its major functions by measuring and collecting actual performance measures, analyzing and comparing the actual to the desired set points, and directing or actuating corrective adjustments. An increasing number of knowledge-based decision support and control systems have been applied since the mid-1980s. Typical control functions that have been implemented are:
Scheduling
Diagnosis
Alarm interpretation
Process control
Planning
Monitoring
4.2. Artificial Neural Networks
The powerful reasoning and inference capabilities of artificial neural networks (ANN) in control are demonstrated in the areas of
Adaptive control and learning
Pattern recognition / classification
Prediction
To apply ANN in control, the user should first answer the following questions:
1. If training data are available, an ANN paradigm with supervised learning may be applied; otherwise, an ANN paradigm with unsupervised learning is applied.
2. Select a suitable paradigm, the number of network layers, and the number of neurons in each layer.
3. Determine the initial weights and parameters for the ANN paradigm.
A widely applied inference paradigm, back propagation (BP), is useful with ANN in control. There are two stages in applying BP to control: the training stage and the control stage (Figure 5).
4.2.1. Training Stage
1. Prepare a set of training data. The training data consist of many pairs of data in an input-output format.
2. Determine the number of layers, the number of neurons in each layer, the initial weights between the neurons, and the parameters.
3. Input the training data to the untrained ANN.
4. After the training, the trained ANN provides the associative memory for linking inputs and outputs.
4.2.2. Control Stage
Input new data to the trained ANN to obtain the control decision. For instance, given a currently observed set of physical parameter values, such as the noise level and vibration measured on a machine tool, the system automatically adapts to a newly calculated motor speed. The recommended adjustment is based on the previous training of the ANN-based control. When the controller continues to update its training ANN over time, we have what is called learning control. Other application examples of neural networks in automated systems are as follows:
Object recognition based on robot vision
Manufacturing scheduling
Chemical process control
The ANN can also be trained while it is transforming the inputs on line, to adapt itself to the environment. Detailed knowledge regarding the architectures of ANN, initial weights, and parameters can be found in Dayhoff (1990), Freeman and Skapura (1991), Fuller (1995), and Lin and Lee (1996).
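The two stages above can be sketched with a minimal 2-2-1 back-propagation network in plain Python. The network size, learning rate, random seed, and the AND-gate training set are illustrative assumptions, not choices prescribed by the text:

```python
import math
import random

def train_bp(data, epochs=5000, lr=0.5, seed=1):
    """Training stage: fit a tiny 2-2-1 sigmoid network by back propagation."""
    random.seed(seed)
    # Step 2: layers, neurons, and initial weights (each row: w0, w1, bias).
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    w_o = [random.uniform(-1, 1) for _ in range(3)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        o = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        return h, o

    losses = []
    for _ in range(epochs):              # Step 3: feed the training pairs
        total = 0.0
        for x, t in data:
            h, o = forward(x)
            total += (t - o) ** 2
            # Back-propagate the squared error through the sigmoid units.
            d_o = (t - o) * o * (1 - o)
            d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
            for i in range(2):
                w_o[i] += lr * d_o * h[i]
            w_o[2] += lr * d_o
            for i in range(2):
                for j in range(2):
                    w_h[i][j] += lr * d_h[i] * x[j]
                w_h[i][2] += lr * d_h[i]
        losses.append(total)
    # Step 4: the trained weights act as the associative memory.
    return losses, forward
```

In the control stage, `forward` is simply called with new input data (e.g., normalized noise and vibration readings) to obtain the control decision.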
4.3. Fuzzy Logic
Figure 6 shows a structure for applying fuzzy logic in control. First, two types of inputs must be obtained: numerical inputs and human knowledge or rules extracted from data (i.e., fuzzy rules). Then the numerical inputs must be fuzzified into fuzzy numbers. The fuzzy rules consist of the fuzzy membership functions (knowledge model), or so-called fuzzy associative memories (FAMs). Then the
Figure 5 Structure of Applying Back-Propagation ANN in Control.
Figure 6 The Structure of Applying Fuzzy Logic in Control.
FAMs map fuzzy sets (inputs) to fuzzy sets (outputs). The output fuzzy sets should be defuzzified into numerical values to control the plant. Because of the powerful ability of fuzzy sets to describe linguistic and qualitative system behavior and imprecise and / or uncertain information, many industrial process behaviors and control laws can be modeled by fuzzy logic-based approaches. Fuzzy logic has been applied in a wide range of automated systems, including:
Chemical process control
Autofocusing mechanisms on camera and camcorder lenses
Temperature and humidity control for buildings, processes, and machines
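The fuzzify-FAM-defuzzify pipeline above can be sketched for a toy temperature loop. The membership ranges, the two rules, and the singleton output values below are invented purely for illustration:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to b, falling from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_control(error):
    """Map a temperature error (set point - reading) to a heater drive in [0, 1]."""
    # 1. Fuzzification: membership degrees in "cold" and "hot" (assumed ranges).
    cold = tri(error, 0.0, 5.0, 10.0)    # error > 0: room below set point
    hot = tri(error, -10.0, -5.0, 0.0)   # error < 0: room above set point
    # 2. FAM rules: IF cold THEN heater HIGH (1.0); IF hot THEN heater LOW (0.0).
    # 3. Defuzzification: weighted average (centroid) of the singleton outputs.
    if cold + hot == 0.0:
        return 0.5                       # no rule fires: hold a mid output
    return (cold * 1.0 + hot * 0.0) / (cold + hot)
```

A real controller would use more membership functions and rules per input, but the three stages (fuzzify, apply the FAM, defuzzify) are the same.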
4.4. Genetic Algorithms
Genetic algorithms (GAs), also referred to as evolutionary computation, are highly suitable for certain types of problems in the areas of optimization, product design, and monitoring of industrial systems. A GA is an automatically improving (evolving) algorithm. First the user must encode solutions of a problem into the form of chromosomes and define an evaluation function that returns a measurement of the cost value of any chromosome in the context of the problem. A GA consists of the following steps:
1. Establish a base population of chromosomes.
2. Determine the fitness value of each chromosome.
3. Create new chromosomes by mating current chromosomes; apply mutation and recombination as the parent chromosomes mate.
4. Delete undesirable members of the population.
5. Insert the new chromosomes into the population to form a new population pool.
GAs are useful for solving large-scale planning and control problems. Several cases indicate that GAs can effectively find an acceptable solution for complex product design, production scheduling, and plant layout planning.
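The five steps can be sketched on a deliberately simple bit-string problem. The population size, rates, seed, and the bit-counting fitness function are illustrative choices, not parameters from the text:

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=20, generations=60, seed=3):
    """Sketch of the five GA steps for a bit-string chromosome encoding."""
    random.seed(seed)
    # 1. Establish a base population of chromosomes.
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # 2. Determine the fitness value of each chromosome.
        pop.sort(key=fitness, reverse=True)
        # 3. Create new chromosomes by mating; apply recombination and mutation.
        children = []
        while len(children) < pop_size // 2:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # mate the fitter half
            cut = random.randint(1, n_bits - 1)
            child = p1[:cut] + p2[cut:]                     # one-point recombination
            if random.random() < 0.1:                       # occasional mutation
                i = random.randrange(n_bits)
                child[i] = 1 - child[i]
            children.append(child)
        # 4. Delete undesirable (least-fit) members; 5. insert the new chromosomes.
        pop = pop[:pop_size - len(children)] + children
    return max(pop, key=fitness)

# Toy evaluation function: fitness = number of 1 bits in the chromosome.
best = genetic_algorithm(sum)
```

For a real planning or layout problem, only the chromosome encoding and the evaluation function change; the evolutionary loop stays the same.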
4.5. Hybrid Intelligent Control Models
Intelligent control may be designed in a format combining the techniques introduced above. For example, fuzzy neural networks use the learning and adaptive capability of neural networks to improve the fuzzy system's associative memory. Genetic algorithms can also be applied to find the optimal structure and parameters for neural networks and the membership functions for fuzzy logic systems. In addition, some techniques may be applicable in more than one area. For example, the techniques of knowledge acquisition in KBSs and fuzzy logic systems are similar.
5. INTEGRATION TECHNOLOGY
Recent communication technologies have enabled another revolution in automation technologies: stand-alone automated systems are integrated via communication technologies. Integration can be classified into three categories:
1. Networking: the ability to communicate.
2. Coordinating: the ability to synchronize the processes of distributed automated systems. The coordination is realized by either a controller or an arbitrary rule (protocol).
3. Integration: distributed automated systems are able to cooperate or collaborate with other automated systems to fulfill a global goal while satisfying their individual goals. The cooperation / collaboration is normally realized via a protocol that is agreed upon by the distributed automated systems. However, the agreement is formed through the intelligence of the automated systems in protocol selection and adjustment.
In this section, automated technologies are introduced based on the above categories. However, truly integrated systems (technologies) are still under development and are mostly designed as agent-based systems, described in Section 6. Only technologies from the first two categories are introduced in this section.
5.1. Networking Technologies
To connect automated systems, the simplest and most economical method is to wire the switches on the automated systems to the I / O modules of a programmable logic controller (PLC). Ladder diagrams that represent the control logic of the automated systems are popularly applied in PLCs. However, as automated systems become remotely distributed, more intelligent, and diversified in the communication standards they are built on, and as the coordination decisions become more complex, the simple messages handled by a PLC-based network are clearly not enough for the control of the automated systems. Hence, some fundamental technologies, such as the field bus (Mahalik and Moore 1997) and local area networks (LANs), and some advanced communication standards, such as LonWorks, Profibus, Manufacturing Automation Protocol (MAP), Communications Network for Manufacturing Applications (CNMA), and the SEMI* Equipment Communication Standard (SECS), have been developed. In a field bus, automated devices are interconnected. Usually the amount of data transmitted on the bus is not large. To deliver messages among equipment in a timely manner on a field bus, the seven layers of the open system interconnection (OSI) model are simplified into three layers: the physical layer, data link layer, and application layer. Unlike a field bus, which usually handles the connections among devices, office activities are automated and connected by a LAN. Usually more than one file server is connected in a LAN for data storage, retrieval, and sharing among the connected personal computers. Three technological issues have to be designed / defined in a LAN: topology, media, and access methods (Cohen and Apte 1997). Among the advanced communication standards, MAP and CNMA are two technologies that are well known and widely applied. Both technologies are based on the Manufacturing Message Specification (MMS) (SISCO 1995), which was developed by the ISO Industrial Automation Technical Committee Number 184. MMS provides a definition to specify automated equipment's external behaviors (Shanmugham et al. 1995). Through the specification, users can control the automated equipment with little knowledge about the internal message conversion within MMS and the equipment. SECS is currently a popular communication standard in the semiconductor industry (SEMI 1997). The standard was developed by SEMI and is based on two standards: SECS-I (SEMI Equipment Communications Standard Part 1, Message Transfer) and SECS-II (SEMI Equipment Communications Standard Part 2, Message Content). The relationship between SECS-I and SECS-II, as shown in Figure 7, is that SECS-I transmits the message that is defined by SECS-II over RS-232. SECS-II's message
Figure 7 Basic Structure of SECS.
* SEMI stands for Semiconductor Equipment and Materials International.
is supplemented with control information by SECS-I so that the transmitted message conforms to the RS-232 format for message delivery. SECS-II, for its part, provides a set of interequipment communication standards for various situations. Hence, engineers only need to know and follow SECS-II to control the connected automated equipment, rather than taking time to define the detailed message conversion.
5.2. Object Orientation and Petri Net Techniques
Object orientation and Petri nets are automation techniques at the modeling and analysis levels. Automated systems and their associated information and resources can be modeled by object models. The coordination and communication among the automated systems can then be unified with the message passing among the objects. However, the complex message passing that is used to coordinate the behaviors of the automated systems relies on the Petri net technique. Their graphical and mathematically analyzable characteristics make Petri nets a very suitable tool for synchronizing behaviors and preventing deadlocks among automated systems. Combinations of both techniques have been developed and applied in the controllers of flexible manufacturing systems (Wang 1996; Wang and Wu 1998).
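A minimal sketch of the token-firing semantics that underlies such synchronization is shown below, using an invented example of two machines sharing one robot; the place and transition names are hypothetical:

```python
class PetriNet:
    """Places hold tokens; a transition fires only when all input places are marked."""

    def __init__(self, marking):
        self.marking = dict(marking)  # place name -> token count
        self.transitions = {}         # transition name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            return False
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1   # consume tokens from input places
        for p in outputs:
            self.marking[p] += 1   # deposit tokens in output places
        return True

# Two machines share one robot: a single token in "robot_free" serializes access.
net = PetriNet({"m1_idle": 1, "m2_idle": 1, "robot_free": 1,
                "m1_busy": 0, "m2_busy": 0})
net.add_transition("start_m1", ["m1_idle", "robot_free"], ["m1_busy"])
net.add_transition("end_m1", ["m1_busy"], ["m1_idle", "robot_free"])
net.add_transition("start_m2", ["m2_idle", "robot_free"], ["m2_busy"])
net.add_transition("end_m2", ["m2_busy"], ["m2_idle", "robot_free"])
```

Because the robot token is returned by every `end_*` transition, the net itself enforces mutual exclusion, which is exactly the kind of property Petri net analysis can verify formally.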
5.3. Distributed Control vs. Central Control
The rapid development of microprocessor technology has made distributed control possible and attractive. The use of reliable communication between a large number of individual controllers, each responsible for its own tasks rather than for the complete operation, improves the response of the total system. We can take PLC-based control as a typical example of a central control system and LonWorks, developed by Echelon, as an example of a distributed control system. In LonWorks, each automated device is controlled by a control module (LTM-10) (Figure 8). The control modules are connected on a LonTalk network that provides an ISO / OSI-compatible protocol for communication.
Figure 8 Applying LonWorks to Develop Virtual Machines.
Figure 9 An Example of a Robot Emulator. (From Witzerman and Nof 1995)
Usually the control modules are loaded with Neuron C programs from a host computer that has a PCNSS network card for network management and Neuron C programming. Hence, in running mode the host computer is not necessary, and the control modules can work in a purely distributed environment. Distributed automated systems have the following advantages over centralized automated systems:
Higher system reliability
Better response to local demands
Lower cost in revising the system control programs when automated equipment is added to or deleted from the system
5.4. Robot Simulator / Emulator
In recent years, powerful robot simulators / emulators have been developed by several companies (Nof 1999). Examples include ROBCAD by Tecnomatix and RAPID by Adept Technologies. With such highly interactive graphic software, one can program, model, and analyze both the robots and their integration into a production facility. Furthermore, with the geometric robot models, the emulation can also check for physical reachability and identify potential collisions. Another important feature of robot simulators / emulators is off-line programming of robotic equipment, which allows designers to compare alternative virtual designs before the program is transferred to actual robots (see Figure 9).
6. EMERGING TRENDS
Automation technology has reached a new era. An automation system is automated not only to reach the set point but also to situate itself intelligently in its complex environment. Individual automated systems are also networked to accomplish collaborative tasks. However, networking automated systems is not an easy task. It involves technologies from the physical levels, which handle the interface and message conversion between automation systems, to the application level, which handles mostly the social interactions between the automation systems. Three typical ongoing research directions are described next to present the trends of automation technology.
6.1. Virtual Machines
Traditionally, the hierarchical structure of the computer control system for a fully automated manufacturing system can be divided into four levels (Figure 10). The supervisory control is responsible for managing the direct digital control by sending control commands, downloading process data, monitoring the process, and handling exception events. From the perspective of message transmission, the hierarchy in Figure 10 can be classified into three standard levels, as shown in Figure 11:
1. Production message standard: a standard for obtaining / sending production information from / to low-level computers. The production information can be a routing, real-time statistics, etc. Usually the information is not directly related to the equipment control.
2. Control message standard: a standard for controlling the equipment logically. The control message includes commands, the parameters associated with the commands, and data. SECS-II is a standard that can fulfill such a need (Elnakhal and Rzehak 1993).
Figure 10 The Hierarchy Structure of the Computer Control System for a Fully Automated Industrial Plant. (From Williams and Nof 1992)
Figure 11 Three Layers of Message Transmission. (The figure contrasts the production message standard, the control message standard, and the virtual machine level, with functions such as command execution, shop-floor monitoring, data collection, and error recovery, and examples such as SECS-II, MMS, MMS VMD, and LonWorks.)
3. Virtual machine: a mechanism that converts the logical commands based on the control message standard, e.g., SECS-II, into a command format that can be accepted by a physical machine. Figure 12 shows a virtual machine laid between a host that delivers SECS-II commands and a real device that receives and executes the commands in its own format. A virtual machine is therefore responsible for bridging the message format gap.
The development of virtual machines reduces the problems of incompatibility in an automated system, especially when the automated equipment follows different communication standards and protocols. It also enables the high-level controller to focus on its control activities, such as sequencing and scheduling, rather than on detailed command incompatibility problems. Further information on virtual machines and automation can be found in Burdea and Coiffet (1999).
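The bridging role described above can be sketched as a thin translation layer. The logical command name START_JOB and both native command formats below are invented for illustration and are not real SECS-II or device syntax:

```python
class VirtualMachine:
    """Maps logical (standard-level) commands to a device's native command format."""

    def __init__(self, translator):
        self.translator = translator  # logical command name -> native command builder

    def execute(self, command, **params):
        builder = self.translator.get(command)
        if builder is None:
            raise ValueError("unsupported logical command: " + command)
        return builder(params)        # native command string for the physical device

# Two devices accept the same logical "START_JOB" in different native formats:
cnc_vm = VirtualMachine({"START_JOB": lambda p: "G-RUN " + p["program"]})
plc_vm = VirtualMachine({"START_JOB": lambda p: "EXEC:" + p["program"] + ";"})
```

The host issues one logical command; each virtual machine hides its device's syntax, which is the incompatibility-reducing effect the text describes.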
6.2. Tool Perspective Environment
Modern computer and communication technologies not only improve the quality, accuracy, and timeliness of design and decisions but also provide tools for automating the design and decision-making processes. Under such a tool perspective, human experts focus on developing tools; users can then apply the tools to model and resolve their automation problems (see the right-side diagram in Figure 13). In contrast with traditional approaches, difficulties occur only during tool development. In traditional approaches, experts are the bottleneck in design projects. They must understand not only the problems but also the applications and limitations in practice, and the costs of looking for and working with the experts are the main expense. Under the tool perspective, the cost of training the users becomes the major expense. However, frequent environment changes usually result in the design of a flexible system to respond to repeated needs to modify and evaluate the existing system. Modification and evaluation needs can be fulfilled by rapidly applying computerized tools, which provide relatively high flexibility in design. Some researchers have noted the importance of modeling manufacturing systems with the tool perspective. A sample survey is presented in Table 2. The three modeling concerns ((1) conflicts among designers, (2) constraints in the physical environment, and (3) the information flows in manufacturing systems) are part of the criteria in the survey table. It is found
Figure 12 Role of Virtual Machine in Automation Applications.
Figure 13 A Comparison of Traditional Approaches and Tool Perspective. (From Huang and Nof 1998)
These information items are supported by the modeling functions of FDL (Table 3), which include manipulation input functions (e.g., Facility, Part, Detach, Material Flow), utility input functions (e.g., Print, Save), and an evaluation function. An example of modeling an aisle by an FDL function is the Redefine syntax, whose aisle record carries the aspects pertaining to an aisle:
action  char (A, C, D)  add, change, delete record (mandatory)
path    char[16]        path name (ROBCAD name)
parent  char[16]        parent name from device table
size    char            HUMAN, AGV, SMALL FL, or LARGE FL
The model in FDL then becomes a list of syntax statements. The list of syntax triggers the ROBCAD program to construct a 3D emulation model (see the model inside the window of Figure 14). Figure 14 is an example of FDL in a ROBCAD system. The upper left window is used to input the information of the system, including the geometric information of facilities, material flow information, and material flow control information. The lower left window is used to show the output information (e.g., a collision occurring to the robot during material handling). In addition, FDL provides a reconciliation function (see the right function menu in Figure 14). Therefore, all the control and physical conflicts in the manufacturing system can be resolved according to the built-in algorithm. The reconciliation function may change the positions of robots or machines to avoid collisions or the unreachability of material handling. Recently, FDL / CR has been developed to provide knowledge-based computer support for conflict resolution among distributed designers. Because FDL provides such direct syntax specifications, designers can use the syntax to model and develop their subsystems. When the designers are in different locations, their subsystems can submit input to the host ROBCAD system to construct the entire system and then use the reconciliation function to adjust the subsystems if conflicts occur. Therefore, the cooperation of designers in different locations on different subsystems can be achieved in FDL. In the FDL working environment, two types of information are exchanged among the designers: (1) the design based on FDL syntax and (2) the operations of the facilities described by a task description language (TDL). TDL represents the control functions of the facilities. Thus, not only the models of the material flow but also the control information are presented and shared among the designers.
6.2.2. Concurrent Flexible Specifications (CFS)
By specification, engineers describe the way a system should be constructed. Concurrent, flexible specifications for manufacturing systems, in other words, are provided by several engineers to model manufacturing systems with flexibility to design changes. In a manufacturing system, there is a physical flow of raw materials, parts, and subassemblies, together with an information and control flow consisting of status (system state) and control signals. The control and status signals govern the behavior of the physical flows. In order to simultaneously achieve optimal capacity loading with maintained or increased flexibility, an exact definition of material
Figure 14 An Example of Facility Description Language in ROBCAD System.
repeatedly. Furthermore, many systems are made up of subsystems, any of which ma
y be active or nonactive at a particular time during system operation. For this
reason, the treatment of timing is an essential element of the speci cation. To in
tegrate the above requirements, tools that can incorporate material ows, informat
ion ows, and control signals are required. Therefore, different representations o
f speci cations through different tools should not be independent or mutually excl
usive but should support each other by forming a concurrent, comprehensive specification of the system. For manufacturing systems, the specification of functional and real-time logic is important because these attributes describe what and how the system is executing. This information is necessary to determine how the processes are to be implemented with physical equipment. By utilizing two complementary representations, both these aspects of system behavior can be specified concurrently. Data / control flow diagrams (DFD / CFDs), which are enhanced with real-time extensions when used in conjunction with Petri nets, provide a suitable framework for concurrent specification of functional and real-time state logic. The main reason for this is the ability to maintain identical structural decompositions in both representations at all levels of detail in the specification. This is accomplished by maintaining identical partitioning of processes in both specifications. With DFD / CFDs, partitioning is accomplished by hierarchical decomposition of bubbles that represent processes or tasks. An identical hierarchical partitioning can be created with Petri nets by representing processes with subnets at higher levels and then showing the detailed, internal net at lower levels of detail. The DFD / CFDs provide a process definition model of the system, while the Petri nets provide a process analysis model for the study of real-time state behavior. Even though object-oriented modeling is becoming a more popular technique of system design, data / control flow diagrams are still an acceptable technique in our case study. Researchers have proved the possibility of transforming data flow diagrams to object models (Alabiso 1988). Both these techniques are realized by two software packages: Teamwork and P-NUT. Teamwork is a computer-aided software engineering (CASE) tool family that automates standard structured methodologies using interactive computer graphics and multiuser workstation power. P-NUT is a set of tools developed by the Distributed Systems Project in the Information and Computer Science
Department of the University of California at Irvine (Razouk 1987) to assist engineers in applying various Petri net-based analysis methods.
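The hierarchical partitioning described above, in which a process appears as a single transition at a high level and as a detailed subnet at a lower level, can be sketched in code. The following is a minimal illustration only; the class and names are hypothetical and do not reflect the Teamwork or P-NUT tools.

```python
# Minimal Petri net sketch: a transition fires when every input place
# holds at least one token; firing moves tokens from inputs to outputs.
class PetriNet:
    def __init__(self):
        self.marking = {}            # place name -> token count
        self.transitions = {}        # transition name -> (inputs, outputs)

    def add_place(self, name, tokens=0):
        self.marking[name] = tokens

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] > 0 for p in inputs)

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        inputs, outputs = self.transitions[name]
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Hierarchical decomposition: the high-level net treats "machining" as one
# transition; the detailed net refines it into load -> cut -> unload,
# mirroring the decomposition of a DFD / CFD bubble into sub-bubbles.
high = PetriNet()
high.add_place("part_ready", tokens=1)
high.add_place("part_done")
high.add_transition("machining", ["part_ready"], ["part_done"])

detail = PetriNet()
detail.add_place("part_ready", tokens=1)
detail.add_place("loaded")
detail.add_place("cut")
detail.add_place("part_done")
detail.add_transition("load", ["part_ready"], ["loaded"])
detail.add_transition("cut_op", ["loaded"], ["cut"])
detail.add_transition("unload", ["cut"], ["part_done"])

high.fire("machining")
for t in ("load", "cut_op", "unload"):
    detail.fire(t)
# Both nets end with one token in part_done: the interface behavior is
# identical at both levels of detail, only the internal net differs.
```

Both nets share the same boundary places (part_ready, part_done), which is what allows the two levels of detail to be kept structurally consistent.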
6.3.
Agent-Based Control Systems
Early agents were defined for distributed artificial intelligence, which includes two main areas: distributed problem solving (DPS) and multi-agent systems (MASs). DPS focuses on centrally designed systems solving global problems and applying built-in cooperation strategies. In contrast, MAS deals with heterogeneous agents whose goal is to plan their utility-maximizing coexistence. Examples of DPS are mobile robots exploring uncertain terrain and task scheduling in manufacturing facilities. Both can be operated with centralized programs, but in relatively more distributed environments they are usually more effective with autonomous programs, or agents. Examples of MAS are collaborative product design and group behavior of several mobile robots. Recent research has explored the concept of autonomous agents in control. An agent is a computing system that can autonomously react and reflex to the impacts from the environment in accordance with its given goal(s). An agent reacts to the environment by executing some preloaded program. Meanwhile, an autonomous adjustment mechanism provides a threshold: when the environmental impacts are higher than the threshold, the agent reflexes; otherwise it remains intact. An agent may seek collaboration through communicating with other agents. The communication among agents is regulated by protocols (structures of dialogue) to enhance the effectiveness and efficiency of communication. An important difference between autonomous agents and other techniques is that an autonomous agent evaluates the rules that it will perform. It may even automatically change its goal to keep itself alive in a harsh environment. Autonomous agents have been applied in many control systems, including air traffic control, manufacturing process control, and patient monitoring. Usually an agent functions not alone but as a member of a group of agents or an agent network. The interaction and communication among agents can be explained by the analogy of organizational communication. An organization is an identifiable social system pursuing multiple objectives through the coordinated activities and relations among members and objects. Such a social system is open ended and depends for its effectiveness and survival on other individuals and subsystems in the society of all related organizations and individuals. (This is actually similar for both human societies and agent societies.) Following this analogy, three characteristics of an organization and of an agent network can be observed (Weick 1990):

1. Entities and organization
2. Goals and coordinated activities
3. Adaptability and survivability of the organization

Five motivations have been observed for organizational and agent network communication (Jablin 1990):

1. Generate and obtain information
2. Process and integrate information
3. Share information needed for the coordination of interdependent organizational tasks
4. Disseminate decisions
5. Reinforce a group's perspective or consensus

These five motivations can serve as a checklist for developing protocols. One of the most influential factors affecting interpersonal or interagent communication patterns among group members is the characteristic of the task on which they are working. As task certainty increases, the group coordinates itself more through formal rules and plans than through individualized communication modes. Therefore, the interacting behaviors and information exchanges among agents have to follow interaction and communication protocols. Although different agent applications will require different agent designs, five general areas have to be addressed:

1. Goal identification and task assignment
2. Distribution of knowledge
3. Organization of the agents
4. Coordination mechanism and protocols
5. Learning and adaptive schemes
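The reflex behavior described above, in which an agent executes its preloaded program only when an environmental impact exceeds an adjustable threshold, can be sketched roughly as follows. The names and the particular adjustment rule are illustrative assumptions, not taken from any cited agent system.

```python
class Agent:
    """Toy autonomous agent: it ignores impacts below its threshold,
    reacts (reflexes) to impacts above it, and autonomously drifts the
    threshold toward the level of recent impacts, so that a harsh
    environment gradually raises its tolerance."""

    def __init__(self, threshold=5.0, adapt_rate=0.1):
        self.threshold = threshold
        self.adapt_rate = adapt_rate
        self.reactions = []          # record of impacts that triggered a reflex

    def sense(self, impact):
        if impact > self.threshold:
            self.react(impact)
        # autonomous adjustment mechanism: move the threshold a small
        # step toward the observed impact level
        self.threshold += self.adapt_rate * (impact - self.threshold)

    def react(self, impact):
        # in a real controller this would execute the preloaded program
        self.reactions.append(impact)

agent = Agent(threshold=5.0)
for impact in [1.0, 2.0, 9.0, 3.0]:
    agent.sense(impact)
# Only the impact of 9.0 exceeded the threshold, so exactly one
# reflex fired, while the threshold itself has adapted.
```

In a multi-agent setting, the react step would typically also send protocol messages to peer agents rather than act purely locally.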
Research into intelligent, collaborative agents is extremely active and in its p
reliminary stages (Nof 1999; Huang and Nof 2000). While the best-known applicati
ons have been in Internet search and remote mobile robot navigation, emerging ex
amples combine agents through computer networks with remote monitoring for secur
ity, diagnostics, maintenance and repair, and remote manipulation of robotic equ
ipment. Emerging agent applications will soon revolutionize computer and communi
cation usefulness. Interaction and communication with and among intelligent tool
s, home appliances, entertainment systems, and highly reliable, safe mobile serv
ice robots will change the nature of manufacturing, services, health care, food
delivery, transportation, and virtually all equipment-related activities.
7.
CONCLUSION
This chapter discusses automation technology at various levels:

1. Single process: automatic control theory and technology
2. Single machine: artificial intelligence and knowledge-based technology
3. Distributed machines and distributed systems: integration theory and technology

Additionally, the trends in future automation technology are classified into another three levels:

1. Machine integration: virtual machines
2. Human–machine integration: tool-oriented technologies
3. Machine autonomy: agent-based technologies

In our postulation, how to automate machines in terms of operations will soon be a routine problem because of the technological maturity of actuators, sensors, and controllers. The application of automation technology will then focus on the issues of intelligence, integration, and autonomy. Another emerging trend involves the incorporation of micro-electromechanical systems (MEMS) as sensors and controllers of automation at the microscale. However, education and training of workers who interact with intelligent and autonomous machines may be another important research issue in the future.
Acknowledgments
The authors would like to thank Professor Ted J. Williams of Purdue University a
nd Professor Li-Chih Wang, Director of the Automation Laboratory, Tunghai Univer
sity, Taiwan, for their valuable contributions to this chapter.
REFERENCES
Alabiso, B. (1988), "Transformation of Data Flow Diagram Analysis Models to Object-Oriented Design," in OOPSLA '88: Object-Oriented Programming Systems, Languages and Applications Conference Proceedings (San Diego, CA, September 25–30, 1988), N. Meyrowitz, Ed., Association for Computing Machinery, New York, pp. 335–353.
Burdea, G. C., and Coiffet, P. (1999), "Virtual Reality and Robotics," in Handbook of Industrial Robotics, 2nd Ed., S. Y. Nof, Ed., John Wiley & Sons, New York, pp. 325–333.
Chadha, B., Fulton, R. E., and Calhoun, J. C. (1994), "Design and Implementation of a CIM Information System," Engineering with Computers, Vol. 10, No. 1, pp. 1–10.
Cohen, A., and Apte, U. (1997), Manufacturing Automation, McGraw-Hill, New York.
Csurgai, G., Kovacs, V., and Laufer, J. (1986), "A Generalized Model for Control and Supervision of Unmanned Manufacturing Cells," in Software for Discrete Manufacturing: Proceedings of the 6th International IFIP / IFAC Conference on Software for Discrete Manufacturing, PROLAMAT 85 (Paris, France, June 11–13, 1985), J. P. Crestin and J. F. McWaters, Eds., North-Holland, Amsterdam, pp. 103–112.
Dayhoff, J. E. (1990), Neural Network Architecture, Van Nostrand Reinhold, New York.
Elnakhal, A. E., and Rzehak, H. (1993), "Design and Performance Evaluation of Real Time Communication Architectures," IEEE Transactions on Industrial Electronics, Vol. 40, No. 4, pp. 404–411.
Freeman, J. A., and Skapura, D. M. (1991), Neural Networks: Algorithms, Applications, and Programming Techniques, Addison-Wesley, Reading, MA.
Fuller, R. (1995), Neural Fuzzy Systems, http://www.abo.fi/~rfuller/nfs.html.
Furtado, G. P., and Nof, S. Y. (1995), "Concurrent Flexible Specifications of Functional and Real-Time Logic in Manufacturing," Research Memorandum No. 95-1, School of Industrial Engineering, Purdue University.
Huang, C. Y., and Nof, S. Y. (1998), "Development of Integrated Models for Material Flow Design and Control – A Tool Perspective," Robots and CIM, Vol. 14, pp. 441–454.
Huang, C. Y., and Nof, S. Y. (1999), "Enterprise Agility: A View from the PRISM Lab," International Journal of Agile Management Systems, Vol. 1, No. 1, pp. 51–59.
Huang, C. Y., and Nof, S. Y. (2000), "Formation of Autonomous Agent Networks for Manufacturing Systems," International Journal of Production Research, Vol. 38, No. 3, pp. 607–624.
Jablin, F. M. (1990), "Task / Work Relationships: A Life-Span Perspective," in Foundations of Organizational Communication: A Reader, S. R. Corman, S. P. Banks, C. R. Bantz, M. Mayer, and M. E. Mayer, Eds., Longman, New York, pp. 171–196.
Kuo, B. C. (1995), Automatic Control Systems, 7th Ed., John Wiley & Sons, New York.
Lara, M. A., Witzerman, J. P., and Nof, S. Y. (2000), "Facility Description Language for Integrating Distributed Designs," International Journal of Production Research, Vol. 38, No. 11, pp. 2471–2488.
Lin, C.-T., and Lee, G. (1996), Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems, Prentice Hall, Upper Saddle River, NJ.
Mahalik, N. P., and Moore, P. R. (1997), "Fieldbus Technology Based, Distributed Control in Process Industries: A Case Study with LonWorks Technology," Integrated Manufacturing Systems, Vol. 8, Nos. 3–4, pp. 231–243.
Morriss, S. B. (1995), Automated Manufacturing Systems: Actuators, Controls, Sensors, and Robotics, McGraw-Hill, New York.
Nadoli, G., and Rangaswami, M. (1993), "An Integrated Modeling Methodology for Material Handling Systems Design," in Proceedings of the 1993 Winter Simulation Conference (Los Angeles, December 12–15, 1993), G. W. Evans, Ed., Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 785–789.
Nof, S. Y., Ed. (1999), Handbook of Industrial Robotics, 2nd Ed., John Wiley & Sons, New York.
Prakash, A., and Chen, M. (1995), "A Simulation Study of Flexible Manufacturing Systems," Computers and Industrial Engineering, Vol. 28, No. 1, pp. 191–199.
Razouk, R. R. (1987), "A Guided Tour of P-NUT (Release 2.2)," ICS-TR-86-25, Department of Information and Computer Science, University of California, Irvine.
Semiconductor Equipment and Materials International (SEMI) (1997), SEMI International Standards, SEMI, Mt. View, CA.
Shanmugham, S. G., Beaumariage, T. G., Roberts, C. A., and Rollier, D. A. (1995), "Manufacturing Communication: A Review of the MMS Approach," Computers and Industrial Engineering, Vol. 28, No. 1, pp. 1–21.
SISCO Inc. (1995), Overview and Introduction to the Manufacturing Message Specification (MMS), Revision 2.
Sly, D. P. (1995), "Plant Design for Efficiency Using AutoCAD and FactoryFLOW," in Proceedings of the 1995 Winter Simulation Conference (Arlington, VA, December 3–6, 1995), C. Alexopoulos, K. Kang, W. R. Lilegdon, and D. Goldsman, Eds., Society for Computer Simulation International, San Diego, CA, pp. 437–444.
Tecnomatix Technologies (1989), ROBCAD TDL Reference Manual, Version 2.2.
Wang, L.-C. (1996), "Object-Oriented Petri Nets for Modelling and Analysis of Automated Manufacturing Systems," Computer Integrated Manufacturing Systems, Vol. 9, No. 2, pp. 111–125.
Wang, L.-C., and Wu, S.-Y. (1998), "Modeling with Colored Timed Object-Oriented Petri Nets for Automated Manufacturing Systems," Computers and Industrial Engineering, Vol. 34, No. 2, pp. 463–480.
Weick, K. (1990), "An Introduction to Organization," in Foundations of Organizational Communication: A Reader, S. R. Corman, S. P. Banks, C. R. Bantz, M. Mayer, and M. E. Mayer, Eds., Longman, New York, pp. 124–133.
Williams, T. J., and Nof, S. Y. (1992), "Control Models," in Handbook of Industrial Engineering, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 211–238.
Witzerman, J. P., and Nof, S. Y. (1995), "Facility Description Language," in Proceedings of the 4th Industrial Engineering Research Conference (Nashville, TN, May 24–25, 1995), B. Schmeiser and R. Uzsoy, Eds., Institute of Industrial Engineers, Norcross, GA, pp. 449–455.
Witzerman, J. P., and Nof, S. Y. (1996), "Integration of Simulation and Emulation with Graphical Design for the Development of Cell Control Programs," International Journal of Production Research, Vol. 33, No. 11, pp. 3193–3206.
14. NETWORKING IN THE PRODUCTION AND SERVICE INDUSTRIES
15. SOME PRACTICAL ASPECTS OF INTRODUCING AND USING COMPUTER NETWORKS IN INDUSTRIAL ENGINEERING
15.1. Internet Connectivity
15.2. LANs, Intranets, WANs, and Extranets
15.3. World Wide Web Communication, Integration, and Collaboration
16. SUMMARY
REFERENCES
1.
INTRODUCTION
Computer networks serve today as crucial components of the background infrastructure for virtually all kinds of human activities. This is the case with Industrial Engineering and any related activities. The importance of these computer networks stems from their role in communication and information exchange between humans (as actors in any kind of activities) and / or organizations of these human actors. In contrast to the situation several decades ago, a number of key attributes of the working environment today serve as basic new supportive elements of working conditions. Some of the most important such elements are:

• Computer technology has moved to the desktop, making possible direct access to advanced techniques of high-speed information processing and high-volume information storage from individual workstations.
• Interconnection of computers has been developing at an exponentially increasing rate, and today high-speed global connectivity is a reality, providing fast and reliable accessibility of high-volume distant information at any connected site worldwide.
• Global addressing schemes, structured information content, and multimedia information handling have become an efficiently functioning framework for computer-to-computer (and thus human-to-human) communication and information exchange, as well as for storing and delivering large amounts of highly sophisticated information at millions of sites, accessible by any other authorized sites anywhere within the global network of networks, the Internet.

Computer networks and services themselves, as well as related applications, are briefly investigated in this chapter. The topic is very broad, and there are thousands of sources in books and other literature providing detailed information about all the aspects of computer networking introduced here. This chapter therefore provides a condensed overview of selected key subtopics. However, emphasis is given to every important element of what the network infrastructure looks like, what the main characteristics of the services and applications are, and how the information content is being built and exploited. Some basic principles and methods of computer networking are repeatedly explained, using different approaches, in order to make these important issues clear from different aspects. This chapter starts with an introductory description of the role of computer networking in information transfer. Then, after a short historical overview, the networking infrastructure is investigated in some detail, including the basic technical elements of how networking operates in practice. Later sections deal with se
Multimedia: Agnew and Kellerman (1996); Buford (1994); Wesel (1998)
Virtual reality: Singhal and Zyda (1999)
Quality of service: Croll and Packman (1999)
Security: Cheswick and Bellovin (1994); Goncalves (1999); Smith (1997)
Applications: Angell and Heslop (1995); Hannam (1997); Kalakota and Whinston (1997); McMahon and Browne (1998); Treese and Stewart (1998)
Practical issues: Derfler (1998); Dowd (1996); Guengerich et al. (1996); Murhammer et al. (1999); Ptak et al. (1998); Schulman and Smith (1997); Ward (1999)
Future trends: Conner-Sax and Krol (1999); Foster and Kesselman (1998); Huitema (1998); Mambretti and Schmidt (1999)
2.
THE ROLE OF COMPUTER NETWORKING
Computer networks (see Figure 1) are frequently considered as analogous to the nervous system in the human body. Indeed, like the nerves, which connect the more or less autonomous components in the body, the links of a computer network provide connections between the otherwise stand-alone units in a computer system. And just like the nerves, the links of the network primarily serve as communication channels between the computers. This is also the basis of the global network of networks, the Internet. The main task in both cases is information transfer. Connected and thus communicating elements of the system can solve their tasks more efficiently. Therefore, system performance is not just the sum of the performance of the components but much more, thanks to the effect of the information exchange between the components. Computer networks consist of computers (the basic components of the system under consideration), connecting links between the computers, and additional devices making the information transfer as fast, intelligent, reliable, and cheap as possible.
• Speed depends on the capacity of the transmission lines and the processing speed of the additional devices, such as modems, multiplexing tools, switches, and routers. (A short description of these devices is given in Section 4, together with more details about transmission line capacity.)
• Intelligence of the network depends on the processing capabilities as well as the stored knowledge of these active devices of the network. (It is worth mentioning that network intelligence is different from the intelligence of the interconnected computers.)
• Reliability stems, first of all, from the decreased risk of losing, misinterpreting, or omitting necessary information, as well as from the well-preprocessed character of the information available within the system.
Figure 1 Networks and Network Components.
• Cost is associated with a number of things. The most important direct costs are purchase costs, installation costs, and operational costs. Indirect costs cover a wide range, from those stemming from faulty system operation to those associated with loss of business opportunity. In the planning of the system, the cost / performance ratio must always be taken into consideration.
It is important not to forget that adequate system operation requires adequate operation of all components. Obviously, adequate operation of each component in a system requires intensive interaction among them. However, direct information transfer between these components makes the interaction much faster and more reliable than traditional means of interaction. This means highly elevated adequacy in the behavior of each component and thus of the system itself. Another important capability of a system consisting of connected and communicating components is that the intelligence of the individual components is also connected, and thus each component can rely not only on its own intelligence but also on the knowledge and problem-solving capability of the other components in the system. This means that the connections between the computers in the network result in a qualitatively new level of complexity in the system and its components. The network connections allow each system component to:

• Provide information to its partners
• Access the information available from the other system components
• Communicate while operating (thus, to continuously test, correct, and adapt its own operation)
• Utilize highly sophisticated distant collaboration among the system components
• Exploit the availability of handling virtual environments*

These features are described briefly below. As will be shown below, these basic features pave the way for:

• The systematic construction of network infrastructures
• The continuous evolution of network services
• The rapid proliferation of network-based applications
• The exponentially increasing amount of information content accumulated in the network of interconnected computers
2.1.
Information Access
Information access is one of the key elements of what networking provides for the users of a computer connected to other computers. The adequate operation of a computer in a system of interconnected computers naturally depends on the availability of the information determining what, when, and how to perform with it. Traditionally, the necessary information was collected through manual methods: through conversation and through traditional information carriers, first paper and then, in the computer era, magnetic storage devices. These traditional methods were very inefficient, slow, and unreliable. The picture has now totally changed through computer networking. By the use of the data communication links connecting computers and the additional devices making possible the transfer of information, all the information preprocessed at the partner sites can be easily, efficiently, immediately, and reliably downloaded from the partner computers to the local machine. The model above works well as long as individual links are supposed to serve only the neighbor computers in accessing information at each other's sites. But this was the case only at the very beginning of the evolution of networking. Nowadays, millions of computers can potentially access information stored in all of the other
Figure 2 Hierarchical Network Model.
area. Local area networks (LANs) arise from this way of solving the information traffic problem. The hierarchy can be further fragmented, and thus metropolitan area networks (MANs) and wide area networks (WANs) enter the picture. Although this model basically supposes tree-like network configurations (topologies), for the sake of reliability and routing efficiency, cross-connections (redundant routes) are also established within this infrastructure. If a network user wishes to access information at a networked computer somewhere else, his or her request is sent through the chain of special networking devices (routers and switches; see Section 4) and links. Normally this message will be sliced into packets, and the individually transferred packets will join possibly thousands of packets from other messages on their way until the relevant packets, well separated, arrive at their destination, recombine, and initiate downloading of the requested information from that distant site to the local machine. The requested information will then arrive at the initiator by a similar process. If there is no extraordinary delay anywhere in the routes, the full process may take just a few seconds or less. This is the most important strength of accessing information through the network. Note that the described method of packet-based transmission is a special application of time division multiplexing (TDM). The TDM technique utilizes a single physical transmission medium (cable, radiated wave, etc.) for transmitting multiple data streams coming from different sources. In this application of TDM, all the data streams are sliced into packets of specified length of time, and these packets, together with additional information about their destination, are inserted into time frames of the assembled new, higher-bit-rate data stream. This new data stream is transmitted through a well-defined segment of the network. At the borders of such network segments, the packets may be extracted from the assembled data stream and a new combination of packets can be assembled in a similar way. By this technique, packets from a source node may even arrive at their destination node through different chains of network segments. More details about the technique will be given in Sections 6.2 and 6.3. The next subsection describes the basic prerequisites at the site providing the information. First, a basic property of the data communication links will be introduced here. The capability of transmitting / processing high-volume traffic by the connecting lines in the network depends on their speed / bandwidth / capacity. Note that these three characteristic properties are
equivalent in the sense that high bandwidth means high speed and high speed means high capacity. Although all three measures might be characterized by well-defined units, the only practically applied unit for them is the one belonging to the speed of the information transfer, bits per second (bps). However, because of the practical speed values, multiples of that elementary unit are used, namely Kbps, Mbps, Gbps, and, more recently, Tbps (kilobits, megabits, gigabits, and terabits per second, respectively, which means thousands, millions, billions, and thousand billions of bits per second).
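The packet-based form of time division multiplexing described above, which slices each source stream into fixed-size packets, tags each packet with its destination, interleaves the packets into one higher-bit-rate stream, and separates them again at the segment border, can be illustrated with a short sketch. The framing format here is invented purely for illustration and does not follow any particular transmission standard.

```python
def multiplex(streams, packet_size=4):
    """Slice each named stream into fixed-size packets tagged with the
    stream id, then interleave the packets round-robin into one frame
    sequence (a toy stand-in for the higher-bit-rate TDM stream)."""
    queues = {
        sid: [data[i:i + packet_size] for i in range(0, len(data), packet_size)]
        for sid, data in streams.items()
    }
    frames = []
    while any(queues.values()):
        for sid, q in queues.items():
            if q:
                frames.append((sid, q.pop(0)))   # (destination tag, payload)
    return frames

def demultiplex(frames):
    """At the far end, recombine packets using their destination tags."""
    out = {}
    for sid, payload in frames:
        out[sid] = out.get(sid, "") + payload
    return out

streams = {"A": "HELLO WORLD", "B": "TDM EXAMPLE"}
frames = multiplex(streams)
assert demultiplex(frames) == streams   # each stream is recovered intact
```

Because every packet carries its own destination tag, packets belonging to one stream could in principle travel through different network segments and still be recombined correctly, which is exactly the property the text attributes to packet-based TDM.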
2.2.
Information Provision
Access to information is not possible if the information is not available. To make information accessible to other network users, information must be provided. Provision of information by a user (individual, corporate, or public) is one of the key contributors to the exponential development of networking and worldwide network usage. How the information is provided, whether by passive methods (by accessible databases or, the most common way today, the World Wide Web on the Internet) or by active distribution (by direct e-mail distribution or by using newsgroups for regular or irregular delivery), is a secondary question, at least if ease of access is not the crucial aspect. If the information provided is freely accessible by any user on the network, the information is public. If accessibility is restricted, the information is private. Private information is accessible either to a closed group of users or by virtually any user if that user fulfils the requirements posed by the owner of the information. Most frequently, these requirements are only payment for the access. An obvious exception is government security information. All information provided for access has a certain value, which is, in principle, determined by the users' interest in accessing it. The higher the value, the more important is protection of the related information. Providers of valuable information should take care of protecting that information, partly to preserve ownership but also in order to keep the information undistorted. Faulty, outdated, misleading, or irrelevant information not only lacks value but may also cause problems to those accessing it in the belief that they are getting access to reliable content. Thus, information provision is important. If the provider makes mistakes, whether consciously or unconsciously, trust will be lost. Providing valuable (relevant, reliable, and tested) information and taking care of continuous updating of the provided information is critical if the provider wants to maintain the trustworthiness of his information. The global network (the millions of sites connected by the network) offers an enormous amount of information. The problem nowadays is not simply to find information about any topic but rather to find the best (most relevant and most reliable) sources of the required information. Users interested in accessing proper information should either turn to reliable and trusted sources (providers) or take the information they collect through the Internet and test it themselves. That is why today the provision of valuable information is separated from ownership. Information brokerage plays an increasingly important role in determining where to access adequate information. Users looking for information about any topic should access either those sources they know and trust or the sites of brokers who take over the task of testing the relevance and reliability of the related kinds of information. Information brokerage is therefore an important new kind of network-based service. However, thoroughly testing the validity, correctness, and relevance of the information is difficult or even impossible, especially because the amount of stored and accessible information increases extremely fast. The final answer to the question of how to control the content (i.e., the information available worldwide) is still not known. One of the possibilities (classifying, rating, and filtering) is dealt with in more detail in a later section. Besides the difficulty of searching for adequate information, there is a further important aspect associated with the increasing interest worldwide in accessing information. It relates to the volume of information traffic generated by the enormous number of attempts to download large amounts of information. Traffic volume can be decreased drastically by using the caching technique (see Figure 3). In the case of frequently accessed information sources, an enormous amount of traffic can be eliminated by storing the frequently requested information in dedicated servers that are much closer to the user than the original information provider. The user asking for these kinds of information doesn't even know whether the requested information arrives at the requesting site from such a cache server or from the primary source of the information. Hierarchical cache server systems help to avoid traffic congestion on network links carrying intensive traffic.
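The traffic-saving effect of such a cache server can be sketched as follows: repeated requests are answered locally, and only cache misses generate traffic toward the origin server. The class and its counters are illustrative assumptions, not a real proxy implementation.

```python
class CacheServer:
    """Toy web cache: serves repeated requests from local storage and
    forwards only misses to the origin, counting the traffic involved."""

    def __init__(self, origin_fetch):
        self.origin_fetch = origin_fetch   # callable simulating the origin server
        self.store = {}                    # url -> cached object
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1                 # answered locally: no origin traffic
            return self.store[url]
        self.misses += 1                   # one request travels to the origin
        obj = self.origin_fetch(url)
        self.store[url] = obj
        return obj

origin_requests = []
def origin_fetch(url):
    origin_requests.append(url)            # record traffic reaching the origin
    return f"<page at {url}>"

cache = CacheServer(origin_fetch)
for url in ["/a", "/b", "/a", "/a", "/b"]:
    cache.get(url)
# Five user requests, but only two reached the origin server; the user
# cannot tell a cached reply from one served by the primary source.
```

A hierarchical system, as mentioned above, simply chains such caches: a miss at one level becomes a request to the next-closer-to-origin cache rather than to the origin itself.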
2.3.
Electronic Communication
Providing and accessing information, although extremely important, is just one element of information traffic through the network. Electronic communication in general, as far as the basic function of the network operation is concerned, involves a lot more. The basic function of computer networks is to make it possible to overcome distance, timing, and capability problems.
Figure 3 Caching.
• Distance is handled by the extremely fast delivery of any messages or other kinds of transmitted information at any connected site all over the world. This means that users of the network (again individual, corporate, or public) can do their jobs or perform their activities so that distances virtually disappear. Users or even groups of users can talk to each other just as if they were in direct, traditional contact. This makes, for example, remote collaboration a reality by allowing the formation of virtual teams.
• Timing relates to the proper usage of the information transmitted. In the most common case, information arrives at its destination in accordance with the best-efforts method of transmission provided by basic Internet operation. This means that information is carried to the target site as soon as possible but without any guarantee of immediate delivery. In most cases this is not a problem because delays are usually only on the order of seconds or minutes. Longer delays usually occur for reasons other than simple transmission characteristics. On the other hand, e-mail messages are most often read by the addressees from their mail servers much later than their arrival. Although obviously this should not be considered the basis of network design, the fact that the recipient doesn't need to be at his or her machine when the message arrives is important in overcoming timing problems. However, some information can be urgently needed, or real-time delivery may even be necessary, as when there are concurrent and interacting processes at distant sites. In these cases elevated-quality services should be provided. Highest quality of service (QoS) here means a guarantee of undisturbed real-time communication.
• Capability requirements arise from limited processing speed, memory, or software availability at sites having to solve complex problems. Networked communication helps to overcome these kinds of problems by allowing distributed processing as well as quasi-simultaneous access to distributed databases or knowledge bases. Distributed processing is a typical application where intensive (high-volume) real-time (high-speed) traffic is assumed, requiring high-speed connection through all the routes involved in the communication.

The latter case already leads us to one of the more recent and most promising applications of the network. Although in some special areas, including highly intensive scientific computations, distributed processing has been used for some time, more revolutionary applications are taking place in networked remote collaboration. This new role of computer networking is briefly reviewed in the following section.
TECHNOLOGY
2.4. Networked Collaboration
Computer networks open new ways of increasing efficiency, elevating convenience, and improving the cost and performance of collaboration. The basis of this overall improvement is provided by the properties of computer networks mentioned above: their ability to help overcome distance, timing, and capability problems. The workplace can thus be dislocated or even mobile, the number of collaborators can be theoretically infinite, the cooperating individuals or groups can be involved in joint activities even in shifted time periods (thus even geographical time-zone problems can be virtually eliminated), and the collaborating partners can input their information without having to move high-volume materials (books and other documents) to a common workplace. These are all important issues in utilizing the opportunities that networks provide for collaboration when communities are geographically dispersed. However, the key attribute of networked collaboration is not simply the possibility of avoiding having to collect all the collaborating partners at a well-defined venue and time for a fixed time period. The fact that the network interconnects computers, rather than human users, is much more useful. The reason is that networked computers are not just nodes in a network but also intelligent devices. They not only help exchange information between the collaborating individuals or groups, but also provide some important additional features:
- The information processing capability of the computers at the network nodes of the collaborating parties may also solve complex problems.
- The intelligence of each computer connected and participating in the joint activities is shareable among the collaborating partners.
- The intelligence in these distant computers can be joined to solve otherwise unsolvable problems by utilizing an enormously elevated computing power.

However, appropriate conditions for efficient and successful collaboration assume transmission of complex information. This information involves not only the data belonging to the joint task but also the multimedia information stemming from the networked character of the collaboration. This may be a simple exchange of messages, such as e-mail conversation among the participants, but may also be a true videoconference connection between the cooperating sites. Thus, in many cases networked collaboration requires extremely high bandwidths. Real-time multimedia transmission, as the basis of these applications, supposes multi-Mbps connectivity throughout. Depending on the quality demands, the acceptable transmission speed varies between one or two Mbps and hundreds of megabits per second. The latter speeds have become a reality with the advent of gigabit networking. If the above conditions are all met, networked collaboration provides multiple benefits:
- Traditional integration of human contributions
- Improvement in work conditions
- Integration of the processing power of a great many machines
- High-quality collaboration attributes through high-speed multimedia transmission
The highest level of networked collaboration is attained by integrated environments allowing truly integrated activities. This concept will be dealt with later in the chapter. The role of the World Wide Web in this integration process, the way of building and using intranets for, among other things, networked collaboration, and the role of the different security aspects will also be outlined later.
2.5. Virtual Environments
Network-based collaboration, by using highly complex tools (computers and peripherals) and high-speed networking, allows the participating members of the collaborating team to feel and behave just as if they were physically together, even though the collaboration is taking place at several distant sites. The more the attributes of the environment at one site of the collaboration are communicated to the other sites, the more this feeling and behavior can be supported. The environment itself cannot be transported, but its attributes help to reconstruct a virtual copy of the very environment at a different site. This reconstruction is called a virtual environment. This process is extremely demanding as far as information processing at the related sites and information transfer between distant sites are concerned. Although the available processing power and transmission speed increase exponentially with time, the environment around us is so complex that full copying remains impossible. Thus, virtual environments are only approximated in practice. Approximation is also dictated by some related practical reasons:
subnetworks into one cohesive Internet is what makes the Internet such a great network tool. After more than 10 years of exclusively special (defense, and later scientific) applications of the emerging network, commercial use began to evolve. Parallel activities related to leading-edge academic and research applications and everyday commodity usage appeared. This parallelism was more or less maintained during more recent periods as well, serving as a basis of a fast and continuous development of the technology. A breakthrough occurred when, in the mid-1980s, the commercial availability of system components and services was established. Since then the supply of devices and services has evolved continuously, enabling any potential individual, corporate, or public user simply to buy or hire what it needs for starting network-based activities. While the leading edge in the early 1980s was still just 56 Kbps (leased line), later in the decade the 1.5 Mbps (T1) transmission speed was achieved in connecting the emerging subnetworks. Late in the 1980s the basics of electronic mail service were also in use. This was probably the most important step in the application of networking to drive innovation prior to the introduction of the World Wide Web technology. Early in the 1990s the backbone speed reached the 45 Mbps level, while the development and spread of a wide range of commercial information services determined the directions of further evolution in computer networking.
Real-time transmission of similarly simple alphanumeric information requires either similar speed, as above, or, in the case of a higher volume of these kinds of data, a certain multiple of that speed. Real-time transmission of sampled, digitized, coded, and possibly compressed information, stemming from high-fidelity voice, high-resolution still video, or high-quality real video signals, does require much higher speed. The last-mile speed in these cases ranges from dozens of Kbps to several and even hundreds of Mbps (HDTV). This information is quite often cached by the receiver to overcome short interruptions in the network's delivery of the information at high speed, providing the receiver a smooth, uninterrupted flow of information. This means that although until recently the last-mile line speeds were in the range of several dozens of Kbps, the area access lines were in the range of several Mbps, and the top-level backbones were approaching the Gbps speed, lately, during the 1990s, these figures have increased to Mbps, several hundreds of Mbps, and several Gbps levels, respectively. The near future will be characterized by multi-Gbps speed at the backbone and area access levels, and the goal is to reach several hundreds of Mbps even at the last-mile sections. Of course, the development of the infrastructure means coexistence of these leading-edge figures with the more established lower-speed solutions. And obviously, in the longer term, the figures may increase further.

As far as the organization of the operation of the infrastructure is concerned, the physical and the network infrastructure should be distinguished. Providing physical infrastructure means little more than making available the copper cables or fibers and the basic active devices of the transmission network, while operating the network infrastructure also means managing the data traffic. Both organizational levels of the infrastructure are equally important, and the complexity of the related tasks has, in an increasing number of cases, resulted in the jobs being shared between large companies specializing in providing either the physical or the network infrastructure. The first step from physical connectivity to network operation is made with the introduction of network protocols. These protocols are special complex software systems establishing and controlling appropriate network operation. The most important and best-known such protocol is the Internet Protocol.

The elementary services are built up on top of the infrastructure outlined above. Thus, the next group of companies working for the benefit of the end users is the Internet service providers (ISPs), which take care of providing network access points. Although network services are discussed separately in Section 7, it should be mentioned here that one of the basic tasks of these services is to take care of the global addressing system. Unique addressing is perhaps the most important element in the operation of the global network. Network addresses are associated with all nodes in the global network so that the destination of the transmitted information can always be specified. Without such a unique, explicit, and unambiguous addressing system, it would not be possible even to reach a target site through the network. That is why addressing (as well as naming, i.e., associating unique symbolic alphanumeric names with the numeric addresses) is a crucial component of network operation. The task is solved again by a hierarchy of services performed by the domain name service (DNS) providers. This issue is dealt with in more detail later. Note that it is possible, as a common practice, to proxy the end-node computer through an access point to share the use of a unique address on the Internet. The internal subnetwork address may in fact be used by some other similarly isolated node in a distinctly separate subnetwork (separate intranet).

Everything briefly described above relates to the global public network. However, with the more sophisticated, serious, and sometimes extremely sensitive usage of the network, the need to establish closed subsets of the networked computers has emerged. Although virtual private networks (subnetworks that are based on public services but that keep traffic separated by the use of special hardware and software solutions) solve the task by simply separating the general traffic and the traffic within a closed community, some requirements, especially those related to security, can only be met by more strictly separating the related traffic. The need for this kind of separation has led to the establishment of intranets (special network segments devoted to a dedicated user community, most often a company) and extranets (bunches of geographically distant but organizationally and / or cooperatively connected intranets, using the public network to connect the intranets but exploiting special techniques for keeping the required security guarantees). Although building exclusively private networks, even wide area ones, is possible, these are gradually disappearing, and relying on public services is becoming common practice even for large organizations.
5. INTERNET, INTRANETS, EXTRANETS
The Internet is the world's largest computer network. It is made up of thousands of independently operated (not necessarily local) networks collaborating with each other. This is why it is sometimes
called the network of networks. Today it extends to most of the countries in the world and connects dozens of millions of computers, allowing their users to exchange e-mails and data, use online services, communicate, listen to or watch broadcast programs, and so on, in a very fast and cost-effective way. Millions of people are using the Internet today in their daily work and life. It has become part of the basic infrastructure of modern human life.

As was already mentioned above, the predecessor of the Internet was ARPANet, the first wide area packet-switched data network. The ARPANet was created within a scientific research project initiated by the U.S. Department of Defense in the late 1960s. Several universities and research institutes participated in the project, including the University of Utah, the University of California at Los Angeles, the University of California at Santa Barbara, and the Stanford Research Institute. The goal of the project was to develop a new kind of network technology that would make it possible to build reliable, effective LANs and WANs by using different kinds of communication channels and connecting different kinds of computers. By 1974, the basics of the new technology had been developed and the research groups led by Vint Cerf and Bob Kahn had published a description of the first version of the TCP / IP (Transmission Control Protocol and Internet Protocol) suite. The experimental TCP / IP-based ARPANet, which in its early years carried both research and military traffic, was later split into the Internet, for academic purposes, and the MILNet, for military purposes. The Internet grew continuously and exponentially, extending step by step all over the world. Early in the 1990s, the Internet Society (see www.isoc.org) was established, and it has served since then as the driving force in the evolution of the Internet, especially in the technological development and standardization processes. The invention of the World Wide Web in 1990 gave new impetus to the process of evolution and prompted the commercialization of the Internet. Today the Internet is a general-purpose public network open to anyone who wishes to be connected.

Intranets are usually TCP / IP-based private networks. They may in fact gateway through a TCP / IP node, but this is not common. They operate separately from the worldwide Internet, providing only restricted and controlled accessibility. Although an intranet uses the same technology as the Internet, with the same kinds of services and applications, it principally serves only those users belonging to the organization that owns and operates it. An intranet and its internal services are closed to the rest of the world. Often this separation is accomplished by using network addresses reserved for such purposes, the so-called ten-net addresses. These addresses are not routed in the Internet, and a gateway must proxy them with a normal address to the Internet.

Connecting intranets at different geographical locations via the public Internet results in extranets. If an organization is operating at different locations and wants to interconnect its TCP / IP-based LANs (intranets), it can use the inexpensive public Internet to establish secure channels between these intranets rather than build very expensive, large-scale, wide area private networks. This way, corporate-wide extranets can be formed, allowing internal users to access any part of this closed network as if it were a local area network.
6. TECHNICAL BACKGROUND
To understand the possibilities of the TCP / IP-based networks fully, it is important to know how they work and what kinds of technical solutions lie behind their services.
6.1. Architecture of the Internet
Figure 4 Internet Architecture. (Diagram: four LANs, LAN1 to LAN4, interconnected by gateways GW1 to GW4 over communication lines, with hosts such as Host11 to Host14 and Host41 to Host45 attached to the LANs.)
In most cases, communication lines establish steady, 24-hour connections between their endpoints. In order to increase reliability, and sometimes also for traffic load-balancing reasons, alternative routes can be built between the local area networks. If any of the lines is broken, the gateways can automatically adapt to the new, changed topology. Thus, the packets are continuously delivered by the alternative routes. Adaptive routing is possible at gateways, particularly for supporting 24/7 operation. It commonly occurs in the backbone of the Internet.
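The idea of adapting to a broken line can be sketched as a shortest-path recomputation over the surviving links. The four-gateway topology below is a hypothetical example, loosely modeled on Figure 4:

```python
from collections import deque

def shortest_route(links, src, dst):
    """Breadth-first search over an undirected link set; returns a route or None."""
    graph = {}
    for a, b in links:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in sorted(graph.get(path[-1], ())):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route survives

# Hypothetical gateway ring offering an alternative route between GW1 and GW3.
links = [("GW1", "GW2"), ("GW2", "GW3"), ("GW3", "GW4"), ("GW4", "GW1")]
print(shortest_route(links, "GW1", "GW3"))   # ['GW1', 'GW2', 'GW3']

# The GW1-GW2 line breaks; packets are rerouted over the remaining links.
broken = [l for l in links if l != ("GW1", "GW2")]
print(shortest_route(broken, "GW1", "GW3"))  # ['GW1', 'GW4', 'GW3']
```

Real backbone routers run distributed protocols rather than a central search, but the effect, packets flowing along whichever route still exists, is the same.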
6.2. Packet Switching
Packet switching is one of the key attributes of the TCP / IP technology. The data stream belonging to a specific communication session is split into small data pieces, called packets. The packets are delivered independently to the target host. The separated packets of the same communication session may follow different routes to their destination. In contrast to line-switching communication technologies, in packet-switched networks there is no need to set up connections between the communicating units before the start of the requested data transmission. Each packet contains all of the necessary information to route it to its destination. This means that packets are complete from a network perspective. A good example of line-switched technology is the traditional public phone service. In contrast to packet switching, line switching assumes a preliminary connection setup procedure being performed before a conversation starts. After the connection is set up, an individual communication channel (called a circuit) is provided for the data transmission of that communication session. When the data transmission is over, the connection should be closed.
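A toy sketch of the packetizing idea follows. The header format here is invented for illustration; real IP headers carry much more (addresses, checksums, fragmentation fields, and so on):

```python
def packetize(data: bytes, size: int, dest: str):
    """Split a byte stream into independently routable packets.
    Each packet carries its destination and a sequence number."""
    return [
        {"dest": dest, "seq": i, "payload": data[off:off + size]}
        for i, off in enumerate(range(0, len(data), size))
    ]

def reassemble(packets):
    """Restore the stream even if the packets arrived out of order."""
    return b"".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))

pkts = packetize(b"computer networking", 5, "154.66.240.5")
pkts.reverse()  # simulate out-of-order arrival over different routes
print(reassemble(pkts))  # b'computer networking'
```

Because every packet names its own destination, each one can be routed on its own, which is exactly what makes the different-routes behavior described above possible.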
6.3. Most Important Protocols
A network protocol is a set of rules that determines the way of communication on the network. All the attached hosts must obey these rules. Otherwise they won't be able to communicate with the other hosts or might even disturb the communication of the others. For proper high-level communication, many protocols are needed. The system of protocols has a layered structure. High-level protocols are placed in the upper layers and low-level protocols in the lower layers. Each protocol layer has a well-defined standard
interface by which it can communicate with the other layers, up and down. TCP / IP itself is a protocol group consisting of several protocols on four different layers (see Figure 5). TCP and IP are the two most important protocols of this protocol group. IP is placed in the internetworking layer. It controls the host-to-host communication on the network. The main attributes of IP are that it is connectionless, unreliable, robust, and fast. The most astounding of these is the second attribute, unreliability. This means that packets may be lost, damaged, and / or multiplied and may also arrive in mixed order. IP doesn't guarantee anything about the safe transmission of the packets, but it is robust and fast. Because of its unreliability, IP cannot satisfy the needs of those applications requiring high reliability and guaranteed QoS. Built upon the services of the unreliable IP, the transport-layer protocol, TCP, provides reliable communication for the applications. Reliability is guaranteed by positive acknowledgement and automatic retransmission. TCP performs process-to-process communication. It also checks the integrity of the content. TCP is connection oriented, although the TCP connections are virtual, which means that there is no real circuit setup, only a virtual one. A TCP connection is a reliable, stream-oriented, full-duplex, bidirectional communication channel with built-in flow control and synchronization mechanisms. There are also routing and discovery protocols that play an important role in affecting network reliability, availability, accessibility, and cost of service.
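The positive-acknowledgement-with-retransmission principle can be sketched as a stop-and-wait loop over a deliberately lossy channel. The channel model below is invented for illustration; real TCP uses sliding windows, timers, and byte-level sequence numbers:

```python
import random

def send_reliably(packets, loss_rate=0.5, seed=42):
    """Stop-and-wait: resend each packet until the receiver acknowledges it."""
    rng = random.Random(seed)          # deterministic "lossy channel" for the demo
    received, attempts = [], 0
    for pkt in packets:
        while True:
            attempts += 1
            if rng.random() >= loss_rate:   # the packet survived the channel
                received.append(pkt)        # receiver delivers it and ACKs
                break                       # positive acknowledgement arrived
            # no ACK: the sender's timeout fires and the packet is retransmitted

    return received, attempts

data = ["seg0", "seg1", "seg2", "seg3"]
received, attempts = send_reliably(data)
print(received == data, attempts)  # all segments delivered in order, with retries
```

Despite the unreliable channel underneath, the sender's loop guarantees that every segment eventually arrives, which is exactly the service TCP builds on top of IP.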
6.4. Client-Server Mechanism
Any communication on a TCP / IP network is performed under the client-server scheme (see Figure 6). When two hosts are communicating with each other, one of them is the client and the other is the server. The same host can be both client and server, even at the same time, for different communication sessions, depending on the role it plays in a particular communication session. The client sends requests to the server and the server replies to these requests. Such servers include file servers, WWW servers, DNS servers, telnet servers, and database servers. Clients include ftp (file transfer) programs, navigator programs (web browsers), and telnet programs (for remote access). Note that the term server does not imply special hardware requirements (e.g., high speed, large capacity, or continuous operation). The typical mode of operation is as follows: The server is up and waits for requests. The client sends a message to the server. The requests arrive at particular ports, depending on the type of the expected service. Ports are simply addressing information that is acted upon by the serving computer. Clients provide port information so that server responses return to the correct application. Well-known ports are assigned to well-known services. Communication is always initiated by the client with its first message. Because multiple requests may arrive at the server at the same time from different clients, the server should be prepared to serve multiple communication sessions. Typically, one communication
APPLICATION LAYER: SMTP, FTP, Telnet, HTTP, SNMP, etc.
TRANSPORT LAYER: TCP, UDP
INTERNETWORK LAYER: IP, ICMP, routing protocols
DATA LINK + PHYSICAL LAYER: ARP, RARP + e.g., Ethernet
Figure 5 The Layered TCP / IP Protocol Stack. (Diagram: two hosts with identical four-layer stacks communicating over the network.)
Figure 6 Client / Server Scheme. (Diagram: client programs, e.g., Netscape, on two client hosts connect from ports above 1023 to a server daemon process, e.g., httpd, listening on a well-known port, e.g., 80, on the server host.)
session is served by one process or task. Therefore, servers are most often implemented on multitasking operating systems. There is no such requirement on the client side, where only one process at a time may be acting as a client program.
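The mode of operation described above can be sketched with Python's standard socket API. The port number and the echo behavior are arbitrary choices for this illustration; a real server would loop forever and hand each session to its own process or task:

```python
import socket
import threading

PORT = 5050  # arbitrary unprivileged port chosen for this sketch
ready = threading.Event()

def echo_server():
    """Serve a single request-reply session, then exit."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", PORT))
        srv.listen()              # the server is up and waits for requests
        ready.set()               # tell the client it may now connect
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024)          # the client speaks first
            conn.sendall(b"echo: " + request)  # the server replies

server = threading.Thread(target=echo_server)
server.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))  # communication is initiated by the client
    cli.sendall(b"hello")             # the first message comes from the client
    reply = cli.recv(1024)
server.join()
print(reply)  # b'echo: hello'
```

Note how the roles are asymmetric: the server passively waits on its known port, while the client supplies its own (ephemeral, above 1023) port so the reply can find its way back.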
One of the main features in the new concept of computing grids is the substitution of the widely used client-server mechanism by realizations of a more general distributed metacomputing principle in future computer networking applications (see Section 12.3).
6.5. Addressing and Naming
In order to be unambiguously identifiable, every host in a TCP / IP network must have at least one unique address, its IP address. IP addresses play an essential role in routing traffic in the network. The IP address contains information about the location of the host in the network: the associated number (address) of the particular LAN and the associated number (address) of the host in the LAN. Currently, addresses in the version 4 Internet protocol are 32 bits long (IPv4 addresses) and are classified into five groups: A, B, C, D, and E classes. Class A has been created for very large networks. Class A networks are rare. They may contain up to 2^24 (about 16 million) hosts. Class B numbers (addresses) are given to medium-sized networks (up to 65,534 hosts), while class C network numbers are assigned to small (up to 254 hosts) LANs. The 32-bit-long IP addressing allows about 4 billion different combinations. This means that in principle, about 4 billion hosts can be attached to the worldwide Internet by using IPv4 addressing. However, intranets, because they are not connected to the Internet or are connected through firewalls, may use IP addresses that are being used by other networks too. Because the number of Internet hosts at the start of the third millennium is still less than 100 million, it seems that the available IPv4 address range is wide enough to satisfy all the needs. However, due to the present address classification scheme and other reasons, there are very serious limitations in some address classes (especially in class B). The low efficiency of the applied address distribution scheme has led to difficult problems. Because of the high number of medium-sized networks, there are more claims for class B network numbers than the available free numbers in this class. Although many different suggestions have been made to solve this situation, the final solution will come simply from implementing the new version of the Internet protocol, IPv6, intended to be introduced early in the 2000s. It will use 128-bit-long IP addresses. This space will be large enough to satisfy any future address claims even if the growth rate of the Internet remains exponential. There are three different addressing modes in the current IPv4 protocol:
- Unicast (one to one)
- Multicast (one to many)
- Broadcast (one to all)
The most important is unicast. Here, each host (gateway, etc.) must have at least one class A, class B, or class C address, depending on the network it is connected to. These classes provide unicast addresses. Class D is used for multicasting. Multicast applications, such as radio broadcasting or video conferencing, assign additional class D addresses to the participating hosts. Class E addresses have been reserved for future use. Class A, B, and C addresses consist of three pieces of information: the class prefix, the network number, and the host number. The network number, embedded into the IP address, is the basic information for routing decisions. If the host number part of the address contains only nonzero bits (1s), the address is a broadcast address to that specific network. If all the bits in the host number part are zeroes (0s), the address refers to the network itself. In order to make them easier to manage, IP addresses are usually described by the four bytes they contain, all specified in decimal notation and separated by single dots. Examples of IP addresses are:

Class A: 16.1.0.250
Class B: 154.66.240.5
Class C: 192.84.225.2
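The classful split can be read directly off the leading bits of the first byte. A simplified parser for illustration (it covers the unicast classes A to C only and ignores special cases):

```python
def classify(ip: str):
    """Classful interpretation of a dotted-decimal IPv4 address (classes A-C)."""
    b = [int(x) for x in ip.split(".")]
    first = b[0]
    if first < 128:               # leading bit 0
        return "A", b[:1], b[1:]  # 1 network byte, 3 host bytes
    if first < 192:               # leading bits 10
        return "B", b[:2], b[2:]  # 2 network bytes, 2 host bytes
    if first < 224:               # leading bits 110
        return "C", b[:3], b[3:]  # 3 network bytes, 1 host byte
    return "D/E", b, []           # multicast / reserved

# The three example addresses from the text:
for addr in ("16.1.0.250", "154.66.240.5", "192.84.225.2"):
    cls, net, host = classify(addr)
    print(addr, cls, "net:", net, "host:", host)
```

This also shows where the host-count limits come from: a class C address keeps only one byte for the host number, hence at most 254 usable hosts once the all-zeroes and all-ones patterns are excluded.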
The human interface with the network would be very inconvenient and unfriendly if users had to use IP addresses when referring to computers they would like to access. IP addresses are long numbers, making them inconvenient and difficult to remember and describe, and there is always a considerable risk of misspelling them. They don't even express the type, name, location, and so on of the related computer or the organization operating that computer. It is much more convenient and reliable to associate descriptive names with the computers, the organizations operating the computers, and / or the related LANs. The naming system of the TCP / IP networks is called the domain name system (DNS). In this system there are host names and domains. Domains are multilevel and hierarchical (see Figure 7). They mirror the organizational / administrative hierarchy of the global network. At present, top-level domains (TLDs) fall into two categories:
Figure 7 DNS Hierarchy. (Diagram: a tree rooted at ".", with top-level domains such as the country codes uk, br, es, nz, hu, be, pt, my and the generic domains edu, net, org, mil, gov, int, com, and second-level domains such as bme, iif, elte, u-szeged, whitehouse, nato, and wiley beneath them.)
1. Geographical (two-letter country codes by ISO)
2. Organizational (three-letter abbreviations reflecting the types of the organizations: edu, gov, mil, net, int, org, com)

Second-level domains (SLDs) usually express the name or the category of the organization in a particular country. Lower-level subdomains reflect the hierarchy within the organization itself. The tags of the domain names are separated by dots. The first tag in a domain name is the hostname. The tags are ordered from most specific to least specific addressing. The domain name service is provided by a network of connected, independently operated domain name servers. A domain name server is a database containing information (IP address, name, etc.) about hosts, domains, and networks under the authority of the server. This is called the domain zone. Domain name servers can communicate with each other by passing information retrieved from their databases. If a host wishes to resolve a name, it sends a query to the nearest domain name server, which will provide the answer by using its local database, called a cache. However, if necessary, it forwards the query to other servers. The Internet domain name service system is the largest distributed database in the world.
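The answer-from-cache-or-forward behavior can be sketched with a toy name server. The helper class, the zone data, and the addresses below are all invented for illustration (192.0.2.x is a documentation-only address range):

```python
class NameServer:
    """Toy DNS resolver: answer from the local cache, else forward upstream."""
    def __init__(self, zone, upstream=None):
        self.cache = dict(zone)     # names this server already knows
        self.upstream = upstream    # next server to ask, if any

    def resolve(self, name):
        if name in self.cache:
            return self.cache[name]
        if self.upstream is None:
            return None                          # the name does not exist
        answer = self.upstream.resolve(name)     # forward the query
        if answer is not None:
            self.cache[name] = answer            # remember it for next time
        return answer

# Hypothetical zone data for two cooperating servers.
root = NameServer({"www.wiley.com": "192.0.2.7"})
local = NameServer({"printer.local": "10.0.0.9"}, upstream=root)

print(local.resolve("printer.local"))   # answered from the local cache
print(local.resolve("www.wiley.com"))   # forwarded to the upstream server
print("www.wiley.com" in local.cache)   # True: the answer is now cached locally
```

Caching at every level is what lets the real DNS, the largest distributed database in the world, answer most queries without ever reaching the root servers.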
7. OVERVIEW OF NETWORKING SERVICES
Thirty years ago, at the beginning of the evolution of computer networking, the principal goal was to create a communication system that would provide some or all of the following functional services:

- Exchanging (electronic) messages between computers (or their users); today the popular name of this service is e-mail
- Transferring files between computers
- Providing remote access to computers
- Sharing resources between computers
E-mail is perhaps the most popular and most widely used service provided by wide area computer networks. E-mail provides easy-to-use, fast, cheap, and reliable offline communication for the users of the network. It allows not only one-to-one communication but one-to-many. Thus, mailing lists can be created as comfortable and effective communication platforms for a group of users interested in the same topics. Of course, file transfer is essential. Computer files may contain different kinds of digitized information, not only texts and programs but formatted documents, graphics, images, sounds, movies, and so on. File transfer allows this information to be moved between hosts. With the remote access service, one can use a distant computer just as if one were sitting next to the machine. Although not all operating systems allow remote access, most of the multiuser operating systems (UNIX, Linux, etc.) do. Sharing resources makes it possible for other machines to use the resources (such as disk space, file system, peripheral devices, or even the CPU) of a computer. Well-known examples of resource sharing are the file and printer servers in computer networks. Besides the above-listed basic functions, computer networks today can provide numerous other services. Operating distributed databases (the best example is the World Wide Web), controlling devices, data acquisition, online communication (oral and written), teleconferencing, radio and video broadcasting, online transaction management, and e-commerce are among the various possibilities.
8. OVERVIEW OF NETWORK-BASED APPLICATIONS
(wide area public network, in contrast to intranet / extranet-type solutions usi
ng private or virtual private networks). On the other hand, security issues are
de nitely different: on top of the general security and privacy requirements, spec
ial security and authorization aspects require special rewall techniques in the c
ase of private applications. Second, applications can be distinguished by how th
ey are utilized. The three main classes are:
Personal applications (individual and family use) Administrative applications (u
se by governments, municipalities, etc., in many cases with overall, or at least partly authorized, citizen access)
Industrial applications (practically any other use, from agriculture to manufact
uring to commercial services) The major differences among these three classes are the requir
ed levels of availability, geographical distribution, and ease of access. As far
as networking is concerned, the classes differ in the required quality of servi
ce level / grade. Third, there is a well-structured hierarchy of the networked a
pplications based on their complexity. This hierarchy more or less follows the h
ierarchy of those single-computer applications that are enhanced by the introduc
tion of network-based (distributed, dislocated) operation:
Low-level applications in this hierarchy include distributed word processing, database handling, document generation, computer numerical control, etc.
Medium-complexity applications include network-integrated editing and publishing, networked logistics and inventory control, CAD (computer-aided design), CAM (computer-aided manufacturing), etc.
High-complexity applications include integrated operations management systems, such as integrated management and control of a publishing / printing company, integrated engineering / management of a CAD-CAM-CAT system, etc.
Note that high-complexity applications include lower-complexity applications, while low-complexity applications can be integrated to build up higher-level
applications in the hierarchy. Computer networking plays an important role not o
nly in turning from single-computer applications to network-based / distributed
applications but also in the integration / separation process of moving up and d
own in the hierarchy. Finally, network-based applications can be classified by the
availability of the related application tools (hardware and software). The possi
ble classes (custom, semicustom, and off-the-shelf solutions) will be considered
later in Section 13.
9.
THE WORLD WIDE WEB
Next to e-mail, the World Wide Web is perhaps the most popular application of th
e Internet. It is a worldwide connected system of databases containing structure
d, hypertext, multimedia information that can be retrieved from any computer con
nected to the Internet.
9.1.
History
The idea and basic concept of the World Wide Web was invented in Switzerland. In
the late 1980s, two engineers, Tim Berners-Lee from the United Kingdom and Robert Cailliau from Belgium, developed the concept while working at CERN. The World Wide Web Consortium (W3C), formed in 1994, drives and leads the technological development and standardization processes of Web technology.
9.2.
Main Features and Architecture
As mentioned above, the World Wide Web is a hypertext-based database system of m
ultimedia content. It can be used for building either local or global informatio
n systems. It is interactive, which means that the information flow can be bidirect
ional (from the database to the user and from the user to the database). It inte
grates all the other traditional information system technologies, such as ftp, g
opher (a predecessor of the WWW in organizing and displaying files on dedicated ser
vers), and news. The content of a Web database can be static or dynamic. Static
information changes rarely (or perhaps never), while dynamic content can be generated immediately before (or even in parallel with) the actual download. The Web conte
nt is made available by web servers, and the information is retrieved through th
e use of web browsers. The more or less structured information is stored in web
pages. These pages may refer to multimedia objects and other web pages, stored e
ither locally or on other Web servers. Thus, the network of the Internet web ser
vers forms a coherent, global information system. Nevertheless, because the diff
erent segments of this global system have been created and are operated and main
tained independently, there is no unified structure and layout. This fact may result in considerable difficulties in navigating the Web. The Uniform Resource Locator (URL) is a general addressing scheme that allows unambiguous reference to any public object (or service), of any type, available on the Internet. It was originally invented for the World Wide Web, but today URL addressing is used in many other I
nternet applications. Examples of different URLs include:
http://www.bme.hu/index.html
ftp://ftp.funet.fi/
mailto:maray@fsz.bme.hu
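A URL of this kind splits into a scheme, a host, and a path. As a minimal sketch, Python's standard library can take the example addresses above apart:

```python
from urllib.parse import urlparse

# Break each example address into its addressing components.
for url in ("http://www.bme.hu/index.html",
            "ftp://ftp.funet.fi/",
            "mailto:maray@fsz.bme.hu"):
    parts = urlparse(url)
    print(parts.scheme, parts.netloc, parts.path)
```

Note that for a mailto: URL there is no host part; the address appears as the path component.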
9.3.
HTTP and HTML
Hypertext Transfer Protocol (HTTP) is the application-level network protocol for
transferring Web content between servers and clients. (Hypertext is a special d
atabase system of multimedia objects linked to each other.) HTTP uses the servic
es of the TCP transport protocol. It transfers independently the objects (text,
graphics, images, etc.), that build up a page. Together with each object, a spec
ial header is also passed. The header contains information about the type of the
object, its size, its last modification time, and so on. HTTP also supports the use of proxy and cache servers. HTTP proxies are implemented in firewalls to pass HTTP traffic between the protected network and the Internet. Object caches are used to
save bandwidth on overloaded lines and to speed up the retrieval process. Hyper
text Markup Language (HTML) is a document description language that was designed
to create formatted, structured documents using hypertext technique (where link
s are associated to content pieces of the document so that these links can be ea
sily applied as references to other related documents). As originally conceived,
HTML focused on the structure of the document. Later it was improved to provide
more formatting features. In spite of this evolution, HTML is still quite simpl
e. An HTML text contains tags that serve as instructions for structuring, format
ting, linking, and so on. Input forms and questionnaires can also be coded in HT
ML. A simple example of an HTML text is as follows:

<HTML>
<HEAD>
<TITLE>Example</TITLE>
</HEAD>
<BODY>
<CENTER><H1>Example</H1></CENTER>
This is just a short HTML example, containing almost nothing.<P>
<HR>
</BODY>
</HTML>
Details about the syntax can be found in the titles related to multimedia and th
e World Wide Web in the References. The above example will be displayed by the b
rowser like this:
Example
This is just a short HTML example, containing almost nothing.
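A browser interprets such tags with an HTML parser. As a minimal sketch, Python's standard-library parser can pull the TITLE out of a small page like the one above (the page string here is a compressed version of the example):

```python
from html.parser import HTMLParser

class TitleExtractor(HTMLParser):
    """Collects the text found inside the <TITLE> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
    def handle_starttag(self, tag, attrs):
        if tag == "title":          # tag names arrive lowercased
            self.in_title = True
    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False
    def handle_data(self, data):
        if self.in_title:
            self.title += data

page = "<HTML><HEAD><TITLE>Example</TITLE></HEAD><BODY><H1>Example</H1></BODY></HTML>"
parser = TitleExtractor()
parser.feed(page)
print(parser.title)  # Example
```

A real browser does the same kind of tag-by-tag processing for every structuring and formatting instruction, not just the title.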
9.4.
Multimedia Elements
The World Wide Web is sometimes called a multimedia information system because Web databases may also contain multimedia objects. It is obvious that in a well-designed and formatted document page, graphics and images can be used in addition to text. Web technology allows the inclusion of sound and moving pictures as well. Web pages may also contain embedded programs (e.g., in JavaScript, a language for designing interactive WWW sites) and thus can have some built-in intelligence.
10. THE ROLE OF THE WORLD WIDE WEB IN COMMUNICATION, INTEGRATION, AND COLLABORAT
ION
The following subsections provide an insight into the role of the World Wide Web
in information access, communication, exchanging information, collaboration, an
d integration.
10.1.
The World Wide Web as a Means for Universal Information Access
In the past few years the World Wide Web has become an essential tool of common
application in many areas of human life. Today it is a universal, global, and wi
dely used information system that makes it possible for every user of the Intern
et to access a vast amount of information from every segment of human activity,
including science, business, education, and entertainment. The World Wide Web is
universal because it can easily be adapted to almost any kind of information pu
blishing needs. It can substitute for traditional paper-based publications like
books, newspapers, newsletters, magazines, catalogs, and leaflets. But it is much m
ore. It can be dynamic, multimedia based, and interactive. The World Wide Web is
global because using the transport provided by the Internet, it can be used wor
ldwide. It allows any kind of information to flow freely to any destination, regard
less of physical distance. The available human interfaces (editors, browsers, et
c.) of the Web are friendly and easy to use, making it open for everyone. The We
b technology supports different national languages and character sets and thus h
elps to eliminate language barriers.
10.2.
The World Wide Web as a Tool for Communicating and Exchanging Information
The basic role of the Web is to allow users to share (exchange) information. Acc
essing a particular set of web pages can be free to anybody or restricted only t
o a group of users. However, the Web is also a technology for communication. The
re are Web-based applications for online and offline communication between users, f
or teleconferencing, for telephoning (Voice over IP [VoIP]), for using collectiv
e whiteboards, and so on. Because the Web integrates almost all other Internet a
pplications (including e-mail), it is an excellent tool for personal and group c
ommunication.
10.3.
The World Wide Web as a Medium for Integration and Collaboration
requires the integration of these separate sets of information content as well.
Moreover, the organization's outside relations can be made most effective if the i
nformation content within the organization is correlated to the relevant informa
tion content accessible in communicating with partner organizations. As far as t
he structure of the correlated individual systems of information content is conc
erned, widely accepted distributed database techniques, and especially the world
wide-disseminated World Wide Web technology, provide a solid background. However
, the structure and the above-mentioned techniques and technologies provide only
the common framework for communicating and cooperating with regard to the relat
ed systems of information content. Determining what information and knowledge sh
ould be involved and how to build up the entire content is the real art in utili
zing computer networks. There are no general recipes for this part of the task o
f introducing and integrating computer and network technologies into the activit
ies of an organization. The generation and accessibility (provision) of informat
ion content are briefly investigated in the following subsections. Also provided is a short overview of the classification of the information content, along with a subsection on some aspects of content rating and filtering.
11.1.
Electronic I / O, Processing, Storage, and Retrieval of Multimedia Information
Generation and provision of information content involves a set of different step
s that generally take place one after the other, but sometimes with some overlap
ping and possibly also some iterative refinement. In principle, these steps are ind
ependent of the type of the information. However, in practice, multimedia inform
ation requires the most demanding methods of handling because of its complexity.
(Multimedia means that the information is a mix of data, text, graphics, audio, sti
ll video, and full video components.) Obviously, before starting with the steps,
some elementary questions should be answered: what information is to be put into the stored content, and how and when is this information to be involved? These quest
ions can be answered more or less independently of the computer network. If the
answers to these questions above are known, the next phase is to start with that
part of the task related to the network itself. The basic chain of steps consists of the following elements:
1. Inputting the information electronically into the content-providing server
2. Processing the input in order to build up the content by using well-prepared data
3. Storing the appropriate information in the appropriate form
4. Providing anytime accessibility of the information
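The four-step chain can be sketched as a toy content store; the function names and the "processing" performed here are invented for illustration only:

```python
# A toy content-providing server: input -> process -> store -> provide.
store = {}  # the "stored content", keyed by a simple identifier

def input_item(raw):
    """Step 1: accept electronically input information."""
    return raw.strip()

def process(text):
    """Step 2: prepare the data for the content (here: normalize case)."""
    return text.lower()

def store_item(key, text):
    """Step 3: store the information in the appropriate form."""
    store[key] = text

def provide(key):
    """Step 4: make the information accessible at any time."""
    return store.get(key)

store_item("welcome", process(input_item("  HELLO, World  ")))
print(provide("welcome"))  # hello, world
```

In a real system each step is far richer (media conversion, indexing, replication), but the pipeline shape is the same.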
The first step is more or less straightforward using the available input devices (k
eyboard, scanner, microphone, camera, etc.) The next steps (processing and stori
ng) are a bit more complicated. However, this kind of processing should be built
into the software belonging to the application system. This kind of software is
independent of the information itself and normally is related exclusively to th
e computer network system under consideration. More important is how to find
the necessary information when it is needed, whether the information is internal
or external to the organization. The task is relatively simple with internal in
formation: the users within an organization know the system or at least have dir
ect access to those who can supply the key to solving any problems in accessing
it. However, the task of looking for external information will not be possible w
ithout the most recent general tools. These tools include different kinds of dis
tributed database handling techniques. More important, they include a number of
efficient and convenient World Wide Web browsers, search engines, directories, gene
ral sites, portals, topical news servers, and custom information services. With
these new ways of accessing the required information and a good infrastructure a
s a basis, there is virtually no search task that cannot be solved quickly and efficiently.
11.2.
Classification of the Information Content
The more application areas there are (see Section 8 above), the more content classes can be identified. Thus, we can distinguish between:
Public and private contents (depending on who can access them)
Personal, administrative, and industrial contents (depending on where they were generated and used)
These classifications determine the requirements for the contents that are under consideration. The main aspects of these requirements are:
Availability (where and how the content can be reached)
Accessibility (what authorization is needed for accessing the content)
Reliability (what levels of reliability, validity, and completeness are required)
Adequacy (what level of matching the content to the required information is expected)
Updating (what the importance is of providing the most recent related information)
Ease of use (what level of difficulty in accessing and using the information is allowed)
Rating (what user groups are advised to, discouraged from, or prevented from accessing the content)
Some of these aspects are closely related to the networking background (availabi
lity, accessibility), while others are more connected to the applications themse
lves. However, because accessing is always going on when the network is used, us
ers may associate their possibly disappointing experiences with the network itse
lf. Thus, reliability, adequacy, updating, ease of use, and rating (content qualification) are considered special aspects in generating and providing network acces
sible contents, especially because the vast majority of these contents are avail
able nowadays by using the same techniques, from the same source, the World Wide
Web. Therefore, special tools are available (and more are under development) fo
r supporting the care taken of these special aspects. Networking techniques cannot
help too much with content reliability. Here the source (provider or broker) is
the most important factor in how the user may rely on the accessed content. The
case is similar with adequacy and updating, but here specific issues are brought in
by the browsers, search techniques / engines, general sites, portal servers (th
ose responsible for supporting the search for adequate information), and cache s
ervers (mirroring Web content by taking care of, for example, correct updating).
Some specific questions related to rating and filtering are dealt with in the next su
bsection.
11.3.
Rating and Filtering in Content Generation, Provision, and Access
Although the amount of worldwide-accessible information content is increasing ex
ponentially, the sophistication, coverage, and efficiency of the tools and services
mentioned in the previous two subsections are increasing as well. However, one
problem remains: the user can never exactly know the relevance, reliability, or
validity of the accessed information. This problem is well known, and many effor
ts are being made to find ways to select, filter, and, if possible, rate the different
available sources and sites (information providers) as well as the information
itself. An additional issue should be mentioned: some information content may be
hurtful or damaging (to minorities, for example) or corrupting, especially to c
hildren not well prepared to interpret and deal with such content when they acci
dentally (or consciously) access it. There is no good, easily applicable method
available today or anticipated in the near future for solving the problem. The o
nly option at the moment is to use flags as elementary information for establishing a certain level of rating and filtering. This way, special labels can be associate
d with content segments so that testing the associated labels before accessing t
he target content can provide information about the content itself. These flags ca
n provide important facts about the topics, the value, the depth, the age, and s
o on, of the related content and about the target visitors of the site or target
audience of the content. The flags can be associated with the content by the conte
nt provider, the content broker, the service provider, or even the visitors to t
he related sites. However, because appropriate flags can be determined only if the
content is known, the most feasible way is to rely on the content provider. The
problem is that if the provider consciously and intentionally wishes to hide the
truth about the content or even to mislead the potential visitors / readers, th
ere is practically no way of preventing such behavior. This is a real problem in
the case of the above-mentioned hurtful, damaging, or corrupting contents, but
also with any other contents as well, as far as their characteristics (validity,
value, adequacy, age, etc.) are concerned. If flags are present, filtering is not a difficult task. It can be performed either manually (by a human decision whether or not to access) or by an automated process, inhibiting the access if nece
ssary or advisable. However, the above procedure assumes standardized techniques
and standard labeling principles / rules, as well as fair usage. There is still
a lot of work to do before these conditions will generally be met (including codification). More has to be done with respect to future intelligent machine techniques that could automatically solve the tasks of labeling and filtering. Solving thi
s problem is extremely complex but also extremely
important and urgent because of the exponentially increasing amount of content a
ccessible through the Internet.
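The flag-based filtering described in this subsection can be sketched as follows; the label names and the policy rules are invented for illustration and do not come from any rating standard:

```python
# Flag-based content filtering: labels are attached by the content provider;
# the client tests the labels before fetching the content itself.

def allowed(labels, policy):
    """Return True if content with the given labels passes the policy."""
    if labels.get("audience") == "adult" and policy.get("block_adult"):
        return False
    if labels.get("age_days", 0) > policy.get("max_age_days", float("inf")):
        return False  # content considered too old to be adequate
    return True

policy = {"block_adult": True, "max_age_days": 365}
print(allowed({"audience": "general", "age_days": 10}, policy))  # True
print(allowed({"audience": "adult", "age_days": 10}, policy))    # False
```

As the text notes, the whole scheme stands or falls with honest labeling by the content provider; the mechanics of the check itself are trivial.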
12.
TRENDS AND PERSPECTIVES
Computer networking is a continuously developing technology. The theory and techniques behind it, the worldwide infrastructure, the globally available services, the widening spectrum of applications, the exponentially growing amount of accessible content, and the ability and readiness of hundreds of millions of users are all going through an intensive developmental process. With the progression of the networks (sometimes simply called the Interne
t), together with the similarly rapid evolution of the related computing, commun
ications, control, and media technologies and their applications, we have reache
d the first phase of a new historical period, the Information Society. The evolving
technologies and applications penetrate every segment of our life, resulting in
a major change in how people and organizations, from small communities to globa
l corporations, live their lives and manage their activities and operations. Beh
ind the outlined process is the development of computer networking in the last 3
0 years. Global connectivity of cooperating partners, unbounded access to tremen
dous amounts of information, as well as virtually infinite possibilities of integra
ting worldwide distributed tools, knowledge, resources, and even human capabilit
ies and talents are becoming a reality as we go rapidly down a road that we have
only just started to pave. Although the future of networking is impossible to s
ee today, some important facts can already be recognized. The most important att
ributes of the evolution in computer networking are openness, flexibility, and inte
roperability:
Openness, in allowing any new technologies, new solutions, new elements in the global, regional, and local networks to be integrated into the existing and continuously developing system of infrastructure, services, and applications
Flexibility, in being ready to accept any new idea or invention, even if unforeseen, that takes global advancement a step ahead
Interoperability, in allowing the involvement of any appropriate tool or device so that it can work together with the existing system previously in operation
These attributes stem from:
The more conscious care taken of the hierarchical and / or distributed architecture in networking technology
The cautious standardization processes going on worldwide
The carefulness in taking into consideration the requirements for interfacing with evolving new tools and devices
As a result, the progression in networking continues unbro
ken and the proliferation of network usage unsaturated, even at the start of the
third millennium. Some trends in this development process are briefly outlined in
the following subsections.
12.1.
Network Infrastructure
Major elements of the trends in the development of the network infrastructure ca
n be summarized, without aiming at an exhaustive listing, as follows:
Transmission speeds are increasing rapidly. The trend is characterized by the introduction of more fiber-optic cables; the application of wavelength division multiplexing and multiple wavelengths in the transmission technique (utilizing the fact that a single fiber can guide several different frequency waves in parallel, sharing these waves among the information coming from different sources and being transmitted to different destinations); the integration of both guided and radiated waves in data communication by combining terrestrial and satellite technologies in radio transmission; the introduction of new techniques for mobile communication; the application of a more structured hierarchy in the infrastructure; and more care being taken of last-mile connectivity (the access speeds at the final sections of the connections, close to the individual PCs or workstations), so that multiple-Mbps cable modems, digital subscriber lines, cable TV lines, direct satellite connections, or wireless local loops are applied close to the user terminals.
Active devices of the infrastructure, together with the end user hardware and so
ftware tools and devices, are improving continually, allowing more intelligence
in the network.
The number, capability, and know-how of the public network and telecom operators are increasing,
so market-oriented competition is continuously evolving. The result is lower pri
ces, which allows positive feedback on the wider dissemination of worldwide netw
ork usage. Governments and international organizations are supporting intensive
development programs to help and even motivate fast development. Tools applied include financial support of leading-edge research and technological development, introduction of special rules and regulations about duties and taxes related to information technologies, and, last but not least, accelerated codification of Internet-related issues. As a result of these elements, the network infrastructure is
developing rapidly, allowing similarly fast development in the service sector.
12.2.
Services
The trends in the evolution of networking services, available mostly through Int
ernet service providers (ISPs) and covering domain registration and e-mail and W
eb access provision, as well as further service types related to providing conne
ctivity and accessibility, can be characterized by continuing differentiation. A
lthough the kinds of services don't widen considerably, the number of users needin
g the basic services is growing extremely fast, and the needs themselves are bec
oming more fragmented. This is because the use of the network itself is becoming
differentiated, covering more different types of applications (see Section 12.3
) characterized by different levels of bandwidth and quality demand. Quality of
service (QoS) is the crucial issue in many cases. From special (e.g., transmission-intensive critical scientific) applications to real-time high-quality multimedia
videoconferencing to bandwidth-demanding entertainment applications and to less
demanding e-mail traffic, the different quality-level requirements of the service r
esult in multiple grades of QoS, from best-efforts IP to guaranteed transmission
capacity services. Different technologies, from asynchronous transfer mode (ATM
) managed bandwidth services (MBS) to new developments in Internet Protocol (IP)
level quality management, support the different needs with respect to QoS grade
s. Lower quality demand may result in extremely low prices of service. An import
ant trend is that access to dark fiber (optical cables as physical infrastructure e
lements rather than leased bandwidths or managed connectivity services) and / or
just separate wavelengths (specific segments of the physical fiber capacity) is becom
ing available and increasingly popular among well-prepared users, besides the tr
aditional, mainly leased-line, telecom services. While this new element in the s
ervice market assumes higher levels of technological know-how from the user, the
cost aspects may make this possibility much more attractive in some cases than b
uying the more widespread traditional services. This also means an important new
kind of fragmentation in the market, based on buying fiber or wavelength access an
d selling managed transmission capacity. Another trend is also evolving with res
pect to services, namely in the field of providing and accessing content on the Web
. It is foreseeable that content provision services such as hierarchical intelli
gent caching / mirroring, as well as content-accessing services such as operatin
g general and topical portals or general and topical sites, will proliferate rap
idly in the future, together with the above-mentioned trend toward separation of
content provision and content brokerage. These trends in networking services wi
ll result in a more attractive background for the increasing amount of new appli
cations.
12.3.
Applications
An overview of the development trends in network infrastructure and networking s
ervices is not difficult to provide. In contrast, summarizing similar development t
rends in the far less homogeneous applications is almost impossible because of t
heir very wide spectrum. However, some general trends can be recognized here. Fi
rst, the areas of applications do widen. Just to mention some key potential tren
ds:
The most intensive development is foreseeable in the commercial and business app
lications (e-commerce, e-business).
Industrial applications (including the production and service industries) are al
so emerging rapidly; one of the main issues is teleworking, with its economic as well as social
effects (teleworking means that an increasing number of companies are allowing
or requesting part of their staff to work for them either at home or through tea
mwork, with team members communicating through the network and thus performing t
heir joint activities at diverse, distant sites).
Applications in the entertainment sector (including home entertainment) are also becoming more common, but here the trends in the prices of the service industry have a strong influence.
Applications in governmental administration and the field of wide access to publicly available
administrative information seem to be progressing more slowly at the start of th
e new millennium. The reasons are probably partly financial, partly political (lack
of market drive and competition results in lack of elevated intensity in the de
velopment of the networking applications too). Applications related to science a
nd art will also evolve rapidly, but the speed will be less impressive than in t
he commercial area, although internal interest and motivation complement specific m
arket drive elements here. Some special factors are playing an important role in
the fields of telemedicine and teleteaching (teleinstruction). Here, the healthy m
ix of market-driven competitive environments, governmental commitment, and wide
public interest will probably result in an extremely fast evolution. Second, som
e general features of the applications are showing general trends as well, indep
endently of the application fields themselves. Two of these trends are:
Multimedia transmission is gradually entering practically all applications, toge
ther with exploitation of the above-mentioned techniques of telepresence and teleimmersion (thes
e techniques are characterized by combining distant access with virtual reality
and augmented reality concepts; see Section 2). The concept of grids will probab
ly move into a wide range of application areas. In contrast to the well-known client-server scheme, this concept exploits the potential of integrating distributed
intelligence (processing capability), distributed knowledge (stored information
), and distributed resources (computing power) so that a special set of tools (c
alled middleware) supports the negotiation about and utilization of the intellig
ence, knowledge, and resource elements by goal-oriented integration of them with
in a grid. (A grid is a specific infrastructure consisting of a mutually accessible
and cooperative set of the joined hosting sites, together with the distributed
intelligence, knowledge, and resources and the middleware tools.) The developmen
t trends in the field of the applications obviously deeply influence the trends in the
area of content generation and content provision.
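The middleware's matching of tasks to distributed resources, as described above, can be sketched in a much simplified form; the site names, capacity units, and selection rule below are invented for illustration:

```python
# A toy grid "middleware": pick the joined hosting site with the most free
# computing capacity for a given task.

sites = {"site-a": 4, "site-b": 10, "site-c": 7}  # free capacity units

def schedule(task_cost, capacity):
    """Select a site that can host the task, preferring the most free one."""
    candidates = {s: c for s, c in capacity.items() if c >= task_cost}
    if not candidates:
        return None  # no site can host the task right now
    chosen = max(candidates, key=candidates.get)
    capacity[chosen] -= task_cost  # the resource is now partly in use
    return chosen

print(schedule(6, sites))  # site-b (the site with the most free capacity)
```

Real grid middleware adds negotiation, authentication, and data movement on top of this kind of resource matching.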
12.4.
Information Content
Forecasting in the area of content generation and content provision is even more
difficult than in the field of network applications. Only some basic issues can be mentioned here. The first important trend is the emergence of a new branch of industr
y, the content industry. No longer are content generation and content provision
secondary to building the infrastructure, providing services, and / or introduci
ng or keeping applications. The related complex tasks (at least in the case of p
rofessional activities of a high standard) require large investments and call fo
r adequate return on these investments. Well-organized joint efforts of highly e
ducated and talented staff are required in such demanding activities. The second
trend is directly linked to the amount of work and money to be invested in cont
ent generation. Contents will gain more and more value in accordance with such i
nvestments. Moreover, while accessing the infrastructure and hiring services wil
l probably become less expensive in the future, the price for accessing valuable
information will probably grow considerably. The price for accessing any inform
ation (content) will soon be cost based, depending on the size of the investment
and the potential number of paying users. The third trend is also due to the in
creasing complexity of the tasks related to content generation and content provi
sion. Task complexities have initiated the separation of the activities related
to generating contents from those related to organizing services for content pro
vision to the public. This means that special expertise can be achieved more eas
ily in both of these types of demanding activities. The fourth trend is related
to storing the information. Future contents will be distributed: because transmi
ssion will get cheaper and cheaper, merging distant slices of information conten
t will be made much cheaper by maintaining the current distance and accessing th
e required remote sites as needed instead of transporting these slices into a ce
ntral site and integrating them on-site. However, in order to exploit this possi
bility, a certain level of distributed intelligence is also required, as well as
inexpensive access to the connectivity services. This trend is closely related
to those regarding portals and grids discussed in the previous subsection.
XML is an emerging standard of note. Applied for the exchange of structured information, it is increasingly used in e-commerce applications.
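As a small illustration of XML as an exchange format for structured information, the following sketch parses a hypothetical e-commerce order; the element and attribute names are invented for the example:

```python
import xml.etree.ElementTree as ET

# A hypothetical order document, as it might travel between trading partners.
doc = "<order><item sku='A-1' qty='2'/><item sku='B-7' qty='1'/></order>"

root = ET.fromstring(doc)
total = sum(int(item.get("qty")) for item in root.findall("item"))
print(total)  # 3
```

The point of XML in this role is that both sides agree on the element structure, so either party can validate and process the document mechanically.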
13. NETWORKING IN PRACTICE
The following subsections go into some detail about the practical issues of networking. A brief overview is provided of the suggested way of starting: analyzing the environment in advance, planning the applications and their prerequisites, designing the network itself, and implementing the appropriate hardware and software. Classification of networking solutions and implementation of network technology are treated separately. As in the other parts of this chapter, the reader should look to the specialized literature, some of which is listed in the References, for a more detailed discussion.
13.1. Methodology
Designing a good network is a complex task. It is essential that the planning and implementation phase be preceded by a thorough survey in which at least the following questions are answered:

- What is the principal goal of building the network?
- What is the appropriate type and category of the network?
- Is there a need for integration with existing networks?
- What kinds of computers and other hardware equipment are to be connected?
- What kinds of applications are to be implemented over the network?
- What kinds of protocols should be supported?
- How much traffic should be carried by the network?
- What is the required bandwidth and throughput of the network?
- What is the highest expected peak load of the network?
- What is the maximum acceptable delay (latency) on the network?
- What level of reliability should be guaranteed?
- What level of security has to be reached?
- What kinds of enhancements and improvements are expected in the future?
- What level of scalability is desirable?
- What kinds of physical constraints (distance, trace, etc.) should be taken into consideration?
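Several of the questions above (traffic, bandwidth, peak load) can be turned into a first numeric estimate. The sketch below is a rough, illustrative model only; the application names, per-user rates, and headroom factor are invented assumptions, not figures from this chapter:

```python
# Rough capacity sketch: translate survey answers into a bandwidth figure.
APP_RATE_KBPS = {       # assumed sustained rate per active user, in kbit/s
    "email":        10,
    "web":         100,
    "file_share":  400,
    "video_conf": 1500,
}

def required_bandwidth_mbps(active_users: dict, headroom: float = 0.5) -> float:
    """Sum per-application peak demand and add headroom for bursts."""
    peak_kbps = sum(APP_RATE_KBPS[app] * n for app, n in active_users.items())
    return peak_kbps * (1 + headroom) / 1000

demand = required_bandwidth_mbps({"email": 200, "web": 80, "video_conf": 6})
print(f"Provision at least {demand:.1f} Mbit/s")
```

A real survey would replace the assumed rates with measured or vendor-quoted figures, but even this crude arithmetic forces the peak-load question to be answered explicitly.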
The actual process of implementation depends on the answers to these questions. The result (the adequacy and quality of the network) will depend greatly on how consistently the rules of network building have been followed. Designing the most appropriate topology is one of the key issues, because many other decisions depend on the actual topology. If the topology is not designed carefully, the network may fall far short of optimum adequacy, quality, and cost/performance. Choosing the right physical solutions and low-level protocols is also very important: low-level protocols (TCP/IP, etc.) have a strong impact on the functionality of the network. Building a multiprotocol network is advisable only if there is an explicit need for it, since it may increase costs and make management and maintenance difficult and complicated.
13.2. Classification
Classification of networks by type may help significantly in selecting the solution for a specific application. Obviously, different types of networks are designed to serve different types of application needs. Networks can be classified by a number of
technologies and/or protocols. Fortunately, modern network equipment allows the use of different technologies and protocols at the same time within the same network; such equipment takes care of the necessary translations when routing the information traffic.
13.3. Implementation Issues
Like the design process, implementation is a complex task that can involve difficult problems. The design team should understand the available technological solutions well in order to select the most appropriate ones for each part of the system. For example, Ethernet is a widespread and efficient solution in most cases, but because it does not guarantee bounded response times, it cannot be used in certain real-time environments. Ethernet is reliable, effective, and inexpensive, but its CSMA/CD algorithm is not real time. In practice, the process to be controlled over the network is often much slower than the Ethernet technology, so, especially if network capacity is generously oversized, no problem will arise in such cases; theoretically, however, it is not the best choice. The choice and means of implementation are, of course, also a financial question, and all the important factors should be weighed in looking for a good compromise. Decisions about what and how to implement also have a serious impact on the reliability and security of the network. Building a high-reliability, high-availability network requires redundant components: communication lines and passive and active network components all have to be duplicated in a well-designed way to obtain a partly or fully fault-tolerant system. This may, of course, add significant extra cost. Similarly, if extra-high levels of security must be guaranteed, a number of additional hardware and software components may be needed. Again, a good compromise can be found by weighing the reliability and security requirements against the cost.
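The non-real-time behavior of CSMA/CD mentioned above comes from truncated binary exponential backoff: the waiting time after repeated collisions is random and its upper bound grows exponentially, so no hard response-time guarantee can be given. A small sketch (the 51.2 µs slot time is that of classic 10 Mbit/s Ethernet; the loop output is illustrative):

```python
import random

SLOT_TIME_US = 51.2   # slot time for classic 10 Mbit/s Ethernet

def backoff_delay_us(collisions: int, rng: random.Random) -> float:
    """Truncated binary exponential backoff as used by CSMA/CD:
    after the n-th collision, wait a random number of slot times
    drawn from 0 .. 2**min(n, 10) - 1."""
    k = min(collisions, 10)
    return rng.randrange(2 ** k) * SLOT_TIME_US

rng = random.Random(42)
for n in (1, 4, 8, 16):
    worst = (2 ** min(n, 10) - 1) * SLOT_TIME_US
    print(f"after {n:2d} collisions: worst-case wait {worst:9.1f} us, "
          f"one random draw {backoff_delay_us(n, rng):9.1f} us")
```

The worst-case wait after many collisions runs to tens of milliseconds, which is exactly why a hard-real-time control loop cannot rely on plain CSMA/CD Ethernet, while a slow process with oversized capacity rarely collides at all.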
14. NETWORKING IN THE PRODUCTION AND SERVICE INDUSTRIES
The application of computer networks in the production and service industries should be fitted to the required complexity and coverage. As far as the complexity of network usage is concerned, two main aspects should be taken into consideration:

1. What kind of infrastructure is built within the organization (including internal structure and technology as well as connectivity to the outside world)? The spectrum runs from connecting a few workplaces (PCs) to each other, and possibly to the Internet, by the simplest LAN techniques and telephone modems, respectively, to using the most complex high-speed intranet and/or extranet applications with high-end workstations, or even large computer centers, together with broadband connections to the global Internet.

2. What kinds of services are applied? In this respect, the simplest solutions allow only the simple exchange of messages between workplaces, while the other end is characterized by the exchange of the most complex multimedia information: using World Wide Web techniques, accessing large databases, applying distributed information processing, utilizing virtual environments, exploiting network-based long-distance collaboration, and so on.

Coverage of network usage also involves two important aspects:

1. What kinds of applications are introduced within the organization? The low end is simple word processing with possible electronic document handling. The other extreme is characterized by a complex system of network-based planning, computer-based distributed decision making, network-based integrated management, computer-aided design, manufacturing, and testing, distributed financing by the use of computer networking, computerized human resource management, networked advertising, promotion, marketing, retailing, online sales transactions, public relations, and so on.

2. What amount and complexity of information content are generated, processed, and stored by the networked system of the organization? The levels here are very much company specific, but companies in both the production and the service industries can start with applications using elementary sets of information and may advance until virtually the entire information and knowledge base about the full spectrum of activities within the company (and about the related outside world) is appropriately stored and processed by a well-established system of networked services and applications.

The basic principle here is that introducing computers and computer networks amplifies the strengths as well as the weaknesses of the organization. Well-organized companies can gain a lot from computerized and networked applications. However, those with considerable organizational
problems mustn't look for a miracle: they will rapidly find that these tools increase rather than decrease the problems of operating the related production or service activities. The full process (from the first elementary steps to completing the implementation of the network-based system) is a combination of two basic components:

1. Introducing and integrating network technology (infrastructure and services) inside the company
2. Introducing and integrating networked applications into the activities within the company (together with continuously building up the information content)

If the new technologies and methods are introduced and integrated carefully, in a systematic manner, and with appropriate restructuring and re-forming of all the related activities within the company, the results will be higher efficiency, better performance, and lower costs, provided that good management exploits the opportunities that are made available. Moreover, properly introduced and appropriately applied computer techniques and network technologies not only help enhance efficiency and performance and cut costs but also elevate competitiveness, which in turn supports survival against increasing worldwide competition.
15. SOME PRACTICAL ASPECTS OF INTRODUCING AND USING COMPUTER NETWORKS IN INDUSTRIAL ENGINEERING
As mentioned above, applying up-to-date computer networks in the production and service industries requires two basic steps:

1. Integrating network technology (Internet connectivity and accessibility) in the related organization or company
2. Integrating networking services and network-based applications (Internet technology) in the operations and activities of the organization or company

The most important practical issues with regard to these steps are briefly investigated in the following subsections, but some preliminary comments should be made first. There are, in principle, two distinct possibilities, based on the top-down and the bottom-up approach, respectively. In the case of fast, top-down introduction of network technology, an advance analysis (feasibility study) should be performed in order to avoid the risks associated with taking a giant step ahead with networking in the company without careful preparation. With the bottom-up approach, companywide networking is reached much more slowly, but advancing step by step makes it possible to correct any mistakes and to adapt to all the recognized internal circumstances, so the risks are minimized. In any case, the top-down approach usually requires outside guidance from a professional company that specializes in introducing and integrating network technology in the organization and activities of industrial companies. This is normally not required with the bottom-up approach, although preliminary analysis and outside advice may help a lot there, too.

Another issue is coverage and depth in introducing networked information technology. The basic questions are where to stop with the bottom-up approach and what goals to define with the top-down approach. Coverage here means organizational coverage (which departments to involve) and geographic coverage (which sites to connect to the corporate network), while depth relates to the level at which network technology is introduced (which tasks to involve and what level of completion to achieve by network integration). A further issue is whether to buy an off-the-shelf solution, buy a semicustom solution and adapt it to the local circumstances, or start in-house development of a fully custom solution. For small enterprises, the first alternative will usually be optimum; for high-end corporations, in many cases, the third. Although the second alternative is today the most common way of integrating network technology in a company, each organization should be considered separately and given its own specific handling.
15.1. Internet Connectivity
The introduction of Internet technology in industrial engineering starts with a preliminary definition of the company's business requirements and objectives. From these, the potential network applications can be derived, and the planning and implementation phases can be started.
The first steps in the process of establishing Internet connectivity at an industrial company are to define the specifications and launch the necessary procedures:

- Consulting with experts about possibilities and goals
- Identifying what Internet applications will be introduced
- Defining human resource needs and deciding on staff issues related to networking
- Contracting an advisor/consultant to support activities related to company networking
- Estimating what bandwidth the selected applications will require
- Determining what equipment (hardware and software devices and tools) will be necessary
- Deciding about security issues and the extra equipment required
- Selecting the most appropriate Internet service provider (which may be the company itself)
- Deciding about the domain name(s) to be applied by the company
- Negotiating with the ISP about the services to be bought (provided that the company is not its own ISP)
- Starting to purchase the necessary equipment

Naturally, the above steps do not follow each other in linear order. An iterative, interactive process is to be assumed, in which earlier decisions may sometimes be changed because specific problems, barriers, or difficulties are recognized later. The next steps should be devoted to preparations inside the company (or company sites). This means not just planning and building the in-house infrastructure (cabling, server room, etc.) but also deciding about the internal server structure (domain name server, mail server, web server, etc.). Another task is starting to purchase the components (hardware and software) of the internal infrastructure and services that will be needed to cover the target applications. A further task is to prepare the tools and devices for the applications themselves. Ideally, the full process may take only several weeks, but it may last several months (e.g., in the case of a complex infrastructure covering several sites, each bringing its own complex networking tasks into the picture).

Behind some of the above decisions and steps lie several options and choices (bandwidth of connectivity, level of security, server structure, equipment brands, etc.), some of which also involve selecting among different technology variants. These alternatives require careful analysis before decisions are made. It is not unusual for the Internet connection to need upgrading later. This is normally not a problem at all and can be performed relatively simply, even if the upgrade also means a technological change.
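One small, automatable piece of the domain-name decision above is checking that candidate names are syntactically valid before registration is attempted. A sketch using only the Python standard library; the length and character rules encoded here follow the common DNS label conventions, and the candidate names are invented:

```python
import re

# A DNS label: 1-63 letters, digits, or hyphens; no leading/trailing hyphen.
LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Syntactic check of a candidate domain name against the usual
    DNS rules (label syntax plus the overall 253-character limit)."""
    if not name or len(name) > 253:
        return False
    return all(LABEL.match(label) for label in name.rstrip(".").split("."))

for candidate in ("example-plant.com", "-bad-.com", "plant_floor.example"):
    print(candidate, "->", is_valid_hostname(candidate))
```

A syntactic check is of course only the first filter; availability and trademark questions remain matters for the ISP negotiation and legal review mentioned in the surrounding steps.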
15.2. LANs, Intranets, WANs, and Extranets
Establishing Internet connectivity is just the first step in the process. It makes communication with the outside world possible, but it doesn't yet solve the task of establishing internal connections within the enterprise. In the case of a small enterprise and a single company site, the only additional task in establishing connectivity is to implement interconnections between the PCs and workstations used by the company staff. This is done by building the LAN (local area network) of the organization so that the resources (servers as well as PCs and workstations) are connected to the in-house infrastructure of the company, in most cases an Ethernet network. The required speed of the internal network depends on the traffic estimates; new Ethernet solutions allow transmission at even gigabits per second. Once the internal LAN is established, the next question is how the users within the LAN will communicate with their partners outside the corporate LAN. Here is where the company intranet enters the picture. The intranet is a network utilizing the TCP/IP protocols that are the basis of the Internet but belonging exclusively to the company and accessible only to the company staff. This means that outside partners can access the machines within the intranet only if they have appropriate authorization to do so. The intranet is in most cases connected to the global Internet through a firewall, so that although the websites within the intranet look and behave just like websites outside it, the firewall in front of the intranet prevents unauthorized access. In companies with separate, distant sites, all the sites may have their own LANs, firewalls, and intranets. In most cases these are connected to each other through the public MAN (the high-speed metropolitan area network of a town or city), or sometimes through the regional WAN (the extra-high-speed wide area network of a large region), although private network connections can also be applied in case of specific security or traffic requirements. Intranets connected through public MANs or WANs may also be interconnected so that, in spite of the geographically scattered locations of the related company sites, they behave as a single
intranet. A common solution is to establish extranets: the separate LANs are connected by a virtual private network over the public MAN or WAN, and authorized access is controlled by a set of firewalls providing intranet-like traffic between the authorized users (company staff and authorized outside partners). Thus, extranets may also be considered distributed intranets, accessible not only to company employees but also, in part, to authorized users outside the organization or company. LANs, intranets, and extranets provide a means for organizations or companies to make the best use of the possibilities stemming from integrating the Internet in their operations and activities without losing security. Although LANs, intranets, and extranets are quite scalable, it is wise to think ahead when planning the infrastructure and services of the company so that extension and upgrading do not occur more frequently than really necessary. The network address schema is determined by the infrastructure, and changing the network numbering later can be costly.
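The point about network numbering can be made concrete: carving per-site subnets out of one private address block up front leaves room to grow without renumbering later. A sketch with Python's ipaddress module; the choice of the 10.0.0.0/8 block and a /16 per site are illustrative assumptions, not a schema prescribed by the chapter:

```python
import ipaddress

# A private (RFC 1918) block commonly used for intranet numbering.
INTRANET_BLOCK = ipaddress.ip_network("10.0.0.0/8")

def site_subnets(block, sites: int, new_prefix: int = 16):
    """Carve one subnet per company site out of the private block."""
    subnets = block.subnets(new_prefix=new_prefix)
    return [next(subnets) for _ in range(sites)]

for site, net in zip(("HQ", "Plant A", "Plant B"),
                     site_subnets(INTRANET_BLOCK, 3)):
    print(f"{site:8s} {net}  ({net.num_addresses} addresses)")
```

Leaving most of the block unassigned means a new site, or a site that outgrows its subnet, gets fresh address space instead of forcing a company-wide renumbering.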
15.3. World Wide Web Communication, Integration, and Collaboration
Once the LANs are in place, with their servers, PCs, and workstations, and the intranet/extranet infrastructure is working well (taking care of secure interconnectivity with the outside world through appropriate routing and firewall equipment), the details of the applications come into the picture. Here the best solutions can be built by exploiting the possibilities that modern World Wide Web systems provide. The Web is the global system of specialized Internet servers (computers delivering web pages to the machines asking for them). The Web is also a tool integrating:

- Worldwide addressing (and thus accessibility)
- The hypertext technique (allowing navigation through a multitude of otherwise separate websites by following the links associated with pieces of web page content)
- Multimedia capabilities (covering a wide spectrum of formats used by Web content, from structured text to graphics and from audio to still and live video)

Interacting with Web content by initiating content-oriented actions, including customizing them, is thus supported, too. The Web is an ideal tool not only for worldwide information access but also for communication, integration, and collaboration inside an enterprise and among enterprises (see also eXtensible Markup Language [XML]). The key is the combination of Web technology with the intranet/extranet environment. Practically, this means operating several web servers within the enterprise so that some serve only internal purposes and thus do not allow outside access, while others support communication across the enterprise boundaries. The latter normally differ from other web servers only in their ability to take over public content from the internally accessible servers; the structure, linking, and content of the former are constructed so that they directly serve internal communication, integration, and collaboration.

This type of communication differs from general e-mail usage only in providing a common interface and a well-organized archive of what has been communicated through the system. It makes it possible to build the company's archive continuously by automatically storing all relevant recorded documents for easy, fast, and secure retrieval, which can considerably improve the management of the organization or company. More important, such a Web-based store of all relevant documents also takes care of integrating all the activities resulting in or stemming from those documents. If a document starts its life in the system, any access to it, modification of it, or attachment to it (in general, any event related to it) is uniquely recorded by the system. Thus, a reliable and efficient way of integrating the activities within the company, as well as between the company and its partners, can be achieved. Typical examples are e-commerce and e-business.

Communication and integration within an enterprise by the use of computer networks also mean high-quality support for overall collaboration, independent of where, when, and how the collaborating staff members take part in the joint activities. The only prerequisite is that they all observe the rules for accessing the company websites. (Efficiently functioning systems are aware of these rules, too.) Different departments, from design to manufacturing, from sales to marketing, and from service provision to overall logistics, can collaborate in this way so that the company can maintain reliable and efficient operation. However, as mentioned above, disorder in the operations of a company is amplified when World Wide Web communication, integration, and collaboration are introduced. Thus, in order to achieve a really reliable and efficient system of operations, careful introduction and well-regulated usage of the network-based tools are a must.
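The event-recording idea described above can be sketched in a few lines: a document object whose every read and modification appends an entry to an append-only log. This is an illustrative toy (class and field names invented), not a design from the chapter:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TrackedDocument:
    """A document whose life cycle is recorded: every access or
    modification leaves an entry in an append-only event log."""
    name: str
    content: str = ""
    events: list = field(default_factory=list)

    def _record(self, user: str, action: str) -> None:
        self.events.append((datetime.now(timezone.utc), user, action))

    def read(self, user: str) -> str:
        self._record(user, "read")
        return self.content

    def modify(self, user: str, new_content: str) -> None:
        self._record(user, "modify")
        self.content = new_content

doc = TrackedDocument("process-spec-017")
doc.modify("engineer.a", "revision 1")
doc.read("manager.b")
print([(user, action) for _, user, action in doc.events])
```

A production system would persist the log and cover attachments and deletions as well, but the principle is the same: because every event passes through the archive, the activities that produce and consume a document are integrated automatically.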
All this would be impossible without computer networking. And this is how industrial engineering can benefit fully from the computer network infrastructure and services within an industrial organization or company.
16. SUMMARY
Computer networking in industrial engineering is an invaluable tool for establishing and maintaining reliable and efficient operation and thus competitiveness. By utilizing the possibilities provided by the Internet and the World Wide Web, industrial enterprises can achieve results that were earlier impossible even to imagine. However, the present level of networking technology is just the starting phase of an intensive global evolution. Development will not stop, or even slow down, in the foreseeable future. New methods and tools will continue to penetrate all kinds of human activities, from private entertainment to business life, from government and administration to industrial engineering. Getting involved in the mainstream of exploiting the opportunities, by getting connected to the network and introducing the new services and applications based on the Net, means being prepared for the coming developments in computer networking. Missing these chances, however, means not only losing present potential benefits but also lacking the ability to join future network-based innovation processes. In 1999, the worldwide Internet Society announced its slogan: "The Internet is for everyone." Why not for those of us in the industrial engineering community?
REFERENCES
Ambegaonkar, P., Ed. (1997), Intranet Resource Kit, Osborne/McGraw-Hill, New York.
Agnew, P. W., and Kellerman, S. A. (1996), Distributed Multimedia: Technologies, Applications, and Opportunities in the Digital Information Industry: A Guide for Users and Practitioners, Addison-Wesley, Reading, MA.
Angell, D., and Heslop, B. (1995), The Internet Business Companion: Growing Your Business in the Electronic Age, Addison-Wesley, Reading, MA.
Baker, R. H. (1997), Extranets: The Complete Sourcebook, McGraw-Hill, New York.
Bernard, R. (1997), The Corporate Intranet, John Wiley & Sons, New York.
Bort, J., and Felix, B. (1997), Building an Extranet: Connect Your Intranet with Vendors and Customers, John Wiley & Sons, New York.
Buford, J. F. K. (1994), Multimedia Systems, Addison-Wesley, Reading, MA.
Cheswick, W., and Bellovin, S. (1994), Firewalls and Internet Security, Addison-Wesley, Reading, MA.
Conner-Sax, K., and Krol, E. (1999), The Whole Internet: The Next Generation, O'Reilly & Associates, Sebastopol, CA.
Croll, A. A., and Packman, E. (1999), Managing Bandwidth: Deploying QoS in Enterprise Networks, Prentice Hall, Upper Saddle River, NJ.
Derfler, F. (1998), Using Networks, Macmillan, New York.
Derfler, F., and Freed, L. (1998), How Networks Work, Macmillan, New York.
Dowd, K. (1996), Getting Connected, O'Reilly & Associates, Sebastopol, CA.
Foster, I., and Kesselman, C., Eds. (1998), The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann, San Francisco.
Goncalves, M., Ed. (1999), Firewalls: A Complete Guide, McGraw-Hill, New York.
Guengerich, S., Graham, D., Miller, M., and McDonald, S. (1996), Building the Corporate Intranet, John Wiley & Sons, New York.
Hallberg, B. (1999), Networking: A Beginner's Guide, McGraw-Hill, New York.
Hannam, R. (1997), Computer Integrated Manufacturing: From Concepts to Realization, Addison-Wesley, Reading, MA.
Hills, M. (1996), Intranet as Groupware, John Wiley & Sons, New York.
Huitema, C. (1998), IPv6, the New Internet Protocol, Prentice Hall, Upper Saddle River, NJ.
Kalakota, R., and Whinston, A. B. (1997), Electronic Commerce: A Manager's Guide, Addison-Wesley, Reading, MA.
Keshav, S. (1997), An Engineering Approach to Computer Networking, Addison-Wesley, Reading, MA.
Kosiur, D. (1998), Virtual Private Networks, John Wiley & Sons, New York.
Lynch, D. C., and Rose, M. T. (1993), Internet System Handbook, Addison-Wesley, Reading, MA.
Mambretti, J., and Schmidt, A. (1999), Next Generation Internet, John Wiley & Sons, New York.
Marcus, J. S. (1999), Designing Wide Area Networks and Internetworks: A Practical Guide, Addison-Wesley, Reading, MA.
McMahon, C., and Browne, J. (1998), CAD/CAM: Principles, Practice and Manufacturing Management, Addison-Wesley, Reading, MA.
Minoli, D. (1996), Internet and Intranet Engineering, McGraw-Hill, New York.
Minoli, D., and Schmidt, A. (1999), Internet Architectures, John Wiley & Sons, New York.
Murhammer, M. W., Bourne, T. A., Gaidosch, T., Kunzinger, C., Rademacher, L., and Weinfurter, A. (1999), Guide to Virtual Private Networks, Prentice Hall, Upper Saddle River, NJ.
Ptak, R. L., Morgenthal, J. P., and Forge, S. (1998), Manager's Guide to Distributed Environments, John Wiley & Sons, New York.
Quercia, V. (1997), Internet in a Nutshell, O'Reilly & Associates, Sebastopol, CA.
Salus, P. H. (1995), Casting the Net: From ARPANET to INTERNET and Beyond..., Addison-Wesley, Reading, MA.
Schulman, M. A., and Smith, R. (1997), The Internet Strategic Plan: A Step-by-Step Guide to Connecting Your Company, John Wiley & Sons, New York.
Singhal, S., and Zyda, M. (1999), Networked Virtual Environments: Design and Implementation, Addison-Wesley, Reading, MA.
Smith, R. E. (1997), Internet Cryptography, Addison-Wesley, Reading, MA.
Smythe, C. (1995), Internetworking: Designing the Right Architectures, Addison-Wesley, Reading, MA.
Stout, R. (1999), The World Wide Web Complete Reference, Osborne/McGraw-Hill, New York.
Taylor, E. (1999), Networking Handbook, McGraw-Hill, New York.
Treese, G. W., and Stewart, L. C. (1998), Designing Systems for Internet Commerce, Addison-Wesley, Reading, MA.
Ward, A. F. (1999), Connecting to the Internet: A Practical Guide about LAN-Internet Connectivity, Computer Networks and Open Systems, Addison-Wesley, Reading, MA.
Wesel, E. K. (1998), Wireless Multimedia Communications: Networking Video, Voice and Data, Addison-Wesley, Reading, MA.
Young, M. L. (1999), Internet Complete Reference, Osborne/McGraw-Hill, New York.
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.

The advantages of the Internet over previous closed, proprietary networks are numerous. First, the investment cost necessary to establish an Internet presence is relatively small compared to earlier
private value-added networks, which limited EDI applications to large corporations. Lower costs in turn allow small firms and individuals to be connected to the global network. The open TCP/IP protocols of the Internet also ensure that communicating parties can exchange messages and products across different computing platforms and geographic and political boundaries. In physical markets, geographical distance and political boundaries hinder the free movement of goods and people. Similarly, closed proprietary networks separate virtual markets artificially by establishing barriers to interoperability. This is equivalent to having a railway system with different track widths, so that several sets of identical rail cars must be maintained and passengers must be transferred at all exchange points. Neither computing nor networking is new to businesses and engineers. Large-scale private networks have been an essential ingredient in electronic data interchange, online banking, and automatic teller machines, and business investments in information technology over the past decades have enabled firms to reengineer manufacturing, inventorying, and accounting processes. Nevertheless, the strength of the Internet lies in its nature as an open network. Economically speaking, the open Internet allows easy entry into and exit from a market because of lower costs and greater market reach. A store located on a small tropical island can effectively reach global partners and consumers, collaborating and competing with multinational corporations without investing in international branches and a sales presence. While computers and networking technologies have advanced steadily over the past decades, they have lacked the characteristics of a true infrastructure. An infrastructure needs to be open and interoperable so as to allow various private enterprises with differing products and goals to collaborate and transact business in a seamless environment. As an infrastructure, the Internet provides open connectivity and uniformity as the first technological medium of its kind that supports persistent development of universal applications and practices.
2. ELECTRONIC COMMERCE FRAMEWORKS
An apparent rationale for implementing electronic commerce is to reduce transaction costs related to manufacturing, distribution, retailing, and customer service. Many such uses involve automating existing processes through the use of computers and networks. More importantly, however, new technologies now enable economic agents to move from simple automation to process innovation and reengineering. The complex web of suppliers, distributors, and customers doing business on the World Wide Web is allowing businesses to transform traditional markets and hierarchies into a new form called a network organization. Unlike the hierarchies and centralized markets common in the physical economy, this network-based structure allows a high degree of flexibility and responsiveness, which have become two pillars of the digital economy (see Section 2.3). The Internet-based economy is multilayered: it can be divided into several layers that help us grasp the nature of the new economy. Barua et al. (1999) identified four layers of the Internet economy in their measurement of Internet economy indicators. The first two, the Internet infrastructure and Internet applications layers, together represent the IP (Internet communications) network infrastructure. These layers provide the basic technological foundation for Internet, intranet, and extranet applications. The intermediary/market maker layer facilitates the meeting and interaction of buyers and sellers over the Internet; through this layer, investments in the infrastructure and applications layers are transformed into business transactions. The Internet commerce layer involves the sales of products and services to consumers or businesses. According to their measurements, the Internet economy generated an estimated $301 billion in U.S. revenues and created 1.2 million jobs in 1998. Estimates of the revenue and job contributions of each layer are presented in Table 1.
TABLE 1 Internet Revenues and Jobs in 1998, U.S.

                                     Estimated Internet Revenues    Attributed
Layer                                (millions of dollars)          Internet Jobs
Internet infrastructure layer                   114,982.8                372,462
Applications layer                               56,277.6                230,629
Intermediary / market maker layer                58,240.0                252,473
Internet commerce layer                         101,893.2                481,990
Total                                           301,393.6              1,203,799

Source: Barua et al. 1999. Reprinted with permission.
ELECTRONIC COMMERCE
261
2.1.
Economics of the Digital Economy
The digital revolution is often viewed as the second industrial revolution. But why does the Internet have such a great effect on business activities and the economy? How is the Internet-driven economy different from the previous industrial economy? Despite its obvious usefulness, a comparison to the industrial revolution is misleading: the digital revolution operates on quite different premises. In many respects, the digital revolution is undoing what we achieved in the previous age of industrial production. For example, the primary commodity of the digital age, information and other knowledge-based goods, behaves quite differently than industrial goods. Industrial goods and production technologies that can churn out millions of an item at the least unit cost have been the hallmark of the modern economy. From ordinary household goods such as silverware and dishes to mass-produced industrial goods like automobiles and consumer appliances, the increasing availability and decreasing price of these goods have brought an unimaginable level of mass consumption to the general public. Nevertheless, mass-produced industrial goods, typified by millions of identical Ford Model Ts, are standardized in an effort to minimize costs and as a result are unwieldy in fitting individual needs. The characteristics of the industrial economy are summarized in Table 2.

Business processes of an industrial firm are optimized for supply-driven commerce, while the digital economy is geared toward customer demand. The economics of industrial goods has promoted least-cost solutions and a pervasive focus on costs that has become the limiting factor in both the product choices offered to customers and the manufacturing options open to producers. Values are created not from maximizing user satisfaction but from minimizing costs, not from flexibility in production but from production efficiency, which often disregards what customers want and need. Value creation in the industrial age flows through a linear, rigid, inflexible, and predetermined sequence of preproduction research, manufacturing, marketing, and sales. The need to minimize costs is so overwhelming that firms apply the same cost economics to nonmanufacturing stages of their business, such as distribution, inventory management, and retailing. Partly because of the economic efficiency achieved during the industrial revolution, manufacturing now rarely accounts for more than half of a firm's total operating costs. Product research, marketing and advertising, sales, customer support, and other nonproduction activities have become major aspects of a business organization. This trend toward a nonmanufacturing profile of a firm is reflected in today's focus on business strategies revolving around quality management, information technology, customer focus, brand loyalty, and customization.

The Internet economy departs from the cost-minimization economics of the industrial age, but this transformation is not automatic simply because one is dealing with digital goods. For example, typical information goods such as news and databases are subject to the same economics as industrial goods as long as they are traded as manufactured goods. Cost minimization is still a necessary concern in the newspaper business. The limitations of the industrial age will carry over into the Internet economy even when newspapers and magazines are put on the Web if these online products are nothing more than digitized versions of their physical counterparts. Many content producers and knowledge vendors may be selling digital goods but be far from participating in the digital economy if their products still conform to the cost-minimization economics of the industrial age.
2.2.
Product and Service Customization
Knowledge is a critical part of economic activities in both the industrial and digital economies, but the two differ significantly in the way knowledge is utilized. While the main focus in generating and applying knowledge during the industrial age was on maximizing efficient production through lower costs, the use of knowledge in the digital economy focuses on providing customers with more choices. Instead of standardizing products, the digital revolution drives firms to focus on maximizing customer satisfaction by customizing products and meeting consumption needs. To offer more choices and satisfaction to customers, business processes must be flexible and responsive. Web-based supply chain management, trading through online auctions, targeted marketing and sales, and interactive customer service create values not simply by reducing costs but by allowing firms to be responsive to customers' needs.

TABLE 2 Industrial Economy vs. Digital Economy

Industrial Economy       Digital Economy
Supply-driven            Demand-driven
Cost minimization        Value maximization
Standardization          Customization
Linear value chain       Nonlinear value Web
Price competition        Service competition
2.3.
Flexible and Responsive Organization
Just as new products have been born from the technologies of the Internet, so has a new organizational form. The nonlinear technology of the Web makes it possible to have an organization that represents the highest level of flexibility. In this new form, defining or classifying virtual firms and markets based on traditional organizational structures such as a hierarchy or an M-form can be very difficult. Indeed, a very flexible organization may exist only as a network organization that defies any structural formula.

In physical markets, a firm is organized into a functional hierarchy from the top-level executive to divisions and managers on down the hierarchy. It may structure its divisions following various product groups that rarely intersect in the market. The markets are organized under the natural order of products and producers, from materials, intermediate goods, and consumption goods to distributors and retailers. Firms operating in traditional physical markets organize their business activities in a linear fashion. After the planning and product selection stage, the materials and labor are collected and coordinated for manufacturing. The manufactured products are then handled by the distribution and marketing divisions, followed by sales and customer service activities. These functions flow in a chain of inputs and outputs, with relatively minor feedback between stages. The value embedded in the materials is increased in each step of the manufacturing and marketing processes by the added labor, materials, and other inputs. In a linear market process such as this, the concept of the value chain can be used to highlight the links between business processes.

On the other hand, a networked economy is a mixture of firms that is not restricted by internal hierarchies and markets and does not favor controlled coordination like an assembly line. Businesses operating in this virtual marketplace lack incentives to maintain long-term relationships, based on corporate ownership or contracts, with a few suppliers or partners. Increasingly, internal functions are outsourced to any number of firms and individuals in a globally dispersed market. Rather than adhering to the traditional linear flow, the new digital economy will reward those that are flexible enough to use inputs from their partners regardless of where they are in the linear process of manufacturing. In fact, the linear value chain has become a value web where each and every economic entity is connected to everyone else and where they may often function in a parallel or overlapping fashion. Electronic commerce then becomes an essential business tool to survive and compete in the new economy.
3.
ENTERPRISE AND B2B ELECTRONIC COMMERCE
Sharing information within a business and between businesses is nothing new. Electronic data interchange (EDI) has allowed firms to send and receive purchase orders, invoices, and order confirmations through private value-added networks. Today's EDI allows distributors to respond to orders on the same day they are received. Still, only large retailers and manufacturers are equipped to handle EDI-enabled processes. It is also common for consumers to wait four to six weeks before a mail order item arrives at their door. Special order items (items not in stock) at Barnes & Noble bookstores, for example, require three to four weeks of delay. In contrast, an order placed on a website at Lands' End (a clothing retailer) or an online computer store arrives within a day or two. The business use of the Internet and electronic commerce enables online firms to reap the benefits of EDI at lower cost. The ultimate fast-response distribution system is instantaneous online delivery, a goal that a few e-businesses in select industries have already achieved. By their very nature, on-demand Internet audio and video services have no delay in reaching customers. In these examples, the efficiency stems from highly automated and integrated distribution mechanisms rather than from the elimination of distribution channels as in more traditional industries.
3.1.
Web-Based Procurement
A traditional business's first encounter with e-commerce may well be as a supplier to one of the increasingly common Internet Web stores. Supply chain management is in fact a key, if not a critical, factor in the success of an Internet retailer. The number of products offered in a Web store depends not on available shelf space but on the retailer's ability to manage a complex set of procurement, inventory, and sales functions. Amazon.com and eToys (http://www.etoys.com), for example, offer 10 times as many products as a typical neighborhood bookstore or toy shop would stock. The key application that enables these EC enterprises is an integrated supply chain. Supply chain management refers to the business process that encompasses interfirm coordination for order generation, order taking and fulfillment, and distribution of products, services, and information. Suppliers, distributors, manufacturers, and retailers are closely linked in a supply chain as independent but integrated entities to fulfill transactional needs.
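The interfirm coordination just described can be pictured as an order whose state is shared while successive, independent parties act on it. The sketch below is purely illustrative; the entity names and stages are hypothetical, not taken from any particular system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:
    """An order whose progress is visible to every party in the chain."""
    sku: str
    quantity: int
    log: List[str] = field(default_factory=list)

def coordinate(order: Order, parties: List[str]) -> Order:
    """Each independent entity handles the order in turn and records its
    step, keeping the chain integrated from order taking to delivery."""
    for party in parties:
        order.log.append(party)
    return order

# Hypothetical chain of independent but integrated entities:
chain = ["retailer: order taking", "distributor: allocation",
         "manufacturer: fulfillment", "carrier: delivery"]
order = coordinate(Order("BOOK-0042", 2), chain)
```

The point of the sketch is only that no single party owns the whole process: each contributes one stage, and the shared record is what integrates them.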
In physical markets, concerns about existing investments in warehouses and distribution systems often outweigh the desire and cost to implement fast-response delivery. Some retailers still rely 100% on warehouses and distribution centers to replenish their inventory. Other retailers, such as Tower Records, have moved away from warehousing solutions to drop shipping, which entails shipping directly from manufacturers to retail stores. Drop shipping, just-in-time delivery, and other quick-response replenishment management systems address buyers' concerns about delays in receiving the right inventory at the right time. Many quick-response systems take this a step further and empower suppliers to make shipments on their own initiative by sharing information about sales and market demand. Applications for a supply chain did not appear with the Internet or intranets. The supply chain concept evolved during previous decades to address manufacturers' needs to improve production and distribution, where managing parts and inventories is essential in optimizing costs and minimizing production cycles. EDI on the Internet and business intranets, used by large corporations such as General Electric Information Services and 3M, is aimed at achieving efficient supply management at the lower costs afforded by the Internet. Intermediaries such as FastParts (http://www.fastparts.com) further facilitate corporate purchasing on the Internet (see Section 7.2 below for a detailed discussion of B2B auctions). Traditional retailers such as Dillard's and Wal-Mart have implemented centralized databases and EDI-based supply and distribution systems. The Internet and EC now allow small to medium-sized retailers to implement technologies in their procurement systems through a low-cost, open, and responsive network. An efficient procurement system and software, once installed, can enable firms to cross market boundaries. In a highly scalable networked environment, the size of a retailer depends only on the number of customers it attracts, rather than on capital or the number of outlets it acquires. The infrastructure set up for one product can also be expanded to other products. Amazon.com, for example, uses its integrated back-office system to handle not only books but CDs, videos, and gifts. Store designs, product databases and search algorithms, recommendation systems, software for shopping baskets and payment systems, security, and server implementation for book retailing have simply been reused for other products. An efficient online retailer and its IT infrastructure can be scaled to any number of products with minimal added constraints.
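The scalability claim, one back office reused across product lines, can be illustrated with a category-agnostic catalog: the search and basket logic is written once against a generic product record, so adding CDs or videos to a bookstore means adding data, not code. The class and field names below are hypothetical, not a description of any retailer's actual system.

```python
from dataclasses import dataclass

@dataclass
class Product:
    sku: str
    title: str
    category: str   # "book", "cd", "video", ...: new categories need no new code
    price: float

class Catalog:
    """A category-agnostic store back end: search works the same
    whether the item is a book, a CD, or a gift."""
    def __init__(self):
        self._items = {}

    def add(self, product: Product) -> None:
        self._items[product.sku] = product

    def search(self, text: str):
        text = text.lower()
        return [p for p in self._items.values() if text in p.title.lower()]

catalog = Catalog()
catalog.add(Product("B1", "The Digital Economy", "book", 24.00))
catalog.add(Product("C1", "Kind of Blue", "cd", 11.99))  # same code path reused
hits = catalog.search("digital")
```

The design choice mirrors the text: the marginal cost of a new product line is near zero because only data, not infrastructure, changes.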
3.2.
Contract Manufacturing
In the digital economy, the trend toward outsourcing various business functions is growing rapidly because it offers a less costly alternative to in-house manufacturing, marketing, customer service, delivery, inventorying, warehousing, and other business processes. This is consistent with the observation that firms in the physical economy also delegate production activities to external organizations if they find it less costly than internalizing them. As long as the cost of internal production (or service provision) is higher than the cost of contracting and monitoring, firms will prefer to outsource. Regardless of the logic of this, it appears that the type of cost savings plays a critical role in the outsourcing decision. A study by Lewis and Sappington (1991) argues that when a firm's decision to buy vs. internally produce inputs involves improvements in production technology, more in-house production and less outsourcing is preferred. Their result does not depend on whether the subcontractor's production technology was idiosyncratic (only useful for producing the buyer's inputs) or transferable (the supplier could use its production technology and facility to service other potential buyers). In the case of transferable technology, the supplier would be expected to invest more in production technology, and thus offer lower costs, which may favor more outsourcing. Nevertheless, the buyer still preferred to implement more efficient
Figure 1 Global Manufacturing Networks of Contract Manufacturer Solectron. Reproduced with permission.
tributors for the ancillary tasks of moving their products to retailers. They often interact only with a few large distributors, who are expected to push assigned products down to the retail channel. Their partners are more flexible manufacturers and distributors, who have developed closer ties to end customers. For example, channel marketers in the computer industry consist of original equipment manufacturers (OEMs) who provide basic components to distributors, who build computers after orders are received. New players are now taking advantage of information technology to cut costs while delegating most functions to third parties. Dell Computers and Gateway began a new distribution model based on direct marketing. They now rely on contract manufacturers to distribute their products. Orders received by Dell are forwarded to a third party, who assembles and ships the final products directly to consumers. This built-to-order manufacturing model attains distribution objectives by outsourcing manufacturing as well as distribution functions to those who can optimize their specialized functions. Distributors, in addition to delivering products, face the task of integrating purchasing, manufacturing, and supply chain management. For traditional in-house manufacturers and retailers, integrated distributors offer technological solutions to delegate order fulfillment and shipping tasks to outside contractors. For example, Federal Express (http://www.fedex.com) offers Internet-based logistics solutions to online retailers. FedEx introduced FedEx Ship, which evolved from its EDI-based FedEx PowerShip, in 1995. FedEx Ship is free PC software that customers can download and use to generate shipping orders. In 1996, FedEx launched its Internet version on its website, where customers can fill out pickup and shipping requests, print labels, and track delivery status. As an integrated logistics operator, FedEx also offers a turnkey solution to online retailers by hosting retailers' websites or linking its servers to retailers' sites in order to manage warehouses and invoices coming from retailers' websites, then pack and ship products directly to consumers. Retailers are increasingly delegating all warehousing and shipping functions to a third party such as FedEx, while distributors expand their services into all aspects of order fulfillment.
3.3.
Logistics Applications
Being ready for the digital economy means more than simply allowing customers to order via the Internet. Processing orders from online customers requires seamless, integrated operation from manufacturing to delivery, readiness to handle continuous feedback from and interaction with customers, and the capability of meeting demand and offering choices by modifying product offerings and services. Clearly, being integrated goes beyond being on the Internet and offering an online shopping basket. Manufacturing, supply chain management, corporate finance and personnel management, customer service, and customer asset management processes will all be significantly different in networked than in nonnetworked firms.

Logistics management, or distribution management, aims at optimizing the movement of goods from the sources of supply to final retail locations. In the traditional distribution process, this often involves a network of warehouses to store and distribute inventoried products at many levels of the selling chain. Manufacturers maintain an in-house inventory, which is shipped out to a distributor, who stores its inventory in warehouses until new outbound orders are fulfilled. At each stage the inventory is logged on separate database systems and reprocessed by new orders, adding chances for error and delayed actions. Compaq, for example, which uses an older distribution model, has to allow almost three months for its products to reach retailers, while Dell, a direct-marketing firm, fulfills orders in two to three weeks. Wholesalers and retailers often suffer from inefficient logistics management. Distributors may have as much as 70% of their assets in inventory that is not moving fast, while retailers receive replenishments and new products long after sales opportunities have disappeared. Optimizing distribution cycles and lowering incurred costs are a common concern for manufacturers, distributors, and retailers.

A conventional logistics management model built around warehouses and distribution centers is an efficient solution when products have a similar demand structure and must be moved in the same manner. In this case, distribution centers minimize overall transportation costs by consolidating freight and taking advantage of scale economies. This practice closely mirrors the hub-and-spoke model of airline transportation. By consolidating passenger traffic around a few regional hubs, airlines can employ larger airplanes on major legs and save on the number of flights and associated costs. Nevertheless, passengers often find that they must endure extra flying time and distance because flights between small but adjacent cities have been eliminated. While the hub-and-spoke system provides many advantages, it is too inflexible to respond to individual flying patterns and preferences. Similarly, warehousing and distribution centers fail to function when products must move speedily through the pipeline. When market prices change as rapidly as they do for computer components, Compaq's computers, which sit in distribution centers for several months, lose their value by the time they reach consumers. Dell's more responsive pricing is made possible by its fast-moving distribution channel.

More responsive distribution management cannot be achieved by simply removing several layers of warehouses and distribution centers. Rather, distributors in the United States are being integrated into the whole value chain of order taking, supply chain, and retailing. In the traditional logistics management model, distributors remained a disjointed intermediary between manufacturers and retailers. In an integrated logistics model that depends heavily on information technology and Web-based information sharing, distributors are fully integrated, having access to manufacturing and sales data and their partners' decision-making processes.
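In its simplest form, the shared-information replenishment described in Section 3.1 (suppliers shipping on their own initiative from the retailer's sales feed) reduces to a reorder-point rule. The lead time and safety-stock figures below are illustrative assumptions, not values from the chapter.

```python
def reorder_point(daily_sales, lead_time_days, safety_stock):
    """Reorder when on-hand stock falls below the expected demand
    over the resupply lead time, plus a safety buffer."""
    avg_daily = sum(daily_sales) / len(daily_sales)
    return avg_daily * lead_time_days + safety_stock

def supplier_should_ship(on_hand, daily_sales, lead_time_days=5, safety_stock=20):
    """The supplier decides to ship using the retailer's shared sales data,
    without waiting for an explicit purchase order."""
    return on_hand < reorder_point(daily_sales, lead_time_days, safety_stock)

# A week of point-of-sale data shared by the retailer:
sales = [12, 9, 15, 11, 10, 14, 13]          # average = 12 units/day
decision = supplier_should_ship(on_hand=60, daily_sales=sales)
# 60 on hand < 12 * 5 + 20 = 80, so the supplier initiates a shipment.
```

The shared sales feed is what makes the rule work: without visibility into the retailer's demand, the supplier could only react to orders after the fact.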
4.
ELECTRONIC COMMERCE AND RETAILING
The most prominent electronic commerce application is the use of the Internet for consumer retailing, which will change a firm's relationship with its customers. With the growth of online retail activities, the Internet challenges not only the survival of established retail outlets but also the very mode of transactions, which in physical markets occur through face-to-face seller-buyer interactions. Online
example, Amazon.com, having no physical retail outlets to worry about, has designed a store that improves the range of products and product information to match customer needs and offers fast and efficient shopping service. In contrast, players in existing retail markets are concerned with protecting their existing channels. Most retailers open their Web stores either to keep up with competitors (28%) or to explore a new distribution channel (31%) (Chain Store Age 1998). About 40% of retailers use their Web stores in an effort to extend their business into the virtual market. The exceptional growth of online retailing in the United States can be traced back to several favorable characteristics. First, U.S. consumers have long and favorable previous experience with catalog shopping. Second, credit cards and checks are widely used as a preferred payment method, making it easy to migrate into the online environment. More importantly, the commercial infrastructure and environment of the U.S. markets are transparent in terms of taxation, regulation, and consumer protection rules. Consumers also have access to the efficient delivery networks of Federal Express and United Parcel Service in order to receive purchased products on time and in good condition. These auxiliary market factors have been essential in the initial acceptance of Web-based commerce.
4.2.
E-Retailing of Physical Products
Online retailing often refers to a subcategory of business that sells physical products such as computers (Dell online store), automobiles (Auto-by-tel), clothing (Lands' End online), sports equipment (Golfweb), and flowers and garden tools (Garden.com). Currently, books and music CDs fall into this category, although retailers are becoming increasingly aware of their digital characteristics. Electronic commerce in physical goods is an extension of catalog selling, where the Internet functions as an alternative marketing channel. In this regard, online shops compete directly with physical retail outlets, leaving manufacturers to juggle between established and newly emerging distribution channels.
4.3.
E-Retailing of Digital Products
Retailers of digital products have a distinct advantage over other sectors in that they deliver their goods via the Internet. Sellers of news (e.g., The New York Times [http://www.nytimes.com]), magazines (BusinessWeek [http://www.businessweek.com]), textbooks (SmartEcon [http://www.smartecon.com]), information, databases, and software can provide products online with no need for distribution and delivery by physical means. This sector also includes such search services as Yahoo, Excite, and other portals, although these firms currently rely on advertising revenues rather than pricing their digital products. A number of online retailers are selling physical products that are essentially digital products but currently packaged in physical format. For example, Amazon.com sells printed books, which are in essence digital information products. Likewise, audio CDs and videos sold by Amazon.com and CDNow (http://www.cdnow.com) are digital products. Because digital products can be transferred via the network and are highly customizable, online retailing of these products will bring about fundamental changes that cannot be duplicated in physical markets. For example, instead of being an alternative distribution channel for books and CDs, online stores can offer sampling through excerpts and RealAudio files. Books and CDs are beginning to be customized and sold to customers in individualized configurations (see Figure 2), downloadable to customers' digital book readers or recordable CDs. Digitized goods can often be delivered in real time on demand. These functions add value by providing consumers more choices and satisfaction. The primary retail function is no longer getting products to customers at the lowest cost but satisfying demand while maximizing revenues by charging what customers are willing to pay.
4.4.
E-Retailing of Services
A growing subcategory of electronic retailing deals with intangible services such as online gaming, consulting, remote education, and legal services, and such personal services as travel scheduling, investment, tax, and accounting. Online service providers are similar to digital product sellers because their service is delivered via the Internet. Nevertheless, transactions consist of communications and interactions between sellers and buyers, often without an exchange of any final product other than a receipt or confirmation of the transaction. A much anticipated aspect of online service delivery involves remote services, such as telemedicine, remote education, or teleconsulting. The future of remote services is critically dependent on the maturity of technologies such as 3D and virtual reality software, video conferencing on broader bandwidth, and speech and handwriting recognition. Advances in these technologies are necessary to replicate an environment where physical contact and interaction are essential for providing personal services. Despite a low representation of remote services on today's Internet, several types of service retailing are currently practiced. For example, 2 of the top 10 online retailing activities, travel and financial services, sell services rather than products. Airline tickets are digital products that simply
represent future uses of airline services. An airline ticket, therefore, signifies a service to schedule one's trips. Likewise, products in online stock markets and financial services are entitlements to company assets and notational currencies. Online service providers in these markets include those in investment, banking, and payment services.

Figure 2 Online Textbooks Are Customized at SmartEcon.com. Reproduced with permission.
5.
PRICING IN THE INTERNET ECONOMY
In this section, we review some issues in pricing and payment in the Internet economy. In the physical market, prices are fixed for a certain period of time and displayed to potential customers. Such fixed and posted prices are commonly observed for mass-produced industrial goods such as books, music CDs, products sold through catalogs, and other consumer goods: products that are standardized and sold in mass quantity. Posted prices, in turn, allow both sellers and buyers to compare prices and often lead to highly competitive and uniform prices. In the Internet economy, the menu cost of changing prices is relatively low, allowing sellers to change prices according to fluctuating demand conditions. Per-transaction costs of clearing payments also decline significantly with the use of online payment systems (see Choi et al. [1997, chap. 10] for an overview of online payment systems). In addition, sellers have access to an increasing amount of customer information as Web logs and network service databases are mined and analyzed to yield detailed information about buyers. Such developments raise serious questions about the use of identifiable customer information and how such information is used in setting prices.
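Because reposting a price online is nearly free, a seller can let the posted price track observed demand. The rule below is a deliberately crude sketch: raise the price when the recent purchase rate beats a target, cut it when demand is soft. The target rate and step size are arbitrary assumptions for illustration.

```python
def adjust_price(price, views, purchases, target_rate=0.05, step=0.02):
    """Move the posted price toward demand: raise it when the observed
    purchase rate exceeds the target, cut it when demand is soft."""
    rate = purchases / views if views else 0.0
    if rate > target_rate:
        return round(price * (1 + step), 2)
    if rate < target_rate:
        return round(price * (1 - step), 2)
    return price

strong = adjust_price(20.00, views=1000, purchases=80)  # rate 0.08 > 0.05: raise
weak = adjust_price(20.00, views=1000, purchases=10)    # rate 0.01 < 0.05: cut
```

In a physical store, each such repricing would mean relabeling shelves and reprinting catalogs; online, it is a database update, which is exactly the low menu cost the text describes.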
5.1.
Security and Privacy in Transaction
Complete and absolute anonymity is very rare in commercial transactions because the very nature of trade often requires some form of identification to verify payments or any legally required qualifications. The only transactions that come close to being completely anonymous are cash-based. Even with these, repeat business will convey some information about the buyer to the seller, not to speak of serial numbers of bills, fingerprints, security cameras, and so on. In many cases, anonymity means customer privacy is guaranteed by the seller. Technically, a Web vendor may assign a random number to each customer without linking the information gathered around that number to a physically identifiable consumer. If customer information is fully disclosed for personalization, the vendor may guarantee not to use it for any other purposes or sell it to outsiders. Whether we are concerned with anonymity or privacy, the gist of the matter is the degree to which customer information is conveyed to the seller.
5.1.1.
Types of Anonymity
In general, anonymity is an extreme form of privacy in that no information is transferred. But even anonymity comes in a few varieties.

Complete anonymity: With complete anonymity, sellers do not know anything about customers except that a transaction has occurred. Products are not personalized, and payments are made in cash or in an untraceable form of payment. The market operates as if sellers have no means to identify customers or learn about them in any way.

Incomplete anonymity: Some anonymous customers can be traced to an identifiable person. When identities are traceable, customers have incomplete anonymity. For reasons of security and criminal justice, digital files and communications have digital fingerprints that enable authorities to trace them back to their originators, albeit with difficulty. One rationale for such a compromise is that the nature of digital products and their reproducibility threaten the very viability of intellectual property on the Internet unless unauthorized copies can be identified and traced to the culprits. Microsoft has for unknown reasons surreptitiously incorporated such measures in any document created by its Word program. Some computer manufacturers have also proposed implanting serial numbers in microprocessors so as to endow each computer with its own digital identity. This sort of information does not relate directly to any identifiable person, although it may be traceable through usage and ownership data.

Pseudonymity: A pseudonym is an alias that represents a person or persons without revealing their identity. Pseudonymity operates exactly the same way as anonymity: it can be untraceable and complete or traceable and incomplete. Pseudonymity has particular significance in the electronic marketplace because a pseudonym may have the persistence to become an online persona. In some cases, an online person known only by his or her pseudonym has become a legendary figure with a complete personality profile, knowledge base, and other personal characteristics recognized by everyone within an online community. Persistent pseudonyms are also useful in providing promotional services and discounts, such as frequent-flyer miles and membership discounts, without disclosing identity.

Privacy is concerned with an unauthorized transfer (or collection) of customer information as well as unauthorized uses of that information. Anonymity prevents such abuses by removing identity and is therefore the most effective and extreme example of privacy. Privacy can be maintained while identifiable information is being transferred to the seller, which presents no problem in and of itself. Concerns arise when that information is used without the consent of its owner (the consumer) to affect him or her negatively (e.g., junk e-mails, unwanted advertisements, and marketing messages). What a seller can and cannot do with collected information is often decided by business convention but could ultimately be determined by law.
5.1.2. Tools for Privacy
In a marketplace, whenever a consumer need or concern arises, the market attempts to address it. Here, too, the market has created technologies that address consumers' concerns about lack of privacy. The most effective way to preserve privacy is to remove information that identifies a person by physical or electronic address, telephone number, name, website, or server address. Several models are available on the Internet.

Anonymity services: Anonymity systems have devised a number of ways to strip a user's identity from an online connection. In one system, an anonymous remailer receives an encrypted message from a user. The message shows only the address to which the message should be forwarded. The remailer sends the message without knowing its content or originator. This process may continue a few times until the last remailer in the chain delivers the message to the intended destination: a person, bulletin board, or newsgroup.

Proxy server: A proxy server acts as an intermediary. For example, Anonymizer.com operates a server that receives a user's request for a web page and fetches it using its own site as the originator. Websites providing the content will not be able to identify the original users who requested it.

Relying on numbers: Expanding on this proxy model, a group of consumers may act as a shared computer requesting a web page. Crowds, developed by AT&T Laboratories, relies on the concept that individuals cannot be distinguished in a large crowd. Here, individual requests are randomly forwarded through a shared proxy server, which masks all identifiable information about the users. It is interesting to note that the system tries to maintain privacy by relying on crowds when the word "privacy" suggests being apart from a crowd.

Pseudonyms: Lucent's Personalized Web Assistant and Zero-Knowledge Systems' Freedom rely on pseudonyms. This approach has the advantage of providing persistent online personas or identities. Persistent personas, on the other hand, can be targeted for advertisements based on historical data, just like any real person.

These and other currently available anonymity services are effective in most cases. But whether they are secure for all occasions will depend on the business's technical and operational integrity. For example, the anonymous remailer system requires that at least one of the remailers in the chain discard the sender and destination information. If all remailers cooperate, any message can be traced back to its originator. In addition, software bugs and other unanticipated contingencies may compromise the integrity of any anonymity service. Because technologies create anonymity, they can also be broken by other technologies.

Commercial transactions conducted by anonymous or pseudonymous users still have the advantage of allowing sellers to observe and collect much more refined demand data than in physical markets. For conventional market research, identifiable information is not necessary as long as the data result in better demand estimation and forecasting. Even without knowing who their customers are or at what address they reside, sellers can obtain enough useful data about purchasing behaviors, product preferences, price sensitivity, and other demand characteristics. In other cases, identifiable information may be necessary for users to receive the benefits of using online technologies. Customization and online payment and delivery clearly require users' personal information. To some extent, these activities may be handled through online pseudoidentities. However, when a reasonable level of privacy is assured, customers may be willing to reveal their identity, just as they do by giving out credit card information over the phone.

Current industry-led measures to protect online privacy are aimed at reducing consumers' unwillingness to come online and mitigating any effort to seek legal solutions. Essentially, these measures require sellers to disclose clearly to consumers what type of data they collect and what they do with it. Secondly, they also encourage sellers to offer consumers a means to specify what they are willing to agree to. In a sense, sellers and consumers negotiate the terms of information collection and use. Some websites display logos indicating that they follow these guidelines. P3P (Platform for Privacy Preferences) by the World Wide Web Consortium (W3C; http://www.w3.org) and TRUSTe from CommerceNet (http://www.commercenet.org) are the two main industry-wide efforts toward privacy in this regard.

As the trend toward privacy disclosure indicates, consumers seem to be concerned about the selling of personal information to third parties. Unsolicited, unrelated junk e-mails from unknown sellers are soundly rejected. Consumers are, however, less upset about receiving advertisements, in the form of product news, from the sellers they visit. This is largely because consumers recognize the need for information in ordinary commerce and the benefit from customization. Privacy is largely an issue of avoiding unwanted marketing messages, as collecting and selling information about consumers has long been standard practice in many industries.
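The chained-remailer idea described above can be sketched in a few lines. This is an illustrative toy, not a real remailer protocol: the function names (`wrap`, `route`) and the dictionary "layers" are our own stand-ins, and a real system would encrypt each layer with the corresponding remailer's public key. The point it shows is that each hop sees only the next address, never the originator.

```python
def wrap(message, destination, hops):
    """Build an 'onion': the innermost layer holds the final destination."""
    packet = {"next": destination, "payload": message}
    for hop in reversed(hops):
        packet = {"next": hop, "payload": packet}
    return packet

def route(packet, log):
    """Each hop peels one layer and forwards; only the last layer
    reveals the true destination. `log` records who handled the packet."""
    while isinstance(packet["payload"], dict):
        log.append(packet["next"])        # remailer handling this layer
        packet = packet["payload"]
    log.append(packet["next"])            # final destination
    return packet["payload"]              # delivered message

log = []
msg = route(wrap("hello", "alice@example.org", ["remailer1", "remailer2"]), log)
print(msg)   # hello
print(log)   # ['remailer1', 'remailer2', 'alice@example.org']
```

Note that if every remailer kept its portion of the routing log, the chain could be replayed to unmask the sender, which is exactly the caveat raised above: at least one hop must discard that information.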
5.2. Real-Time Pricing
Prices change almost instantly in auctions, but posted prices may also change, albeit at a slower pace. Any price that clears the market is determined by demand and supply conditions, which fluctuate. For example, when there is an excess demand (supply), price goes up (down) until buyers and sellers agree to make a transaction. Posted prices may be fixed during a given period, but unless a seller possesses perfect information about consumer valuations, he must rely on his information about costs and the market signal received from previous sales to decide prices. To increase sales, prices must be lowered; when there is a shortage, prices can be raised. Through this trial-and-error process, prices converge on a market-clearing level until changes in demand or supply conditions necessitate further price changes. In this vein, posted-price selling is a long-term version of real-time pricing. If we consider sales trends as bids made by market participants, there is little qualitative difference between posted-price selling and auctions in terms of price movements, or how market-clearing prices are attained. Nevertheless, fundamental differences exist in terms of the number of market participants, the speed at which prices are determined, and the transactional aspects of a trade, such as menu costs.

The Web is ideal for personalized product and service delivery that employs flexible, real-time changes in pricing. Vending machines and toll collectors operating on a network may charge different prices based on outdoor temperatures or the availability of parking spaces. A soda may be sold at different prices depending on location or time of day. User-identified smart cards, which also provide relevant consumer characteristics, may be required for payment so that prices can be further differentiated in real time based on customer identity.
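The trial-and-error convergence to a market-clearing price can be illustrated with a toy simulation. The linear demand and supply curves below are assumptions chosen only to make the mechanism visible; the adjustment rule (raise price under excess demand, lower it under excess supply) is the one described in the text.

```python
def demand(p):
    """Illustrative linear demand curve (an assumption, not from the text)."""
    return max(0.0, 100.0 - p)

def supply(p):
    """Illustrative linear supply curve."""
    return p

def adjust_price(p, step=0.25, iters=200):
    """Mimic the seller's trial-and-error: move the price in proportion
    to excess demand until it settles at the market-clearing level."""
    for _ in range(iters):
        p += step * (demand(p) - supply(p))
    return p

# Starting well below equilibrium, the price converges to where
# demand equals supply (here, p = 50).
print(round(adjust_price(10.0), 2))  # 50.0
```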
5.3. Digital Product Pricing
Pricing strategies in the digital economy undergo a significant transformation from conventional cost-based pricing. Because pricing strategies are an integral part of the overall production process, it would be folly to assume that existing economic modeling and reasoning can simply be reinterpreted and applied to the digital economy. For instance, digital products are assumed to have zero or nearly zero marginal costs relative to the high initial costs needed to produce the first copy. This implies that competitive (marginal-cost-based) prices will be zero and that all types of knowledge-based products will have to be given away for free. But no firm can survive by giving out its products for free.

For example, suppose a firm invests $1000 to produce a digital picture of the Statue of Liberty and that it costs an additional $1 (the marginal cost) to make a copy of the file on a floppy disk. Because of the overwhelming proportion of the fixed cost, the average cost continues to decline as the level of production increases (see Figure 3). The average cost will converge toward $1, the marginal cost. If the variable cost is zero, the average cost, along with the market price, will also converge to zero. This ever-declining average cost poses serious problems in devising profitable pricing strategies. A leading concern is how to recover the fixed cost. For example, suppose that Microsoft spends $10 million for one of its software upgrades. If it finds only 10 takers, Microsoft will lose money unless it sells the upgrade at $1 million each. At $100, its break-even sales must be no less than 100,000 units. Microsoft, with significant market power, will find little difficulty in exceeding this sales figure. On the other hand, many digital product producers face the possibility of not recovering their initial investment in the marketplace.

For digital products or any other products whose cost curves decline continuously, the average cost or the marginal cost has little meaning in determining market prices. Yet no firm will be able to operate without charging customers for its products and services. Traditional price theory, which relies heavily on finding average or marginal cost, is nearly useless in the digital economy. As a result, we must consider two possibilities. The first is that the cost structure of a digital product may be as U-shaped as that of any other physical product. In this case, the pricing dilemma is solved. For example, the variable cost of a digital product may also increase as a firm increases its production. At the very least, the marginal cost of a digital product may consist of a per-copy copyright payment. In addition, as Microsoft tries to increase its sales, it may face rapidly increasing costs associated with increased production, marketing and advertising, distribution, and customer service. In this case, traditional price theory may be sufficient to guide e-business firms in pricing. The second possibility is that extreme product customization implies that each digital product has unique features with different cost schedules. In this case, the relevant cost schedule cannot be
Figure 3 Decreasing Average Cost of a Digital Product Firm.
tabulated on the basis of the number of units reproduced. The marginal cost to consider is not that of reproduction but that of producing a unique, personalized copy. Conventional price theory, based on competitive sellers and buyers, would be inadequate to deal with customized products, whether digital or physical, because of the heterogeneity of products. In such a market, cost factors have little meaning because the seller has some market power and the buyer has limited capability to substitute vendors. Instead, demand-side factors such as consumers' willingness to pay and relative positions in bargaining and negotiation determine the level of price.
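The cost arithmetic in this section can be checked directly. The two helpers below simply restate the text's formulas: average cost is fixed cost spread over units plus the marginal cost, and break-even volume is the fixed cost divided by the per-unit margin. The dollar figures are the ones used in the examples above.

```python
def average_cost(fixed, marginal, units):
    """Average cost per unit: fixed cost spread over output, plus marginal cost."""
    return fixed / units + marginal

def break_even_units(fixed, price, marginal=0.0):
    """Units needed for revenue to cover the fixed cost (requires price > marginal)."""
    return fixed / (price - marginal)

# Statue of Liberty example: $1000 fixed, $1 per copy.
print(average_cost(1000, 1, 1))      # 1001.0
print(average_cost(1000, 1, 1000))   # 2.0  (declining toward the $1 marginal cost)

# Software-upgrade example: $10 million fixed, sold at $100 with ~zero marginal cost.
print(break_even_units(10_000_000, 100))  # 100000.0
```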
6. INTERMEDIARIES AND MARKETS
The term digital economy makes it clear that digital technologies are more than tools that offer an alternative to conventional catalog, TV, and mall shopping activities. Digital technologies provide incentives to reorganize firms, reconfigure products and marketing strategies, and explore novel ways to make transactions. In this section, we review new forms of market mechanisms that are based on networked markets.
6.1. Types of Intermediation
Economic activities revolve around markets where sellers, buyers, and other agents must meet and interact. Therefore, the structure of the electronic marketplace plays an important role in achieving ultimate economic efficiency. How the marketplace is organized is a question that deals with the problem of matching sellers with buyers in the most efficient manner, providing complete and reliable product and vendor information, and facilitating transactions at the lowest possible cost. The three models (portals, cybermediaries, and auction markets) represent different solutions to these problems:

Portals: In portals, the objective is to maximize the number of visitors and to present content in a controlled environment, as in physical malls. Internet portals generate revenues from advertising or from payments by firms whose hypertext links are strategically placed on the portal's web page. Corporate portals, an extended form of intranet, do not produce advertising revenues but offer employees and customers an organized focal point for their business with the firm.

Cybermediaries: Cybermediaries focus on managing traffic and providing accounting and payment services in the increasingly complex Web environment. Their revenues depend on actual sales, although the number of visitors remains an important variable. Many of them provide tertiary functions that sellers and buyers do not always carry out, such as quality guarantees, marketing, recommendation, negotiation, and other services.

Electronic auctions: Electronic auction markets are organized for face-to-face transactions between sellers and buyers. The market maker, although it is a form of intermediary service, plays a limited role in negotiation and transaction between agents. Conceptually, electronic markets are a throwback to medieval markets where most buyers and sellers knew about each other's character and their goods and services. This familiarity is provided by the market maker through information, quality assessment, or guarantee. In this market, negotiations over terms of sale become the most important aspect of trade.

We find that the reason more intermediaries are needed in electronic commerce is that the complex web of suppliers and customers (and real-time interactions with them) poses a serious challenge to fulfill
provides a similar service. In a grocery store, the customer collects all necessary items and makes one payment to the owner. To enable such a convenient buying experience, a Web store must carry all the items that customers need. Otherwise, the list of items in a shopping basket may come from a number of different vendors, the customer having collected the items after visiting the vendors' individual websites. The options left to stores and buyers are to:

Process multiple transactions separately
Choose one seller, who then arranges payment clearance among the many sellers
Use an intermediary

In a distributed commerce model, multiple customers deal with multiple sellers. An efficient market will allow a buyer to pay for these products in one lump sum. Such an arrangement is natural if a website happens to carry all those items. Otherwise, this distributed commerce requires an equally flexible and manageable mechanism to provide buyers the convenience of paying once, settling amounts due among the various vendors. The intermediary Clickshare relies on a framework for distributed user management. The Clickshare (http://www.clickshare.com) model (see Figure 4) gives certain control to member sites that operate independently, while Clickshare, as the behind-the-scenes agent, takes care of accounting and billing.

Going beyond payment settlement, an intermediary or cybermediary not only provides member firms with accounting and billing functions but also undertakes joint marketing and advertising campaigns and exercises some control over product selection and positioning. In this regard, a cybermediary resembles a retailer or a giant discount store. It differs from an online shopping mall, which offers location and links but little else. Unlike a portal, which is inclined to own component services, this intermediation model allows members to maintain and operate independent businesses (thus it is distributed commerce) while at the same time trying to solve management issues by utilizing an intermediary. Amazon.com is a well-known intermediary for publishing firms. As Amazon.com expands its product offerings, it has become an intermediary or a distributor/retailer for other products and services as well. A vertical portal such as Amazon.com or a corporate portal such as Dell is well positioned to expand horizontally and become a cybermediary dealing with its own as well as others' businesses. It must be pointed out that Yahoo, a portal, may expand in the same way. However, portals focus on either owning a share of other businesses or arranging advertising fees for referrals. In contrast, a cybermediary's main function is to manage commercial transactions of individuals in the distributed environment in addition to providing users with a convenient shopping location.
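The pay-once settlement role of a Clickshare-style intermediary can be sketched as a small accounting function. Everything here is illustrative: the item prices, the vendor names, and the 3% intermediary fee are assumptions, not figures from the text or from Clickshare's actual pricing.

```python
def settle(basket, fee_rate=0.03):
    """Take one lump-sum payment for a multi-vendor basket and compute
    what each vendor is owed, with the intermediary keeping a small fee.
    The fee rate is a hypothetical number for illustration only."""
    total = sum(price for _, price in basket)
    due = {}
    for vendor, price in basket:
        due[vendor] = due.get(vendor, 0.0) + price * (1 - fee_rate)
    fee = total - sum(due.values())
    return total, due, fee

# A buyer collects items from two member sites but pays once.
basket = [("bookshop", 20.0), ("music", 10.0), ("bookshop", 5.0)]
total, due, fee = settle(basket)
print(total)                                  # 35.0, paid once by the buyer
print({v: round(x, 2) for v, x in due.items()})
print(round(fee, 2))                          # 1.05 retained by the intermediary
```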
6.3. Association and Alliance
Conventional web portals such as Yahoo and Excite try to maximize advertising revenues by increasing the number of visitors to their sites and enticing them to stay longer to view advertisements. But advertisements as a form of marketing pose the age-old question: do the viewers really buy the advertised products?

Figure 4 An Intermediary Mediates Cross-market Transactions.

To ensure that the message presented on visitors' web pages is producing an effect, some advertisers require click-through responses from viewers. Instead of relying on passive imprints, click-through advertisements are paid for only if the viewer clicks on the ad and connects to the advertiser's website. Like hypertext links referring customers to other sites, these click-through ads are aimed at managing visitor traffic. In this way, portals become a customer referral service. The possibility of referring and managing traffic on the Web is fully realized at Web ventures specifically designed to generate revenues from referrals. For example, Amazon.com has over 100,000 associates on the World Wide Web, who maintain their own websites where visitors may click on hypertext links that directly transport them to the Amazon.com site. Associates (that is, intermediaries) in return receive referral fees based on actual purchases by those referred by their sites. Through the Associates Program, Amazon.com has effectively opened more than 100,000 retail outlets in a couple of years. Referral fees paid to associates have replaced advertising expenses and the costs of opening and managing retail stores.

Referral fees can be shared by consumers as well. SmartFrog,* for example, functions as a referrer to various online shops such as Amazon.com and eToys. After becoming a member, a Smart Frog customer visits vendor sites and, when he or she is ready to buy, goes to Smart Frog's website and clicks on the listed vendors, who pay 10% of the purchase price to Smart Frog. Out of this 10%, Smart Frog currently rebates half to its customers and keeps the remaining half. In this way, Smart Frog, not the vendors, pays for the advertising and marketing efforts needed to generate visitors. In this cybermediated economy, the costs to advertise and attract customers are paid to entrepreneurs and consumers instead of marketing firms and the media that carry advertisements. The intermediary's profit and the benefit for consumers (in the form of lowered prices) both originate from the transaction costs previously marked for advertising and marketing. This new business opportunity and enterprise model is enabled by the distinctive nature of the networked economy.
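The money flow in the SmartFrog arrangement is easy to make concrete. The 10% referral rate and the half-rebate are taken from the text; the $80 purchase amount is a made-up example.

```python
def referral_split(purchase, referral_rate=0.10, rebate_share=0.5):
    """Split a vendor-paid referral fee between the referrer and the
    customer, as in the SmartFrog model described above (rates from the
    text; the purchase amount below is hypothetical)."""
    fee = purchase * referral_rate      # paid by the vendor to the referrer
    rebate = fee * rebate_share         # passed back to the customer
    return fee, rebate, fee - rebate    # referrer keeps the remainder

fee, rebate, kept = referral_split(80.0)
print(fee, rebate, kept)  # 8.0 4.0 4.0
```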
7. ONLINE TRADING MARKETS AND AUCTIONS
Virtually all types of products are being sold through online auctions such as Onsale.com or eBay (see Figure 5). Firms and governments can implement sophisticated bidding and auction procedures for buying supplies and selling their products. Consumers can search and negotiate for the best prices through auctions. By eliminating physical distance and simply increasing the number of products and trading partners available to individual bidders, online auctions offer opportunities unrealized in physical markets. Such a market arrangement, however, may produce price levels (and thus competitiveness and profit levels) that may be fundamentally different from those of physical markets. In this section, we present an overview of the various types of auctions being implemented on the Internet and evaluate their effects on price levels, market competition, and overall economic gains and losses.
7.1. Types of Auctions
Auctions may be for a single object or a package of nonidentical items. Alternatively, auctions may be for multiple units, where many units of a homogeneous, standardized good are to be sold, such as gold bullion in the auctions conducted by the International Monetary Fund and the U.S. Treasury in the 1970s and the weekly auctioning of securities by the Treasury. Auctions discussed in this section are single auctions, where either an item is offered for sale and the market consists of multiple buyers making bids to buy, or an item is wanted and the market consists of multiple sellers making offers to sell. In either case, one side of the market consists of a single buyer or seller. Most online auctions are currently single auctions. On the other hand, multiple buyers and sellers may be making bids and offers simultaneously in a double auction. An example of a double auction is a stock trading pit. Because the market-clearing level of price may differ substantially between double and single auctions, we discuss double auctions in Section 7.5 below.

Auctions may also be classified according to the different institutional rules governing the exchange. These rules are important because they can affect bidding incentives and thus the type of items offered and the efficiency of an exchange. There are four primary types of auctions:

1. English auction. An English auction, also known as an ascending-bid auction, customarily begins with the auctioneer soliciting a first bid from the crowd of would-be buyers or announcing the seller's reservation price. Any bid, once recognized by the auctioneer, becomes the standing bid, which cannot be withdrawn. Any new bid is admissible if and only if it is higher than the standing bid. The auction ends when the auctioneer is unable to call forth a

* http://www.smartfrog.com; recently purchased by CyberGold (http://www.cybergold.com), which pays customers for viewing advertisements.

Figure 5 eBay Online Auction Site. Reproduced with permission of eBay Inc. Copyright eBay Inc. All rights reserved.
new higher bid, and the item is "knocked down" to the last bidder at a price equal to the amount bid. Examples include livestock auctions in the United States and wool auctions in Australia.

2. Dutch auction. Under this procedure, also known as a descending-bid auction, the auctioneer begins at some price level thought to be somewhat higher than any buyer is willing to pay, and the price is decreased in decrements until the first buyer accepts by shouting "Mine!" The item is then awarded to that buyer at the price accepted. A Dutch auction is popular for produce and cut flowers in Holland, fish auctions, and tobacco. Price markdowns in department stores or consignment stores also resemble Dutch auctions.

3. First-price auction. This is the common form of sealed, or written, bid auction, in which the highest bidder is awarded the item at a price equal to the amount bid. This procedure is thus called a sealed-bid first-price auction. It is commonly used to sell off multiple units, such as short-term U.S. Treasury securities.

4. Second-price auction. This is a sealed-bid auction in which the highest bidder is awarded the item at a price equal to the bid of the second-highest bidder. The procedure is not common, although it is used in stamp auctions. The multiple-unit extension of this second-price sealed-bid auction is called a competitive or uniform-price auction. If five identical items are sold through a competitive-price auction, for example, the five highest bidders will each win an item, but all will pay the fifth-highest price.

The distinguishing feature of an auction market is the presence of bids and offers and the competitive means by which the final price is reached. A wide variety of online markets qualify as auctions under this definition. Less than two years after eBay appeared on the Internet, revenues from online auctions have reached billions of dollars in the United States and are projected to grow into tens of billions in a few years. Major online auctions are in consumer products such as computers, airline tickets, and collectibles, but a growing segment of the market covers business markets where excess supplies and inventories are being auctioned off.
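The pricing rules of the sealed-bid formats above differ only in which bid sets the payment, which a short sketch makes explicit. The bid values are invented for illustration; the uniform-price rule follows the text's example, where the five winners all pay the fifth-highest price.

```python
def first_price(bids):
    """Sealed-bid first-price: highest bidder wins and pays his own bid."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

def second_price(bids):
    """Sealed-bid second-price: highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    return ranked[0], bids[ranked[1]]

def uniform_price(bids, units):
    """Uniform-price (competitive) auction for identical units: the top
    `units` bidders each win one item, and all pay the lowest winning bid."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:units]
    return winners, bids[winners[-1]]

bids = {"A": 120, "B": 100, "C": 90, "D": 80, "E": 70, "F": 60}
print(first_price(bids))       # ('A', 120)
print(second_price(bids))      # ('A', 100)
print(uniform_price(bids, 5))  # (['A', 'B', 'C', 'D', 'E'], 70)
```

Note how the same bids yield different payments: the winner's outlay drops from 120 to 100 once the price is set by the runner-up rather than by the winner's own bid.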
7.2. B2B Trading Markets
Online auctions for business are an extension of supply chain applications of the Internet, by which firms seek parts and supplies from a large pool of potential business partners. The real-time interaction in these auctions differs significantly from contract-based supply relationships, which may be stable but are often inefficient in terms of costs. A more flexible and responsive supply chain relationship is required as the manufacturing process itself becomes more flexible to meet changing demands in real time. Especially for digital products, the business relationship between suppliers and producers may be defined by the requirement for immediate delivery of needed components. Currently, business-to-business auctions are mainly those that sell excess or surplus inventories. For example, FastParts (http://www.fastparts.com) and FairMarket (http://www.fairmarket.com) offer online auctions for surplus industrial goods, mainly desired by corporate purchasing departments.

A significant impediment to widespread B2B auctions is the fact that online auctions allow only a single item to be exchanged at a time. However, corporate purchasing managers typically deal with thousands of items. This would require them to make and monitor bids on thousands of web pages simultaneously, each handling one item. A more efficient auction will allow them to place a bid on a combination of items. Unlike auctions organized around a single item or multiple units of a single item, this new market mechanism poses a serious challenge in guaranteeing market clearance because it must be able to match these combinatorial bids and offers, unbundling and rebundling them (see Section 7.5.2).
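Why matching combinatorial bids is hard can be seen from even a brute-force sketch of the seller's side: given bids on bundles of items, the market must pick the set of non-overlapping bids that maximizes total revenue. The bid data below are invented, and the exhaustive search is exponential in the number of bids, which is precisely the scaling problem the text points to.

```python
from itertools import combinations

def best_allocation(bids):
    """Brute-force winner determination for bundle bids: choose the
    non-overlapping set of bids with the highest total revenue.
    Illustrative only; real combinatorial auctions need smarter solvers."""
    best, best_value = [], 0
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            items = [item for bundle, _ in combo for item in bundle]
            if len(items) == len(set(items)):        # bundles must not overlap
                value = sum(price for _, price in combo)
                if value > best_value:
                    best, best_value = list(combo), value
    return best, best_value

# One bidder wants cpu+ram as a package; others bid on single items.
bids = [(("cpu", "ram"), 50), (("cpu",), 30), (("ram",), 25), (("disk",), 10)]
alloc, value = best_allocation(bids)
print(value)  # 65: selling cpu, ram, and disk separately beats the bundle bid
```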
7.3. Auctions in Consumer Markets
Auctions selling consumer goods are mainly found in used-merchandise and collectibles markets. Both at Onsale.com and eBay, all types of consumer goods are offered by individuals for sale. Ordinarily these items would have been sold through classified advertisements in local newspapers or in various "for sale" newsgroups. Online innovations stem from the size of the potential audience and the real-time, interactive bidding process. While online markets have an inherently larger reach than physical markets in terms of the number of participants, the success of eBay and Onsale.com is largely due to their allowing potential buyers and sellers an easy way to interact in real time.

eBay, for example, is simply an automated classified ad. The list of products offered, as well as those who buy and sell at the eBay site, resembles those found in the classified-ad market. Want and for-sale ads are largely controlled by newspapers, which generate as much as 30% of their total revenues from individual advertisers. Aware of the increasing threat posed by Internet-based classified ads, several newspapers experimented with online classified advertising based on their print versions. Nevertheless, large-scale initiatives by newspapers have all but failed. Surprisingly, eBay has succeeded in the same product category. While online classified ads offered by newspapers are nothing more than online versions of their print ads, eBay offers features that consumers find convenient. Classified ads provide buyers only with contact information for purchasing a product; the buyers must make telephone calls and negotiate with the sellers. On eBay's website, all these necessary processes are automated and made convenient. Sometimes the formula for successful online business is simply to maximize the interactivity and real-time capabilities of the online medium itself.

Besides successful online versions of classified ads, another type of online auction deals with perishable goods, which need to be sold quickly. Excess airline tickets and surplus manufacturing products make up this second type of products sold in online auction markets. For these products, the electronic marketplace offers the necessary global market reach and responsiveness. Despite the growing interest in online auctions, the majority of consumer goods, except those discussed above, are not suitable for auctions. For these items, conventional selling such as posted-price retailing will be more than adequate. Nevertheless, the flexibility offered by online trading may enable innovative market processes. For example, instead of searching for products and vendors by visiting sellers' websites, a buyer may solicit offers from all potential sellers. This is a reverse auction, discussed below. Such a buying mechanism is so innovative that it has the potential to be used for almost all types of consumer goods.
7.4. Reverse Auctions
A reverse auction is where a buyer solicits offers from sellers by specifying te
rms of trade that include product speci cation, price, delivery schedule and so on
. Once interested sellers are noti ed and assembled, they may compete by lowering
their offers until one is accepted by the buyer. Alternatively, offers may be ac
cepted as sealed bids until one is chosen. In this regard, a reverse auction is
more akin to a buyer auction, commonly found in business procurement and governm
ent contracting. But an online implementation of a reverse auctione.g., Priceline
.comdeals with more mundane varieties of consumer goods and services. At Pricelin
e.com website (http: / / www.priceline.com),
276
TECHNOLOGY
consumers can specify the maximum price they are willing to pay for airline tickets, hotel rooms, automobiles, and home mortgages (see Figure 6). Priceline.com then acts as a reverse auction market: it searches for and finds a seller who has a good that matches the buyer's requirements and is willing to provide the good or service at the specified terms. Like classified ads, the reverse auction mechanism is commonly found in physical markets. For example, it is used to determine suppliers and contractors in large-scale projects. In some sellers' markets where products are perishable, sellers compete to unload their products before they become spoiled or unserviceable. Not surprisingly, Priceline.com's main source of revenue is the airline industry, where unsold seats are perishable products that cannot be saved and resold at a later date. As in the case of building contractors and bidders, the advantage of a reverse auction hinges on the limited time span of certain products and services and the existence of competition among sellers. However, it is seldom used for manufactured consumption goods. An obvious logistical problem is that there are many more buyers than sellers, which would render reverse auctions almost impossible to handle. On the other hand, online markets may offer an opportunity for reverse auctions to be used more frequently, even for most consumer goods. In a sense, reverse auctions are a form of customer-pulled marketing and an ideal selling mechanism for the digital age. Consumer-initiated, or pulled, searches may produce some potential vendors who have matching products for sale, but contacts have to be made separately from the search result. Online reverse auctions combine the search process with direct contact and negotiation with the sellers. The prospect of using a reverse auction as an online business model depends on its ability to enable consumers to specify various aspects of the good or service they intend to purchase, because individual preferences may not be matched by existing products and services. In such cases, interested sellers may engage in ex post manufacturing to satisfy customer requirements. Online searches, reverse auctions, and ex post manufacturing represent the continuing innovation of the digital economy to satisfy personalized needs.
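The matching step of such a mechanism can be sketched in a few lines. This is an illustrative sketch only, not Priceline.com's actual algorithm; the seller names and prices are hypothetical.

```python
def reverse_auction(max_price, offers):
    """Accept the lowest ask that does not exceed the buyer's
    stated maximum price; return None if no seller qualifies."""
    eligible = [(ask, seller) for seller, ask in offers.items()
                if ask <= max_price]
    if not eligible:
        return None
    ask, seller = min(eligible)  # lowest ask wins the buyer's business
    return seller, ask

# A buyer names a maximum price of 200 for a ticket; three
# hypothetical sellers respond with their asks.
print(reverse_auction(200, {"AirA": 250, "AirB": 180, "AirC": 190}))
```

In a sealed-bid variant, the offers would simply be collected before the single call to `min`; in the open, descending variant, sellers would resubmit lower asks over several rounds before the match is made.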
7.5.
Emerging Market Mechanisms
Auctions are typically used when sellers are uncertain about market demand but want to maximize selling prices. Conversely, buyers use auctions to obtain the lowest prices for contracts and supplies. If this is the case, why would sellers use online auctions when they can obtain higher prices through posted-price selling? Revenues are maximized because traditional auctions usually involve one seller with multiple buyers or one buyer with multiple sellers. In each of these cases, the individual who sells an object or awards a contract is always better off as the number of bidders increases
Figure 6 Customers Name Their Prices at Priceline.com. Reproduced with permissio
n.
(Wang 1993; Bulow and Klemperer 1996). However, if the auction consists of many
sellers and buyers simultaneously, the result changes dramatically. Such an auct
ion is called a double auction market.
7.5.1.
Double Auction
In a typical auction, a single seller receives bids from multiple buyers or one
buyer collects offers from multiple sellers. In a double auction, multiple sellers and multiple buyers submit bids and offers simultaneously, similar to trading in
security markets. Multiple units of different products may be auctioned off at t
he same time. A double auction closely resembles supply-and-demand interactions
in physical markets. Because of this simple fact, a double auction results in a
very different price level from single auctions, described above. In a single au
ction, the selling price may be far above the competitive level due to competiti
on among the buyers. With many sellers and buyers, however, double auction marke
ts tend to generate competitive outcomes. A double auction is simply an interact
ive form of market where both buyers and sellers are competitive. Ideally, any e
ffort to promote competitiveness should include expanding double auctions and si
milar market mechanisms in the digital economy because they offer an opportunity
to raise economic efficiencies unsurpassed by any physical market organization. For auctioneers, however, single auctions generate substantially more revenues than double auctions. What incentive will they have to participate in such a competitive market? Both sellers and buyers will prefer single auctions. On the other hand, a variant type of double auction may present a unique opportunity
in the digital marketplace. For example, when buyers are looking for a bundle o
f products and services, a double auction with many sellers and buyers is necess
ary to clear the market.
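The clearing logic of a double auction can be illustrated with a minimal sketch, assuming single-unit bids and asks and a midpoint clearing price (one of several possible pricing rules, chosen here for simplicity):

```python
def clear_double_auction(bids, asks):
    """Match the highest bids with the lowest asks, trading as long
    as a buyer's bid meets or exceeds a seller's ask."""
    bids, asks = sorted(bids, reverse=True), sorted(asks)
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:        # no further mutually acceptable pairs
            break
        trades.append((bid + ask) / 2)  # midpoint pricing rule
    return trades

# Three buyers and three sellers submit prices simultaneously.
print(clear_double_auction([10, 8, 5], [4, 6, 9]))  # → [7.0, 7.0]
```

Note how the marginal pair (a bid of 5 against an ask of 9) is excluded: only mutually acceptable trades clear, which is what pushes the outcome toward the competitive level described above.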
7.5.2.
Bundle Trading
Bundle trading is an essential market mechanism when products are customized and buyers' needs must be satisfied by many sellers. Customized and personalized product
s often consist of a collection of complementary goods and services. For example
, a combination of airline tickets, hotel rooms, a rental car, meals, and amusem
ent park admission tickets can be bundled as a packaged leisure product. Some pr
oducts that are vertically related, such as a computer operating system and a we
b browser, may be provided by different vendors, requiring buyers to deal with m
ultiple sellers. While a purchase that involves multiple sellers may be carried
out through a series of transactions or auctions, bundle trading offers a simplified and efficient solution. In addition, products and services are increasingly bundl
ed and integrated rather than being sold as separate units. As a result, a diffe
rent kind of problem arises, namely how to facilitate markets that allow conveni
ent buying and selling of a wide range of products and services in one basket or
transaction. A technological solution to bundle trading is to provide an auctio
n that allows buyers and sellers to trade any number of goods in any combination
. For example, stock trading markets such as NYSE or Nasdaq are double auctions
that clear individual assets one by one. On the other hand, investors usually ho
ld their assets in a portfolio that consists of diverse assets, consistent with
their investment objectives on the overall returns and values. Nevertheless, phy
sical markets are unable to carry out unbundling and rebundling of assets offere
d and demanded in the market. The networked environment on the Internet offers a
possibility of allowing such portfolio-based transactions. In essence, the desi
red auction mechanism must unbundle and rebundle offers and bids presented by bo
th sellers and buyers. A prototype of such a mechanism is a portfolio trading al
gorithm developed for the investment community (Fan et al. 1999; Srinivasan et a
l. 1999). Its basic setup, however, can extend to procurement processes in manufacturing as well as bundle trading of physical products and personal services. Alternatively, third-party intermediaries may provide an agent-based solution to trading bundles. Like a travel agent, an intermediary can assemble a package of products and services to match a customer's need. Because of the increasing trend toward integrated products, intermediaries or agent-based service providers will
play a greater role in the Internet economy.
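As an illustration of agent-style bundle assembly, the sketch below greedily picks the cheapest seller for each item in a requested bundle. This is a deliberate simplification: the portfolio-trading algorithms cited above solve a richer combinatorial matching problem, and all seller names and prices here are hypothetical.

```python
def assemble_bundle(bundle, catalogs):
    """catalogs maps seller -> {item: price}. For each requested item,
    choose the cheapest available seller; fail if any item is missing."""
    assembled = {}
    for item in bundle:
        offers = [(prices[item], seller)
                  for seller, prices in catalogs.items() if item in prices]
        if not offers:
            return None  # bundle cannot be completed
        price, seller = min(offers)
        assembled[item] = (seller, price)
    return assembled

catalogs = {"TravelCo": {"flight": 300, "hotel": 150},
            "StayInc": {"hotel": 120, "car": 40}}
print(assemble_bundle(["flight", "hotel", "car"], catalogs))
```

A true bundle auction would additionally let sellers price combinations of items (e.g., flight plus hotel cheaper than the sum of the parts), which turns the matching step into a combinatorial optimization problem.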
8.
OUTLOOK AND CHALLENGES
New technologies and applications are continually developed and applied to busin
ess processes on the Internet. The basic infrastructure of the digital economy c
learly consists of computers and networking technologies. However, underlying co
mponent technologies and applications are evolving rapidly, allowing us to make
only a haphazard guess as to which will turn out to be most critical or most wid
ely accepted in the marketplace. But not all technologies are created equal: tho
se that aid commerce in a smart way will become critical in meeting the increasi
ng demand for more flexibility and responsiveness in a networked commercial environ
ment. The drive toward interoperable and distributed computing extends the very
advantage of the Internet and World Wide Web technologies. Broadband networking,
smart cards,
and mobile network applications bring convenience and manageability in the netwo
rk-centered environment. New technologies and computing models such as XML and m
ultitier distributed computing help firms to implement more efficient management solutions. These technologies, when combined, improve and reinvent existing business processes so as to meet the flexibility and responsiveness demanded by customers. Nevertheless, regulatory as well as technical variables may become an important factor that hinders future growth of the Internet economy. Despite a high level of efficiency enabled by technologies, free markets are sometimes unable to produce efficient results when:
- Lack of information and uncertainty about products and vendors results in market failures.
- Goods and services have characteristics of public goods that private economies do not produce sufficiently.
- An industry is dominated by a monopoly or a few oligopolistic firms.
Bulow, J., and Klemperer, P. (1996), "Auctions versus Negotiations," American Economic Review, Vol. 86, No. 1, pp. 180-194.
Chain Store Age (1998), "Sticking to the Web," June 1998, pp. 153-155.
Choi, S.-Y., and Whinston, A. B. (2000), The Internet Economy: Technology and Practice, SmartEcon Publishing, Austin, TX.
Choi, S.-Y., Stahl, D. O., and Whinston, A. B. (1997), The Economics of Electronic Commerce, Macmillan Technical Publishing, Indianapolis.
Fan, M., Stallaert, J., and Whinston, A. B. (1999), "A Web-Based Financial Trading System," IEEE Computer, April 1999, pp. 64-70.
Geng, X., and Whinston, A. B. (2000), "Dynamic Pricing with Micropayment: Using Economic Incentive to Solve Distributed Denial of Service Attack," Center for Research in Electronic Commerce, University of Texas at Austin, Austin, TX.
Lewis, T. R., and Sappington, D. E. (1991), "Technological Change and the Boundaries of the Firm," American Economic Review, Vol. 81, No. 4, pp. 887-900.
Srinivasan, S., Stallaert, J., and Whinston, A. B. (1999), "Portfolio Trading and Electronic Networks," Center for Research in Electronic Commerce, University of Texas at Austin, Austin, TX.
Wang, R. (1993), "Auctions versus Posted-Price Selling," American Economic Review, Vol. 83, No. 4, pp. 838-851.
1.
INTRODUCTION
The rapidly growing globalization of industrial processes is defining the economic
structure worldwide. The innovative power of a company is the deciding factor fo
r success in gaining market acceptance. Strengthening innovation, however, relie
s on the ability to structure the design phase technically so that innovative pr
oducts are generated with consideration of certain success factors such as time,
expense, quality, and environment. Furthermore, these products should not only
be competitive but also be able to maintain a leading position on the world mark
et. The opportunity to increase innovation lies in deciding on novel approaches
for generating products based on future-oriented, computer-integrated technologie
s that enable advances in technology and expertise. In the end, this is what gua
rantees market leadership. In recent years, progress in product development has
been made principally through the increasing use of information technology and t
he benefits inherent in the use of these systems. In order to fulfill the various requirements of the product development process with respect to time, costs, and quali
ty, a large number of different approaches have been developed focusing on the o
ptimization of many aspects of the entire development process. This chapter prov
ides an overview of the computer integrated technologies supporting product deve
lopment processes and of the current situation of knowledge management. Section
2 discusses the design techniques affected by computer-aided technologies and th
e corresponding tools. Section 3 gives a general introduction of concepts of com
puter integrated technologies and describes basic technologies and organizationa
l approaches. Section 4 explains possible architectures for software tools that
support product development processes. Section 5 gives a wide overview of the ap
plication and development of knowledge management.
2.
DESIGN TECHNIQUES, TOOLS, AND COMPONENTS
2.1.
Computer-Aided Design
2.1.1.
2D Design
Like traditional design, 2D geometrical processing takes place through the creat
ion of object contours in several 2D views. The views are then further detailed
to show sections, dimensions, and other important drawing information. As oppose
d to 3D CAD systems, 2D CAD design occurs through conventional methods using sev
eral views. A complete and discrepancy-free geometrical description, comparable
to that provided by 3D CAD systems, is not possible. Therefore, the virtual prod
uct can only be displayed to a limited extent. Two-dimensional pictures still ha
ve necessary applications within 3D CAD systems for hand-sketched items and the
creation of drawing derivations. Despite their principal limitation compared to
3D CAD systems, 2D systems are widely used. A 3D object representation is not al
ways necessary, for example, when only conventional production drawings are requ
ired and no further computer aided processing is necessary. In addition, a decis
ive factor for their widespread use is ease of handling and availability using r
elatively inexpensive PCs. The main areas of 2D design are the single-component
design with primitive geometry, sheet metal, and electrical CAD design for printed circuit boards. Depending on the internal computer data structure, classical drawing-oriented and parametrical design-oriented 2D CAD systems can be distinguished.
Geometry macros are internal computer representations of dimension and form-vari
able component geometries of any complexity. The macros are easily called up fro
m a database and copied by the user. The user may then easily insert the copied
macro into the current object representations. The advantages of the geometry ma
cros are seen especially when working with standard and repeat parts. A spatial
representation is desirable for a variety of tasks where primarily 2D representa
tions are utilized. Therefore, in many systems an existing function is available
for isometric figures that provide an impression of the object's spatial arrangement without the need for a 3D model. This approach is sufficient for the preparation of spare parts catalogs, operating manuals, and other similar tasks. Prismatic bodies and simplified symmetric bodies may also be processed. For the use of this fun
ction it is necessary to present the body in front and side views as well as pla
n and rear views where required. After the generation of the isometric view, the
body must be completed by blending out hidden lines. This happens interactively
because the system actually does not have any information available to do this
automatically. So-called 2½D systems generate surface-oriented models from 2D pla
nes through translation or rotation of surface copies. The main application for
these models lies in NC programming and documentation activities. To reduce redu
ndant working steps of geometrical modifications, the dimensioning of the part must be finished before the computer-supported work phases. This approach, which underlies many CAD systems today, is connected to the software's technical structure. In geometry-oriented dimensioning, it is not the measurement alone that is inferred but rather the totality of the geometrical design of the entity. The supporting role played b
y the system for interactive functioning is limited to automatic or semiautomati
c dimension generation. The dimensional values are derived from the geometry specified by the designer. The designer is forced to delete and redo the geometry part
ially or completely when he or she wishes to make changes. This creates a substa
ntial redundancy in the work cycle for the project. The principal procedures of
dimension-oriented modeling in parametric 2D CAD systems often correspond to con
ventional 2D systems. For the generation of geometrical contours, the previously
mentioned functions, such as lines, circles, tangents, trimming, move, and halv
ing, are used. The essential difference is the possibility for free concentratio
n on the constructive aspects of the design. In parametrical systems, on the oth
er hand, the measurements are not fixed but rather are displayed as variable parame
ters. Associative relationships are built between the geometry and measurements,
creating a system of equations. Should the geometry require modification, the para
meters may be varied simply by writing over the old dimensions directly on the d
rawing. This in turn changes the geometrical form of the object to the newly des
ired values. The user can directly in uence the object using a mouse, for example,
by clicking on a point and dragging it to another location, where the system th
en calculates the new measurement. The changes cause a verification of the geometri
cal relationships as well as the consistency of the existing dimension in the sy
stem. Another advantageous element of the parametrical system is the ability to
define the above-mentioned constraints, such as parallel, tangential, horizontal, vertical, and symmetrical constraints. Constraints are geometrical relationships fixed by formula. They specify the geometrical shape of the object and the relevant
system of equations necessary for geometry description. Thus, a tangent remains
a tangent even when the respective arc's dimension or position is changed. With th
e constraints, further relationships can be established, such as that two views
of an object may be linked to one another so that changes in one view also occur in the related view. This makes the realization of form variations fairly simple. Moder
n CAD systems include effective sketching mechanisms for the support of the conc
eptual phase. For the concept design, a mouse- or pen-controlled sketching mecha
nism allows the user to portray hand-drawn geometries relatively quickly on the
computer. Features aiding orientation, such as snap-point grids, simplify the alignment and correction of straight, curved, orthogonal, parallel, and tangential elements within predetermined tolerances. This relieves the u
ser of time-consuming and mistake-prone calculations of coordinates, angles, or
points of intersection for the input of geometrical elements. When specifying sh
ape, the parametrical system characteristics come into play, serving to keep the
design intent. The design intent containing the relative positioning is embodie
d by constraints that de ne position and measurement relationships. Component vari
ations are easily realizable through changes of parameter values and partially with the integration of spreadsheet programs in which table variations may be implemented. The existing equation system may be used for the simulation of k
inematic relations. Using a predetermined, iterative variation of a particular p
arameter and the calculation of the dependent parameter, such as height, simple
kinematic analysis of a geometrical model is possible. This simplifies packaging analysis, for example.
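The dimension-driven behavior of a parametric sketch, and the kind of parameter sweep just described, can be illustrated with a toy example. The specific associative constraint (height equals an aspect ratio times width) is invented for illustration only.

```python
def rectangle(width, aspect=0.5):
    """Regenerate the corner points from the driving parameters.
    The associative constraint height = aspect * width is re-solved
    on every change, as a parametric 2D system would do."""
    height = aspect * width
    return [(0, 0), (width, 0), (width, height), (0, height)]

# Editing the dimension regenerates the geometry; a simple iterative
# parameter variation mimics the kinematic analysis described above.
for w in (40, 50, 60):
    print(rectangle(w))
```

The key point is that the geometry is a function of the parameters rather than a fixed set of coordinates, so writing over a dimension regenerates the shape instead of forcing the designer to delete and redraw it.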
2.1.2.
3D Design
2.1.2.1. Surface Modeling The geometrical form of a physical object can be inter
nally represented completely and clearly using 3D CAD systems, in contrast to 2D
CAD systems. Therefore,
the user has the a
bility to view and present reality-like, complex entities. Thus, 3D CAD systems
provide the basis for the representation of virtual products. Through 3D modelin
g and uniform internal computer representation, a broader application range resu
lts so that the complete product creation process can be digitally supported. Th
is starts with the design phase and goes through the detailing, calculation, and
drawing preparation phases and on to the production. Because of the complete an
d discrepancy-free internal computer representation, details such as sectional d
rawings may be derived automatically. Also, further operations such as finite eleme
nt analysis and NC programs are better supported. The creation of physical proto
types can be avoided by using improved simulation characteristics. This in turn
reduces the prototype creation phase. Due to the required complete description o
f the geometry, a greater design expenditure results. Because of this greater expenditure, special requirements apply to the user-friendliness of generating component geometry and linking components into assemblies and entire products. 3D CAD systems are classified according to their internal computer representat
ions as follows (Grätz 1989):
- Edge-oriented models (wire frames)
- Surface-oriented models
- Volume-oriented models
The functionality and modeling strategy applied depend on the various internal c
omputer representations. Edge-oriented models are the simplest form of 3D models
. Objects are described using end points and the connections made between these
points. As in 2D CAD systems, various basic elements, such as points, straight l
ines, circles and circular elements, ellipses, and free forming, are available t
o the user and may be de ned as desired. Transformation, rotation, mirroring, and
scaling possibilities resemble those of 2D processing and are related to the 3D
space. Neither surfaces nor volumes are recognizable in wire frame models. There
fore, it is not possible to make a distinction between the inside and outside of
the object, and section cuts of the object cannot be made. Furthermore, quantifiab
le links from simple geometrical objects to complex components are not realizabl
e. This is a substantial disadvantage during the practical design work. An arbit
rary perspective is possible, but in general a clear view is not provided. Among
other things, the blending out of hidden lines is required for a graphical repr
esentation. Because of the lack of information describing the object's surfaces, t
his cannot take place automatically but must be carried out expensively and inte
ractively. Models of medium complexity are already difficult to prepare in this man
ner. The modeling and drawing preparation are inadequately supported. Another pr
oblem is the consistency of the geometry created. The requirements for the appli
cation of a 3D model are indeed given, but because of the low information conten
t the model does not guarantee a mistake-free wire frame presentation (Grätz 1989). Surface-oriented systems are able to generate objects whose surfaces are made up of numerous curved, analytically indescribable faces, known as free-form surfaces. One feature visible in an internal computer representation of free-form surfaces is their interpolated or approximated nature. Therefore, various processes have been developed, such as the Bézier approximation, Coons surfaces, the NURBS representation, and the B-spline interpolation (see Section 4.1).
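To give the flavor of such interpolation and approximation schemes, the following sketch evaluates a point on a Bézier curve with the de Casteljau algorithm; the surface case applies the same construction in two parameter directions. This is a minimal illustration, not production CAD code.

```python
def bezier_point(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] by repeated
    linear interpolation between neighbouring control points."""
    pts = list(control_points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic curve through three 2D control points, evaluated at t = 0.5.
print(bezier_point([(0, 0), (1, 2), (2, 0)], 0.5))  # → (1.0, 1.0)
```

The control points act as the "basis points and boundary conditions" mentioned above: the curve interpolates the first and last points and is merely attracted toward the interior ones.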
As with all other models, the computer internal representation of an object impl
ies characteristic methods and techniques used in generating free-form surfaces.
In general, basis points and boundary conditions are required for the surface d
escription. Free-form surfaces, such as the one portrayed in Figure 2, can be ge
nerated using a variety of techniques. A few examples of the generation of free-form surfaces will be described later. In addition to the simple color-shaded re
presentations, photorealistic model representations may be used for the review a
nd examination of designs. The models are portrayed and used in a reality-like en
vironment and are supported by key hardware components. This allows for the cons
ideration of various environmental conditions such as type and position of the l
ight source, the observer's position and perspective, weather conditions, and surface characteristics such as texture and reflectivity. Also, animation of the object
during design is possible for the review of movement characteristics.
Figure 2 Modeling Processes for Free-Form Surfaces.
When sections are generated, problems arise because of missing information describing the solid during cutting operations. T
he intersections are represented only by contour lines that are interactively ha
tched. Surface models are applied when it is necessary to generate a portrait of
free-formed objects. Such models are used in the automobile, shipbuilding, and
consumer goods industries, for example. The surface formations of the model, whi
ch are distinguished into functional and esthetic surfaces, can be used as base data for various applications employed later on. This includes later applications such as the generation of NC data for multiaxis processing. Another a
rea of application is in FEM analysis, where the surface structure is segmented into a finite element structure that is then subjected to extensive mechanical or thermal examinations. With surface models, movement simulations and collision analyse
s can be carried out. For example, mechanical components dependent on each other
can be reviewed in extreme kinematic situations for geometrical overlapping and
failure detection. 2.1.2.2. Solid Modeling A very important class of 3D modeler
s is formed by the volume systems. The appropriate internal computer representation is broken down into either boundary representation models (B-reps) or constructive solid geometry models (CSG). The representation depends on the basic volume modeling principles and constitutes the deciding difference between the 3D CAD system classes. Volume modelers, also known as solid modelers, can present geometrical objects in a clear, consistent, concise manner. This provides an important advantage over previously presented modeling techniques. Because each valid operation quasi-implicitly leads to a valid model, the creativity of the user is supported and the costly verification of geometric consistency is avoided. The main goal of volume
modeling is the provision of fundamental data for virtual products. This data i
ncludes not only the geometry but also all information gathered throughout the d
evelopment process, which is then collected and stored in the form of an integra
ted product model. The information allows for a broad application base and is av
ailable for later product-related processes. This includes the preparation of dr
awings, which consumes a large portion of the product development time. The gene
ration of various views takes place directly, using the volume model, whereby th
e formation of sections and the blending out of hidden lines occurs semiautomati
cally. Normally, volume models are created using a combination of techniques. Th
e strengths of the 3D volume modeling systems lie in the fact that they include
not only the complete spectrum of well-known modeling techniques for 2D and 3D wi
re frames and 3D surface modeling but also the new, volume-oriented modeling tec
hniques. These techniques contribute significantly to the reduction of costly input
efforts for geometrically complex objects. The volume model can be built up by
linking basic solid bodies. Therefore, another way of thinking is required in or
der to describe a body. The body must be segmented into basic bodies or provided
by the system as basic elements (Spur and Krause 1984). Each volume system come
s with a number of predefined simple geometrical objects that are automatically gen
erated. The objects are generated by a few descriptive system parameters. Sphere
s, cubes, cones, truncated cones, cylinders, rings, and tetrahedrons are example
s of the basic objects included in the system.
Many complex compo
nents can be formed from a combination of positioning and dimensioning of the ba
sic elements. Complex components are created in a step-by-step manner using the
following connecting operations:
- Additive connection (unification)
- Subtractive connection (differential)
- Section connection (average)
- Complementing
These connections are known as Boolean or set-theoretical operations. For object
generation using Boolean operations, the user positions two solids in space tha
t touch or intersect one another. After the appropriate functions are called up
(unification, average, complementing, or differential), all further steps are carri
ed out by the system automatically, and the resulting geometrical object is presented. The sequence in which the set-theoretical operations are applied is of decisive importance. Some CAD systems work with terms used for manufacturing te
chniques in order to provide the designer with intuitive meanings. This allows f
or the simple modeling of drilling, milling, pockets, or rounding and chamfering
. This results because each process can be represented by a geometrical object a
nd further defined as a tool. The tool is then defined in Boolean form, and the operation to remove material from the workpiece is carried out. To corr
ect the resulting geometry, it is necessary that the system back up the complete
product creation history. This is required to make a step-by-step pattern that can be followed to return the set-theoretical operations to their previous forms
. Set-theoretical operations provide an implementation advantage where complex m
odel definitions must be created with relatively few input commands. These definitions
are, if even realizable, extremely costly and time consuming by conventional me
thods. The disadvantage is that a relatively high conceptual capability is requi
red of the user. Solid models with a number of free-form surfaces are realized u
sing surface-oriented modeling techniques that correspond to the surface model p
resented. With one of these techniques, sweeping, one attains a 3D body through
an intermediate step when creating a 2D contour. The 2D contour is expanded to t
hree dimensions along a set curve. The desired contours are generated with known
2D CAD system functions and are called up as bases for sweep operations. In rot
ational sweeping, objects are generated through the rotation of surfaces, as wel
l as closed or open, but restricted, contour lines around a predefined axis. The ax
is may not cut the surface or contour. A prismatic body originates from the expa
nsion of a closed contour line. Shells or full bodies can be generated with swee
p operations. With sweep-generated shells, movement analysis and assembly inspec
tion can occur. The analysis takes place by the body's movement along a curve in
space. The modeling strategy implemented depends heavily on the actual task and
the basic functionality offered by the CAD system. This means that the strategy
implemented cannot be arbitrarily specified, as most CAD systems have a variety of
alternative approaches within the system. For the user, it is important that a C
AD system offer as many modeling methods as possible. Modeling, such as surface
and solid modeling, is important, and the ability to combine and implement these
methods in geometrical models is advantageous to the user. Because of the numer
ous modeling possibilities offered, the CAD system should have as few restrictio
ns as possible in order to allow the user to develop a broad modeling strategy.
Figure 3 shows a surface-oriented model of a bumper that was created using a sim
ple sweep operation of a spline along a control curve. Using a variable design h
istory, the air vent may be positioned anywhere on the bumper's surface without ti
me-consuming design changes during modeling. The cylindrical solid model that cu
ts and blends into the surface of the bumper demonstrates that no difference exi
sts between a solid and surface model representation. Both models may be hybridi
zed within components. A CAD system supporting hybridized modeling creates a maj
or advantage because the designer realizes a greater freedom of design.
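The set-theoretical operations described above can be sketched with a small CSG tree whose nodes answer point-membership queries. B-rep systems store explicit boundary surfaces instead, but the Boolean semantics are the same; the primitive and operation names here are illustrative only.

```python
class Sphere:
    """Primitive solid defined by a center point and a radius."""
    def __init__(self, center, radius):
        self.center, self.radius = center, radius
    def contains(self, p):
        return sum((a - b) ** 2 for a, b in zip(p, self.center)) <= self.radius ** 2

class Boolean:
    """CSG node combining two solids by a set-theoretical operation."""
    def __init__(self, op, left, right):
        self.op, self.left, self.right = op, left, right
    def contains(self, p):
        a, b = self.left.contains(p), self.right.contains(p)
        return {"union": a or b,
                "intersection": a and b,
                "difference": a and not b}[self.op]

# A hollow shell: subtract an inner sphere from an outer one.
shell = Boolean("difference", Sphere((0, 0, 0), 2.0), Sphere((0, 0, 0), 1.0))
print(shell.contains((1.5, 0, 0)))  # inside the wall
print(shell.contains((0.5, 0, 0)))  # in the subtracted cavity
```

Storing the tree rather than only the evaluated result corresponds to the product creation history mentioned above: any operation can be revisited or undone by editing its node and re-evaluating.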
2.1.3.
Parametrical Design
Another development in CAD technology is parametrical modeling (Frei 1993). The functioning of parametrical systems displays basic procedural differences when compared to conventional CAD systems. The principal modeling differences were explain
ed in the section on parametrical 2D CAD systems. In parametrical model descript
ions, the model can be varied in other areas through the selection and character
ization of additional constraints. Boundary conditions can be set at the beginni
ng of the design phase without xing the nal form of the component. Therefore, gene
ration of the geometry takes place through exact numerical input and sketching.
The designer can enter the initial designs, such as contour, using a 2D or 3D sk
etcher. In this manner, geometrical characteristics, such as parallel or perpend
icular properties, that lie within
184
TECHNOLOGY
Figure 3 Hybrid Surface and Solid Model. (CATIA System, Dassault Systèmes)
COMPUTER INTEGRATED TECHNOLOGIES AND KNOWLEDGE MANAGEMENT 185
For the designer, it is important that both parametric and nonparametric geometry can be combined during the development of a component. Complex components therefore need not be described completely parametrically; the designer decides in which areas it is advantageous to represent the geometry parametrically.
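The parametric idea can be illustrated with a minimal sketch (hypothetical code, not tied to any particular CAD system): geometry is regenerated from named parameters, and a constraint such as perpendicularity holds by construction, so a late parameter change does not prematurely fix the form.

```python
# Minimal sketch of parametric geometry: a plate whose outline is
# regenerated from named parameters instead of fixed coordinates.
# All names here are illustrative, not taken from a real CAD system.

def plate_outline(params):
    """Return the corner points of a rectangular plate.

    Perpendicularity of adjacent edges is guaranteed by construction:
    corners are derived from width/height, never entered directly.
    """
    w, h = params["width"], params["height"]
    return [(0.0, 0.0), (w, 0.0), (w, h), (0.0, h)]

# Initial design: boundary conditions set without fixing the final form.
params = {"width": 120.0, "height": 80.0}
outline = plate_outline(params)

# A later design change is a pure parameter edit; the geometry follows.
params["width"] = 150.0
outline = plate_outline(params)
print(outline)  # [(0.0, 0.0), (150.0, 0.0), (150.0, 80.0), (0.0, 80.0)]
```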
2.1.4.
Feature Technology
Most features in commercial CAD systems are built upon a parametric modeler. The ability to define one's own standard design elements also exists. These elements can be tailored to the designer's way of thinking and style of expression. The elements are then stored in a library and easily selected for use in the design environment. Additional attributes such as quantity, size, and geometrical status can be predetermined to create logical relationships. These constraints are then considered and maintained with every change made to an element. An example of a design feature is a rounding fillet. The fillet is selected from the feature library and its parameters modified to suit the required design constraints. The library of standard elements can be tailored to the user's requirements for production and form elements. This ensures consistency between product development and the ability to manufacture the product: only standard elements that match available production capabilities are made available. Feature definition, in the sense of concurrent engineering, is an activity in which all aspects of product creation in design, production, and further product life phases are considered. In most of today's commercially available CAD systems, the term feature is used mainly in terms of form. Simple parametric and positional geometries are made available to the user. Usually, features are used for complicated changes to basic design elements such as holes, recesses, and notches. Thus, features are seen as elements that simplify the modeling process, not as elements that increase information content. With the aid of feature modeling systems, user-defined or company-specific features can be stored in their respective libraries. With features, product design capabilities should be expanded from a pure geometric modeling standpoint to a complete product modeling standpoint. The designer can implement fully described product parts in his or her product model. Through the implementation of production features, the manufacturability of the product is implicitly considered. A shorter development process with higher quality is thereby achieved. By providing generated data for subsequent applications, features represent an integration potential.
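A feature library of the kind described above can be sketched as follows (illustrative names only; commercial systems differ in detail): a fillet feature carries parameters plus constraints that are re-checked on every change, and the library offers only elements matched to production capabilities.

```python
# Illustrative sketch of a feature library: user-defined design elements
# with parameters and constraints that are re-checked on every change.

class FilletFeature:
    def __init__(self, radius, min_radius=0.5, max_radius=20.0):
        # Constraint bounds reflect available production capabilities.
        self.min_radius, self.max_radius = min_radius, max_radius
        self.radius = None
        self.set_radius(radius)

    def set_radius(self, radius):
        # The constraint is maintained with every change to the element.
        if not (self.min_radius <= radius <= self.max_radius):
            raise ValueError("radius violates production constraints")
        self.radius = radius

# Library of standard elements; only manufacturable variants are offered.
feature_library = {"fillet_r2": lambda: FilletFeature(radius=2.0)}

fillet = feature_library["fillet_r2"]()
fillet.set_radius(5.0)   # allowed modification within the constraints
print(fillet.radius)     # 5.0
```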
2.1.5.
Assembly Technology
Complex products are made up of numerous components and assemblies. The frequently required design-in-context belongs to the area of assembly modeling: individual parts that are relevant to the design of the component are displayed on the screen. For instance, with the support of EDM functionality, the parts can be selected from a product structure tree and subsequently displayed. The part to be designed is then created in the context already defined. If the assembly is then worked on by several coworkers, methods for shared design are employed. For example, changes to a component can be passed on to coworkers working on related and adjacent pieces through system-supported information exchange. An example of a complex assembly in which shared design processes are carried out is the engine in Figure 4. The goal is to be able to build and then analyze complex products on the computer. Assembly modeling is supported by extensive analysis and representation methods such as interference checks as well as exploded and section views. Besides simple geometrical positioning of components, assembly modeling in the future will require more freedom of modeling independent of the various components. Exploiting the associations between components permits changes to be made in an assembly even late in the design phase, where normally, because of cost and time, such modifications would have to be left out. Because the amount of data input for assemblies and the processing time required to view an assembly on the screen are often problematic, many systems allow a simplified representation of components in an assembly, in which the display of details can be suppressed. Often, however, the characteristics of the simplified representation must be defined by the user. It is also possible for the CAD system to determine the precision with which the component is displayed, depending on the model section considered; zooming in on the desired location then provides a representation of the details. A representation of the assembly structure is helpful for modeling the assembly. Besides the improved overview, changes to the assembly can be carried out within the structure tree, and parts lists are easily prepared. The combination of assemblies as a digital mock-up enables control over the entire structure during the design process. Interference checks and other assembly inspections are carried out using features that make up the so-called packaging analyses of the software. For collision checking, voxel-based representations are suitable as well. Importing and portraying assemblies from other CAD systems is also advantageous; this is possible with the creation of adequate interfaces.
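The voxel-based collision check mentioned above can be sketched in a few lines (a simplification: real packaging analyses rasterize arbitrary solids, not just boxes): each part is converted to the set of grid cells it occupies, and a non-empty intersection of two sets indicates interference.

```python
# Sketch of a voxel-based interference check between two components.
# Each part is rasterized into the set of grid cells it occupies; a
# non-empty intersection of the sets indicates a potential collision.

def voxelize_box(bounds, cell=1.0):
    """Occupied cells of an axis-aligned box ((x0,y0,z0),(x1,y1,z1))."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    return {
        (i, j, k)
        for i in range(int(x0 // cell), int(x1 // cell))
        for j in range(int(y0 // cell), int(y1 // cell))
        for k in range(int(z0 // cell), int(z1 // cell))
    }

part_a = voxelize_box(((0, 0, 0), (4, 4, 4)))
part_b = voxelize_box(((3, 3, 3), (6, 6, 6)))    # overlaps part_a
part_c = voxelize_box(((10, 0, 0), (12, 2, 2)))  # clear of part_a

print(bool(part_a & part_b))  # True  -> interference detected
print(bool(part_a & part_c))  # False -> no interference
```

The set intersection makes the check independent of part complexity at the cost of rasterization accuracy, which is controlled by the cell size.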
Figure 4 An Engine as a Complex Assembly. (CATIA System, Dassault Systèmes)
For example, the fly-through function allows the user to navigate in real time through complex assemblies. This enables visual inspections, which, among other things, help in the recognition of component interference. The design of complex assemblies requires sharing of the design process through teamwork. This shared group work is supported by software that enables conferences over the Internet / intranet in which text, audio, and video information may be exchanged.
2.1.6.
Further Technologies
Further applications in the process chain of product development can be carried out based on the 3D models created in the early phases. Most commercial CAD systems are composed of modules and are implemented for the realization of a geometric model and other development-oriented solutions. Because the CAD systems are compatible with one another, an unhindered exchange of data is possible from the beginning of the design through the production stages. The systems must meet the requirements of continuity and openness, which are indispensable for the development of complex products. Continuity means that the product data are entered only once; they are saved in a central location and are available for use in carrying out other tasks without extensive conversion between the various forms of data. Openness means that, besides the ability to process data on various hardware platforms, it is possible to connect with other program systems within the same company and with external sites such as suppliers and other service providers. Further differentiation results from homogeneous and heterogeneous system environments. Using a common database for the application modules, for example in STEP format, avoids conversion and interface problems between the individual applications. External application programs and data exchange between purchaser and supplier are simplified by the use of data exchange formats such as IGES, SET, and VDA-FS. A uniform user interface improves usability and user comfort. The application modules can be related to the various tasks within the development process. In the various phases of virtual product development, different modules come into play depending on the task involved. The following principal tasks are presented:
Drawing preparation: Drawings can be derived from the solid model and displayed in various views and with dimensions (Figure 6). This includes the representation of details as well as component arrangements. For higher work efficiency, it should be possible to switch between the 2D drawing mode and the 3D modeling mode at any time without suffering conversion time and problems. Work is further simplified if the drawing and dimensioning routines provide and support the drawings in standardized formats, for example.

Parts list generation: The preparation of a parts list is important for product development, procurement, and manufacturing. The list is generated automatically from the CAD component layout. The parts list can then be exchanged through interfaces with PPS systems.

Analysis and simulation applications: Analysis and simulation components enable the characteristics of a product to be determined and optimized earlier in the development phase. Costly and time-consuming prototype production is thereby reduced to a minimum. Strength characteristics can be calculated with finite element methods (FEM). Besides stress-strain analyses, thermodynamic and fluid dynamic studies can be carried out. Preprocessors enable automated mesh generation based on the initial geometric model. Most CAD systems have interfaces to common FEM programs (e.g., NASTRAN, ANSYS) or have their own equation solvers for FEM analyses. Because an FEM analysis produces substantial amounts of data, postprocessing of the results is necessary. The results are then portrayed in deformation plots, color-coded presentations of the stress-strain behavior, or animations of the deformation. Material constants, such as density, can be defined for the components of the solid model. Other characteristics, such as volume, mass, coordinates of the center of gravity, and moments of inertia, can then be determined. These data are essential for a dynamic simulation of the product. For kinematic and dynamic simulations, rigid solid elements with corresponding dynamic characteristics are derived from the solid model (Figure 7). Various rigid solid elements can be coupled together from a library. The complete system is built up with further elements such as springs and dampers and then subjected to external forces. The calculation of the system dynamics is usually carried out using a commercial dynamics package. A postprocessor prepares the results for display on a screen, and an animation of the system can then be viewed.

Piping: Piping can be laid out within the 3D design using conduit application modules, as shown in Figure 8. Design parameters such as length, angle, and radius can be controlled. Intersections or overlaps of the conduit with adjacent assemblies can also be detected and repaired. The design process is supported through libraries containing standard piping and accessories. Parts lists are automatically derived from the piping design.
Figure 6 3D Model and the Drawing Derived from the Model. (AutoCAD System. Reproduced with permission of Autodesk)
Weld design: Weld design can also be supported by appropriate application modules (Figure 9). Designers and production engineers can jointly determine the required joining techniques and conditions for the assemblies. The weld type and other weld parameters, such as weld spacing and electrodes, are chosen to suit the material characteristics of the components to be joined. Process information such as weld length, cost, and time requirements can be derived from the weld model.

Integration of ECAD design: The design of printed circuit boards and cable layouts are typical examples of 2D designs that are related to the surrounding 3D components. In printed circuit board design, layout information, such as board size and usable and unusable area, can be exchanged and efficiently stored between the MCAD and ECAD systems. For the design of cable layouts (Figure 10), methods similar to those used for conduit design are implemented. This simplifies the placement of electrical conductors within assemblies.
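The mass-property evaluation mentioned under analysis and simulation above can be sketched as follows (the two-block assembly and its values are purely illustrative): given density and volume per solid, mass and the assembly's center of gravity follow directly.

```python
# Sketch of mass-property evaluation: given density and volume per
# solid, derive total mass and the assembly's center of gravity.

def mass_properties(solids):
    """solids: list of (volume_cm3, density_g_cm3, centroid_xyz)."""
    total_mass = 0.0
    moment = [0.0, 0.0, 0.0]
    for volume, density, centroid in solids:
        m = volume * density
        total_mass += m
        for axis in range(3):
            moment[axis] += m * centroid[axis]
    cog = tuple(s / total_mass for s in moment)
    return total_mass, cog

solids = [
    (1000.0, 7.85, (0.0, 0.0, 5.0)),   # steel block
    (1000.0, 2.70, (0.0, 0.0, 15.0)),  # aluminium block on top
]
mass, cog = mass_properties(solids)
print(round(mass, 1))   # 10550.0  (grams)
print(round(cog[2], 2)) # z of the center of gravity, nearer the steel block
```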
Figure 9 Example of a Weld Design. (Pro/WELDING System, PTC. Reprinted by permission of PTC.)
Figure 10 2D Design of a Wiring Harness (Pro/DIAGRAM System, PTC) and Parametric Cable Layout in the 3D Assembly (Pro/CABLING System, PTC). Reprinted by permission of PTC.
Standardized application interfaces allow access to the methods employed by the various CAD systems. Product data comprise all data, such as geometry, topology, technological information, and organizational data, that define the product. These data are generated during the design process, enhanced by the application, and then internally mapped.
2.2.2.
Standardization of CAD Interfaces
The primary objectives for the standardization of interfaces are:

- The unification of interfaces
- The prevention of dependencies between system producer and user
- The prevention of repeated processing of redundant data
Figure 11 CAD Components and Interfaces.
The standardization of interfaces leads to the following advantages (Stanek 1989):

- The possibility of an appropriate combination of system components
- Selection between various systems
- Exchange of individual components
- Expandability to include new components
- Coupling of various systems and applications
- Greater freedom in combining hardware and software products
2.2.2.1. Interfaces for Product Data Exchange
The description of product data over the entire product life cycle can be effectively improved through the use of computer-supported systems. For this reason, uniform, system-neutral data models for the exchange and archiving of product data are desired. These enable the exchange of data between CAD systems and other computer-aided applications. The internal computer models must be converted into one another through so-called pre- and postprocessor conversion programs. To minimize the costs of implementing such processors, a standardized conversion model is applied. The conversion models standardized to date are still of limited use because they can display only extracts of an integrated product model. Interfaces such as

- Initial Graphics Exchange Specification (IGES)
- Standard d'échange et de transfert (SET)
- Verband der Automobilindustrie-Flächenschnittstelle (VDA-FS)

have been designed and to some extent nationally standardized for geometry data exchange, mainly in the area of mechanical design (Anderl 1989). IGES is recognized as the first standardized format for product-defined data to be applied industrially (Anderl 1993; Grabowski and Anderl 1990; Rainer 1992). The main focus is the transfer of design data. IGES is used for the mapping of:
Figure 12 Overview of the ISO 10303 Structure.
for the EDM system to process every demand and provide the right information at the right time. The storage, management, and provision of all product-related data create the basis for:

- The integration of application systems for technical and commercial processes, as well as for office systems, in a common database
- The task-oriented supply of all operations with current and consistent data and documentation
- The control and optimization of business processes

Metadata are required for the administration of product data and documentation, enabling the identification and localization of product data. Metadata represent information about the creator, the date of generation, the release status, and the repository. The link between all relevant data is made by the workflow object, which is a document structure in which documents of various formats are filled out along a process chain. This structure is equivalent to part of the product structure and, after release, is integrated into the product structure. In general, this is the last step of a workflow. The modeling and control of a process chain are the task of workflow management, whereas access control is the task of document management. The users involved in the workflow are managed by groups, roles, and rights. The systems applied are integrated in the EDM system. Linking a particular document with its editing system is the responsibility of the document management program.
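The metadata described above can be sketched as a small record type (field names are illustrative, not taken from any particular EDM product): creator, date of generation, release status, and repository are kept per document, and release switches the status flag.

```python
# Sketch of the metadata an EDM system keeps per document: creator,
# date of generation, release status, and repository location.

from dataclasses import dataclass
from datetime import date

@dataclass
class DocumentMetadata:
    creator: str
    generated: date
    release_status: str = "in_work"     # e.g. in_work -> released
    repository: str = "vault://engineering"

    def release(self):
        # Release is the last step of the workflow; here only the
        # status flag is switched.
        self.release_status = "released"

meta = DocumentMetadata("j.smith", date(1999, 4, 12))
meta.release()
print(meta.release_status)  # released
```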
2.4.
Architecture and Components
The reference architecture from Ploenzke (1997) (Figure 13) describes the individual components of an EDM system from a functional point of view and illustrates the connections between the respective system components. The core of the reference architecture is the engineering data. The application-oriented and system-overlapping functions are applied to the core data. The engineering data are found within the model data and broken down into product-defined data and metadata. Model data, such as drawing data, parts lists, text files, raster data from documents, and data in other formats, are not interpreted by the EDM system but are still managed by it. Metadata are interpreted by the EDM system and contain information regarding the management and organization of the product data and documents. The application-oriented functions support the management of the model data. Although the model data may be made up of various kinds of data, the management of these data, with the help of application-oriented functions, must be able to summarize the data logically in maps and folders. Relationships between the documents and product data can then be determined. Important application-oriented functions are (Ploenzke 1997):

- General data management
- Drawing data management
Figure 13 Reference Architecture of an EDM System. (From Ploenzke 1994. Reprinted by permission of CSC Ploenzke.)
Figure 14 Architecture of an EDM System with Integrated Applications. (From Stephan 1997)
Figure 15 Availability of Function Modules in EDM Systems. (From Ploenzke 1997. Reprinted by permission of CSC Ploenzke.)
ler elements. The FEM process enables the breakdown of a larger structure into elements, thus enabling the description of component behavior. Very complex entities therefore become solvable. Calculation problems from real-world applications are usually quite complex. For example, in crash testing it is necessary to reproduce or simulate the complete vehicle structure even when it consists of many different components and materials (Figure 16). The finite elements are described geometrically using a series of nodes and edges, which in turn form a mesh. The formation of the mesh is calculable. The complete behavior of a structure can then be described through the composition of the finite elements. An FEM computation can be divided into the following steps:

- Breakdown of the structure into finite elements
- Formulation of the physical and mathematical description of the elements
- Composition of a physical and mathematical description for the entire system
- Computation of the description according to requirements
- Interpretation of the computational results
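The steps above can be sketched for the simplest possible case, an axial bar fixed at one end and loaded by a force at the other, broken into 1D elements (illustrative only; real FEM codes handle 2D/3D elements and general meshes):

```python
# Minimal 1D finite element computation: assemble the global stiffness
# matrix from element matrices, apply the boundary condition, and solve.

def bar_fem(E, A, L, F, n_elem):
    n = n_elem + 1                      # number of nodes
    k = E * A / (L / n_elem)            # stiffness of one element
    # Assemble the global stiffness matrix from element matrices.
    K = [[0.0] * n for _ in range(n)]
    for e in range(n_elem):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    f = [0.0] * n
    f[-1] = F                           # load at the free end
    # Boundary condition u[0] = 0: drop row/column 0, then solve the
    # reduced system by Gaussian elimination (K is SPD after the BC).
    Kr = [row[1:] for row in K[1:]]
    fr = f[1:]
    m = len(fr)
    for i in range(m):
        for j in range(i + 1, m):
            c = Kr[j][i] / Kr[i][i]
            Kr[j] = [a - c * b for a, b in zip(Kr[j], Kr[i])]
            fr[j] -= c * fr[i]
    u = [0.0] * m
    for i in reversed(range(m)):
        u[i] = (fr[i] - sum(Kr[i][j] * u[j] for j in range(i + 1, m))) / Kr[i][i]
    return [0.0] + u                    # nodal displacements

u = bar_fem(E=210e9, A=1e-4, L=2.0, F=1000.0, n_elem=4)
# For this case the tip displacement matches the analytical value F*L/(E*A):
print(abs(u[-1] - 1000.0 * 2.0 / (210e9 * 1e-4)) < 1e-12)  # True
```

For a prismatic bar, linear elements reproduce the exact nodal displacements, which makes the sketch easy to verify against the closed-form solution.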
The FEM programs commercially available today process these steps in essentially three program phases:

1. Preprocessing: preparation of the operations, mesh generation
2. Solving: the actual finite element computation
3. Postprocessing: interpretation and presentation of the results

Preprocessing entails mainly geometric and physical description operations. The determination of the differential equations is implemented in the FEM system, whereby the task of the user is limited to selecting a suitable element type. In the following sections, the three phases of an FEM computation are presented from a user's point of view.

2.5.2.1. FEM Preprocessing
The objective of preprocessing is the automation of the meshing operation. The following data are generated in this process:

- Nodes
- Element types
- Material properties
- Boundary conditions
- Loads
Figure 16 Opel Omega B Crash Model, 76,000 elements. (From Kohlhoff et al. 1994. Reprinted by permission of VDI Verlag GmbH.)
The data sets may be generated in various ways. For the generation of the mesh, there are basically two possibilities:

- Manual, interactive modeling of the FEM mesh with a preprocessor
- Modeling of the structure in a CAD system with subsequent automated or semiautomated mesh generation

The generation of the mesh should be oriented toward the expected or desired calculation results. It is meaningful to simplify the geometry and to consider a finer-meshed area early in the FEM process. In return, the user must have some experience with FEM systems in order to work efficiently with the technology available today; completely automatic meshing for arbitrarily complex structures is not yet possible (Weck and Heckmann 1993). Besides interactive meshing, commercially available mesh generators exist. The requirements of an FE mesh generator also depend on the application environment. The following requirements for automatic mesh generators, for both 2D and 3D models, should be met (Boender 1992):

- The user of an FE mesh generator should have adequate control over the mesh density for the various parts of the component. This control is necessary because the user, from experience, should know which areas of the part require a higher mesh density.
- The user must be able to specify the boundary conditions. For example, it must be possible to determine the location of forces and fixed points on a model.
- The mesh generator should require a minimum of user input.
- The mesh generator must be able to process objects that are made up of various materials.
- The generation of the mesh must occur in a minimal amount of time.
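The first requirement, user control over mesh density, can be illustrated in one dimension (a deliberately simplified sketch; real generators grade 2D/3D meshes): node spacing is graded geometrically so that elements become finer toward a region where the user expects high gradients.

```python
# Illustrative sketch of user-controlled mesh density in 1D: node
# spacing is graded so that elements get finer toward x = 0, where a
# high stress gradient is expected.

def graded_nodes(length, n_elem, ratio):
    """Place n_elem + 1 nodes on [0, length]; each element is `ratio`
    times longer than its neighbor closer to x = 0."""
    weights = [ratio ** i for i in range(n_elem)]
    scale = length / sum(weights)
    nodes = [0.0]
    for w in weights:
        nodes.append(nodes[-1] + w * scale)
    return nodes

nodes = graded_nodes(length=10.0, n_elem=4, ratio=2.0)
print([round(x, 3) for x in nodes])  # [0.0, 0.667, 2.0, 4.667, 10.0]
```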
The mesh created by the FE mesh generator must meet the following requirements:

- The mesh must be topologically and geometrically correct. The elements may not overlap one another.
- The quality of the mesh should be as high as possible, so that the mesh can be compared to analytical or experimental examinations.
- Corners and outside edges of the model should be mapped exactly using suitable node positioning.
- The elements should not cut any surfaces or edges. Further, no unmeshed areas should exist.

At the end of the mesh refinement, the mesh should match the geometrical model as closely as possible; even a slight simplification of the geometry can lead to large errors (Szabo 1994). The level of refinement should be greater in areas where the gradient of the function to be calculated is high. This is determined with an automatic error estimation during the analysis, which provides new points for further refinement. Various processes have been developed for automatic mesh generation. The motivation to automate mesh generation results from the fact that manual generation is very time consuming and quite prone to mistakes. Many processes based on 2D mesh generation, however, are increasingly being extended to 3D. The most frequently used finite element forms are three- and four-sided elements in 2D and tetrahedrons and hexahedrons in 3D. For automatic mesh generation, triangles and tetrahedrons are suitable element shapes, whereas hexahedrons provide better results in the analysis phase (Knothe and Wessels 1992).

2.5.2.2. FEM Solution Process
In the solution process, equation systems are solved that depend on the type of examination being carried out. As a result, various algorithms must be provided that allow for efficient solution of the problem. The main requirements for such algorithms are high speed and high accuracy. Linear static analyses require only the solution of a system of linear equations. Dynamically nonlinear problems, on the other hand, require the application of highly developed integration methods, most of which are based on further developments of the Runge-Kutta method. To reduce computing time, matrices are often reordered; this permits more effective handling of the calculations. The objective is to arrange the coefficients into a banded matrix concentrated around the diagonal. An example of an FEM calculation sequence is shown in Figure 17.

Figure 17 Sequence of a Finite Element Calculation.

2.5.2.3. FEM Postprocessing
Because the results of an FEM computation deliver only nodes with their displacements and elements with their stresses or eigenforms in numerical representation, postprocessors are required in order to present the results in graphical form (Figure 18). The postprocessing manages the following tasks:

- Visualization of the calculated results
- Plausibility control of the calculation

The performance and capability of postprocessors are constantly increasing and offer substantial presentation possibilities for:

- Stress areas and principal stresses
- Vector fields for forces, deformation, and stress characteristics
- Presentation of deformations
- Natural forms (eigenmodes)
- Transient deformation analysis
- Temperature fields and temperature differences
- Velocities

It is possible to generate representations along any curve of the FEM model or, for example, to present all forces on a node. Stress fields of a component can be shown on the surface or within the component. The component can thus be reviewed in various views.
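A typical postprocessing step, reducing the numerical stress tensor per element to a scalar that can be color-coded on the model, can be sketched as follows (the von Mises equivalent stress is a common choice; the input values are illustrative):

```python
# Sketch of a postprocessing step: reducing the six stress components
# per element to the von Mises equivalent stress, a scalar suitable
# for color-coded presentation on the model.

import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent stress from the six stress tensor components."""
    return math.sqrt(0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
                     + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2))

# Uniaxial tension: the equivalent stress equals the axial stress.
print(von_mises(100.0, 0.0, 0.0, 0.0, 0.0, 0.0))  # 100.0

# Pure shear: the equivalent stress is sqrt(3) times the shear stress.
print(round(von_mises(0.0, 0.0, 0.0, 50.0, 0.0, 0.0), 2))  # 86.6
```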
Figure 19 Layout of a Computer-Supported Process Chain.
The exchange of data within a coupled process chain takes place directly between the supporting systems. Information transferred between two respective systems is handled with the help of system-specific interfaces. This realization of information transfer quickly reaches its useful limits as the complexity of the process chains increases, because the number of necessary links grows disproportionately and the information from the various systems is not always transferable (Anderl 1993). Characteristic of integrated process chains is a common database in the form of an integrated product model that contains a uniform information structure. The information structure must be able to represent all relevant product data in the process chain in one model and to portray the model with a homogeneous mechanism. A corresponding basis for the realization of such information models is offered by the Standard for the Exchange of Product Model Data (ISO 10303, STEP) (ISO 1994). A process chain with common product data management based on an EDM system is a compromise between coupled and integrated process chains. With the use of common data management systems, organizational deficiencies are avoided. Thus, the EDM system offers the possibility of arranging system-specific data in comprehensive product structures. The relationship between documents, such as a CAD model and the respective working drawing, can be represented with corresponding references. A classification system for parts and assemblies aids in locating the required product information (Jablonski 1995). On the other hand, with joint product data management, conversion problems for system-specific data occur in process chains based on an EDM system, just as they do in coupled process chains. This makes it impossible to combine individual documents into an integrated data model. References between various systems that relate to objects within a document, rather than to the document itself, cannot be made. Besides requiring support for the processing of partial tasks and the necessary data transfer, process chains also require organizational, sequential support. This includes planning tasks as well as process control and monitoring. Process planning, scheduling, and resource planning are fundamental planning requirements. Process planning involves segmenting the complete process into individual activities and defining the sequence structure by determining the order and correlation of the activities. In the use of process plans, the following cases are differentiated:

- One-time, detailed process planning (all processes are executed similarly)
- Case-by-case planning of individual processes
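The segmentation into activities and the definition of the sequence structure can be sketched as a precedence graph (activity names are illustrative); a topological sort then yields a valid execution order.

```python
# Sketch of process planning: the complete process is segmented into
# activities, and the sequence structure is defined by precedence
# relations. A topological sort yields a valid execution order.

from graphlib import TopologicalSorter  # Python 3.9+

# activity -> set of activities that must be finished first
precedence = {
    "design":           set(),
    "process_planning": {"design"},
    "nc_programming":   {"process_planning"},
    "tool_procurement": {"process_planning"},
    "manufacturing":    {"nc_programming", "tool_procurement"},
}

order = list(TopologicalSorter(precedence).static_order())
print(order.index("design") < order.index("manufacturing"))  # True
```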
Therefore, using product models derived from information within both the design and the manufacturing data, it is possible to integrate specific work functions. The models then serve as a basis for all system functions and provide support in a feature-based form. The feature approach is based on the idea that product formation boundaries or limits are explicitly adhered to during the design and production-planning stages. This ensures that the end result of the product envisioned by the designer is not lost or overlooked during the modeling phase. To realize the form and definition incorporated by the designer in a product, it is essential that integrated design and production-planning features exist. Accelerating the product development process is achievable through the minimization of unnecessary data retrieval and editing. Efforts to parallelize tasks in the sense of concurrent engineering require integrated systems for design and production planning.
Higher product and documentation quality is achievable with the use of integrated systems. Through integration, all functions of a product model are made available to the user, and errors that may occur during data transmission between separate systems are avoided. Because of the high responsibility for cost control during product development, estimates for design decisions are extremely important. Cost estimates reveal the implications of design decisions and help avoid costly design mistakes. For effective estimation of costs to support the design, it is necessary to be able to access production planning data and functions efficiently. Integrated systems provide the prerequisites for the feedback of information from the production planners to the designers, thus promoting qualitatively better products. The following forms of feedback are possible:

- Abstract production planning experience can be made available to the designer in the form of rules and guidelines. A constant adaptation of the production planning know-how must be guaranteed.
- Design problems or necessary modifications discovered during the production planning phases can be directly represented in an integrated model. Necessary modifications can be carried out in both the production planning and design environments.
3.3.
Methodical Orientation
The virtualization of product development is a process of information acquisition in which, by the end of the product development, all necessary generated information is made available. Assuming that humans are at the heart of information gathering and that the human decision-making process drives product development, virtual product-creation methods must support the decision maker throughout the product creation process. Product-creation processes are influenced by a variety of variables. The form the developed product takes is determined by company atmosphere, market conditions, and the designer's own decisions. Also influential are the type of product, materials, technology, complexity, number of model variations, material costs, and the expected product quantity and batch sizes. Product development is not oriented toward the creation of just any product, but rather a product that meets the demands and desires of the consumer while fulfilling the market goals of the company. The necessary mechanisms must provide a correlation between the abstract company goals and the goals of the decisions made within the product development process. For example, to ensure the achievement of cost-minimization objectives, product developers can use mechanisms for early cost estimation and selection, providing a basis for the support of the most cost-effective solution alternatives (Hartung and Elpet 1986). To act upon markets characterized by individual consumer demands and constant preference changes, it is necessary to be able to supply a variety of products within relatively short development times (Rathnow 1993; Eversheim 1989). This means that product structuring must take into account the prevention of unnecessary product variation and that parallel execution in the sense of concurrent engineering is of great importance. Methods intended to support the product developer must be oriented not only toward the contents of the problem but also toward the designer's way of thinking. Therefore, a compromise must be made between the methodical problem-solving strategy and the designer's creative thought process. Product-development methods can be separated into process-oriented and product-oriented categories. The main concern in process-oriented product development is the design process; the objective is the indirect improvement of the design or, more precisely, a more efficient design process. Product-oriented methods concern the product itself and its direct improvement. Various process-oriented methods are discussed below.

Fundamentals
Due to the ever-increasing demand for shorter product development cycles, a new technology known as rapid prototyping (RP) has emerged. RP is the organizational and technical integration of all processes required to construct a physical prototype. The time from order placement to prototype completion can be substantially reduced with the application of RP technology (Figure 20). RP comprises generative manufacturing processes, known as RP processes, conventional NC processes, and follow-up technologies. In contrast to conventional machining processes, RP processes such as stereolithography, selective laser sintering, fused deposition modeling, and laminated object manufacturing enable the production of models and samples without forming tools or molds. RP processes are also known as free-form manufacturing or layer manufacturing.

208
TECHNOLOGY

Figure 20 Acceleration Potential through CAD and RP Application.

Prototypes can be divided into design, function, and technical categories. To support the development process, design prototypes are prepared in which proportion and ergonomic models are incorporated. Design prototypes serve for the verification of haptic, esthetic, and dimensional requirements as well as the constructive conception and layout of the product. RP-produced samples are made from polycarbonates, polyamides, or wood-like materials and are especially useful for visualization of the desired form and surface qualities. For function verification and optimization, a functional product sample is required. The application of production-series materials is not always necessary for this purpose. Functional prototypes should, however, display similar material strength characteristics. Technical prototypes, on the other hand, should be produced using the respective production-series materials and, whenever possible, the intended production-line equipment. The latter serve for purposes such as customer acceptance checks and the verification of manufacturing processes.

Prototypes can be utilized as individual parts, product prototypes, and tool prototypes. Normally, product prototypes consist of various individual prototypes and therefore require some assembly. This results in higher required levels of dimensional and shape precision. Geometric complexity is an essential criterion for the selection of a suitable prototype-production process. If, for example, the prototype is rotationally symmetric, the conventional NC turning process is sufficient; in this case, the presently available RP processes provide no savings potential. However, the majority of industry-applied prototypes contain complex geometrical elements such as free-form surfaces and cut-outs. The production of these elements is among the most demanding tasks in prototype production and, because of the high amount of manual work involved, one of the most time-consuming and cost-intensive procedures. RP processes, however, are subject to virtually no geometrical restrictions, so the build time and costs of producing complex geometrical figures are greatly reduced (König et al. 1994). Another criterion for process selection is the required component dimensions and quality characteristics such as shape, dimensional accuracy, and surface quality.
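The geometry-driven selection rule described above can be sketched as a small decision function. The categories and the function name are illustrative only; a real process-planning system would weigh many more criteria (accuracy, surface quality, material, lot size).

```python
# Sketch of the selection rule from the text: geometric complexity
# decides between conventional NC turning and an RP process.
# Purely illustrative; not a real planning API.

def select_prototyping_process(rotationally_symmetric: bool,
                               has_freeform_surfaces: bool,
                               has_cutouts: bool) -> str:
    if rotationally_symmetric and not (has_freeform_surfaces or has_cutouts):
        # Conventional turning is sufficient; RP offers no savings here.
        return "NC turning"
    # Complex geometry: RP builds it layer by layer without forming tools.
    return "RP process (e.g., stereolithography, selective laser sintering)"
```

A turned shaft would thus stay on the lathe, while a housing with free-form surfaces would be routed to an RP process.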
3.4.1.
Systemization of Rapid Prototyping Processes
With the implementation of CAD/CAM technology, it is possible to produce prototypes directly on the basis of a virtual model. The generation of the geometry using RP processes takes place quickly, without the need for molds and machine tools. The main feature of these processes is the way the workpiece is formed. Rather than clamping a workpiece and removing material, as in conventional manufacturing, RP processes build up a fluid or powder in layers to form a solid shape. Starting from the CAD model, the surfaces of the component are fragmented into small triangles through a triangulation process. The fragments are then transformed into the de facto RP standard format known as STL (stereolithography format). The STL format describes the component geometry as a closed surface composed of triangles, each with the specification of a directional vector. Meanwhile, most CAD systems provide such formatting interfaces as part of their standard software. Another feature of this process is that CAD-generated NC code describing the component geometry allows for a slice process in which layers of the object may be cut away at desired intervals or depths. Starting with the basic contour derived from the slicing process, the workpiece is subsequently built up in layers during the actual forming process.

Differences exist between the RP processes in process principles and execution. RP processes can be classified by either the state of the raw material or the method of prototype formation. The raw materials for RP processes are in fluid, powder, or solid states (Figure 21). As for the method of prototype creation, the component can either be formed directly as a 3D object or built up in a continuous process of layers placed upon one another (Figure 22).
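The STL idea described above, a closed surface of triangles, each carrying a directional (normal) vector, can be made concrete with a tiny ASCII STL writer. The single facet and all function names are illustrative; real exporters also handle binary STL and validate that the surface is closed.

```python
# Minimal sketch of the ASCII STL format: each facet is one triangle
# plus its normal vector. The one-facet "mesh" here is illustrative.

def facet_to_stl(normal, v1, v2, v3) -> str:
    lines = [f"  facet normal {normal[0]} {normal[1]} {normal[2]}",
             "    outer loop"]
    for v in (v1, v2, v3):
        lines.append(f"      vertex {v[0]} {v[1]} {v[2]}")
    lines += ["    endloop", "  endfacet"]
    return "\n".join(lines)

def mesh_to_stl(name, facets) -> str:
    """Serialize a list of (normal, v1, v2, v3) tuples as ASCII STL."""
    body = "\n".join(facet_to_stl(*f) for f in facets)
    return f"solid {name}\n{body}\nendsolid {name}"

# One triangle in the z = 0 plane, normal pointing in +z.
stl_text = mesh_to_stl("part", [((0, 0, 1), (0, 0, 0), (1, 0, 0), (0, 1, 0))])
```

The slicer mentioned in the text then intersects such triangles with horizontal planes at the desired layer heights to obtain the contour for each layer.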
3.5.
Digital Mock-up
Today the verification and validation of new products and assemblies rely mainly on physical mock-ups. The increasing number of variants and the need for higher product and design quality require the concurrent validation of several design variants on the basis of digital mock-ups. A digital mock-up (DMU) can be defined as "a computer-internal model for spatial and functional analysis of the structure of a product model, its assembly and parts respectively" (Krause et al. 1999). The primary goal of DMU is to ensure the ability to assemble a product at each state of its development and simultaneously to achieve a reduction in the number of physical prototypes. Oriented to the product development cycle, tools for DMU provide methods and functionality for the design and analysis of product components and their function. Modeling methods are divided into several categories: space management, multiple views, configuration management of product variants and versions, management of relations between components, and handling of incomplete models. Simulations are targeted at analyzing the assembly and disassembly of a product as well as the investigation and verification of ergonomic and functional requirements.

The constituent parts of a DMU tool are distinguished into components and applications. Components are the foundation of applications and consist of modules for data organization, visualization, simulation models, and DMU/PMU correlations. Main applications are collision checks, assembly, disassembly, and the simulation of ergonomics, functionality, and usage aspects. Because a complex product like an airplane can consist of more than 1 million parts, handling a large number of parts and their attributes efficiently is necessary. This requires that DMU methods use optimized data structures and algorithms. Besides basic functions such as the generation of new assemblies, the modification of existing assemblies, and the storage of assemblies as product structures with related parts in heterogeneous databases, advanced modeling methods are required in order to ensure process-oriented modeling on the basis of DMU.

Figure 21 Classification of RP Processes with Respect to the Material-Generation Process. (From Kruth 1991)

Figure 22 Classification of RP Processes with Regard to the Form-Generation Process. (From Kruth 1991)

The main methods in this context are (BRITE-EURAM 1997):
- Organization of spaces: allocating and keeping open functional and process-oriented spaces in DMU applications. The major problem is the consideration of concurrent requirements concerning the spaces.
- Organization of views: a view is an extract of the entire structure in a suitable presentation dedicated to particular phases and applications.
- Handling of incomplete models: incomplete and inconsistent configurations are allowed during the development process in order to fulfill users' requirements regarding flexibility. Examples are symbolically and geometrically combined models with variable validity.
- Configuration management of product structure: management of versions, variants, and multiple use.

Besides these methods, consistent safety management has to be taken into account in a distributed, cooperative environment. Therefore, a role- and process-related access mechanism must be implemented that allows the administrator to define role-specific modeling restrictions. The application of such technologies enables a company to manage outsourced development services.
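The collision checks named above, over assemblies with up to a million parts, depend on the optimized data structures the text mentions. A minimal sketch of the common first stage: reduce each part to an axis-aligned bounding box (AABB) and filter candidate pairs cheaply before any exact geometric test. Part names and the brute-force pairing are illustrative; production DMU tools use hierarchical structures instead of testing all pairs.

```python
# Illustrative DMU-style collision pre-check using axis-aligned
# bounding boxes (AABBs). All part names and boxes are hypothetical.

from itertools import combinations

def boxes_overlap(a, b) -> bool:
    """a, b = ((xmin, ymin, zmin), (xmax, ymax, zmax)); touching counts."""
    return all(a[0][i] <= b[1][i] and b[0][i] <= a[1][i] for i in range(3))

def candidate_collisions(parts: dict) -> list:
    """Return name pairs whose bounding boxes intersect (brute force)."""
    return [(m, n) for (m, a), (n, b) in combinations(parts.items(), 2)
            if boxes_overlap(a, b)]

parts = {
    "bolt":    ((0, 0, 0), (1, 1, 1)),
    "bracket": ((0.5, 0.5, 0.5), (2, 2, 2)),
    "cover":   ((5, 5, 5), (6, 6, 6)),
}
pairs = candidate_collisions(parts)   # only bolt and bracket boxes intersect
```

Only the surviving pairs are then passed to an exact (and far more expensive) intersection test on the actual geometry.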
4.
ARCHITECTURES
4.1.
General Explanations
Product modeling creates product model data and is seen as a decisive constituent of computer-supported product development activities. Product creation includes all the tasks or steps of product development, from the initial concept to the tested prototypes. During product modeling, a product model database is created that must support all relevant data throughout the product's life cycle. Product modeling is made up of interconnected parts: the product model and the process chain. The product model comprises the product database and its management and access algorithms. The process chain, besides usually being related to the operational sequence of product development, is in this context the set of all product modeling processes required to turn the initial idea into a finished product. The product modeling processes consist of technical and management-related functions. The product model data are the most important outcome of the development and planning activities.

The term product model can be logically interpreted to mean the accumulation of all product-related information within the product life cycle. This information is stored in the form of digitized product model data and is provided with access and management functions. Modeling systems serve for the processing and handling of the product model data. As with many other systems, the underlying architecture of CAD systems is important for extensibility and adaptability to special tasks and other systems. Compatibility with other systems, not just other CAD systems, is extremely important for profitable application. Product data should be
Figure 25 Modeler with Application Modules. (From Krause et al. 1990)
The application modules can be subdivided, according to their area of employment, into general and specialty modules. General application modules for widespread tasks, such as FEM modules or interfaces to widely used program systems, are commercially marketed by the supplier of the CAD system or by a supplier of complementary software. Specially adapted extensions, on the other hand, are usually either created by the user or outsourced. A thoroughly structured and functionally complete application programming interface (API) is a particular requirement, because the user cannot look deeper into the system. Honda implemented a CAD system with around 200 application software packages (Krause and Pätzold 1992), but the user was presented with a Honda interface as a uniform medium for accessing this plethora of programs. This enabled relative autonomy from the underlying software, in that the user, when switching between systems, always dealt with the same user interface. Further autonomy from the base system in use is achieved only when application modules use an interface that is already provided. If this is not the case, the conversion to another base system is costly and time consuming; sometimes it is even cheaper to install a new application module. Often special application modules are required that cannot be integrated seamlessly into the CAD system. Therefore, various automobile manufacturers, such as Ford, General Motors, Nissan, VW, Audi, and Skoda, have implemented special surface modelers beside their standard CAD systems to enable free-form surface designs to be carried out.
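The uniform-interface idea described above (as in the Honda example) is essentially an adapter pattern: application modules program against one small interface, and adapters map it onto whichever CAD backend is installed. All class and method names below are hypothetical; no real CAD vendor API is being invoked.

```python
# Sketch of a uniform interface over interchangeable CAD backends.
# Names are hypothetical; the vendor adapters would wrap real APIs.

from abc import ABC, abstractmethod

class CadBackend(ABC):
    @abstractmethod
    def open_model(self, path: str) -> str: ...

class VendorABackend(CadBackend):
    def open_model(self, path: str) -> str:
        return f"vendor-A handle for {path}"   # would call vendor A's API

class VendorBBackend(CadBackend):
    def open_model(self, path: str) -> str:
        return f"vendor-B handle for {path}"   # would call vendor B's API

def load(backend: CadBackend, path: str) -> str:
    # Application modules depend only on CadBackend, so swapping the
    # underlying CAD system does not force them to change.
    return backend.open_model(path)

handle = load(VendorABackend(), "housing.prt")
```

Exchanging the base system then means writing one new adapter rather than porting every application module, which is exactly the autonomy the text describes.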
4.3.
Shared System Architectures
The scope and time restrictions of most design projects demand the collaboration of many designers. The implementation of shared CAD systems significantly simplifies this teamwork. Here it is possible for several designers to work on the same CIR (computer-internal representation). It is not necessary to copy the CIR to various computers in order to work on it and then manually integrate the changes afterwards; the management of this process is taken over by the CAD system. These tasks, however, remain widely concealed from the user. Before the work begins, only the design area of each designer must be defined in order to avoid overlapping. Such shared CAD systems use a common database, usually accessible from the individual CAD stations through a client-server architecture (Figure 26). Typically an engineering data management system (EDMS), implemented in conjunction with a CAD system, is used. This technique is often applied in local area networks (LANs), but new problems emerge when a changeover is made to wide area networks (WANs):

- The bandwidth available in a WAN is significantly lower than in a LAN and is often too small for existing shared systems.
- Data security must be absolutely ensured. Six aspects must therefore be considered:
  - Access control: exclusive retrieval of data by authorized personnel
  - Confidentiality: prevention of data interception during transmission
bodied, encoded, embrained, embedded, event, and procedural. Holsapple and Whinston (1992) discuss six types of knowledge that are important for knowledge management and decision support systems: descriptive, procedural, reasoning, linguistic, assimilative, and presentation. Moreover, Schreiber et al. (2000) ask "Why bother?" because even physicists will often have difficulty giving an exact definition of energy; this does not prevent them, however, from producing energy and other products.

The concepts most often mentioned by authors in the context of knowledge management are data, information, and knowledge. Some even add wisdom. This classification, if not properly understood and used, could lead to a philosophical discussion of the right distinction between the categories, and the transition from one to the other is not always clear-cut. Instead of a hierarchy, a continuum ranging from data via information to knowledge has proved to be the most practical scheme for knowledge management (Probst et al. 1998; Heisig 2000). Data means the individual facts that are found everywhere in a company. These facts can be easily processed electronically, and gathering large amounts of data is not problematic today. However, this process alone does not lead to appropriate, precise, and objective decisions. Data alone are meaningless. Data become information when they are relevant and fulfill a goal. Relevant information is extracted as a response to a flood of data. However, deciding which knowledge is sensible and useful is a subjective matter: the receiver of information decides whether it is really information or just noise. In order to give data meaning and thus change them into information, it is necessary to condense, contextualize, calculate, categorize, and correct (Tiwana 2000). When data are shared in a company, their value is increased by different people contributing to their meaning. As opposed to data, knowledge has a truth value that can lie anywhere between true and false. Knowledge can be based on assumption, preconception, or belief. Knowledge-management tools must be able to deal with such imprecision (e.g., documentation of experiences).

Knowledge is simply actionable information. Actionable refers to the notion of relevant, and nothing but the relevant information being available in the right place at the right time, in the right context, and in the right way so anyone (not just the producer) can bring it to bear on decisions being made every minute. Knowledge is the key resource in intelligent decision making, forecasting, design, planning, diagnosis, analysis, evaluation, and intuitive judgment making. It is formed in and shared between individual and collective minds. It does not grow out of databases but evolves with experience, successes, failures, and learning over time. (Tiwana 2000, p. 57)
Taking all these aspects into consideration, knowledge is the result of the interaction between information and personal experience. Typical questions for data and information are Who? What? Where? and When? Typical questions for knowledge are How? and Why? (Eck 1997).

One important differentiation is often made between tacit and explicit knowledge. Tacit knowledge is stored in the minds of employees and is difficult to formalize (Polanyi 1962; Nonaka and Takeuchi 1995). Explicit knowledge is the kind that can be codified and transferred. Tacit knowledge becomes explicit by means of externalization. With the introduction of CNC machines in mechanical workshops, experienced and highly skilled workers often felt insecure about their ability to control the process. They missed the "right sound" of the metal and the "good vibrations" of the machine. These signals were absorbed by the new CNC machines, and hence workers were not able to activate their tacit knowledge in order to produce high-quality products (Martin 1995; Carbon and Heisig 1993). Similar problems have been observed with the introduction of other CIM technologies, such as CAD/CAM in the design and process-planning departments and MRP systems for order management. The information supply chain could not fully substitute for the informal knowledge transfer chain between the different departments (Mertins et al. 1993; Fleig and Schneider 1995). A similar observation is quoted from a worker at a paper manufacturing plant: "We know the paper is right when it smells right" (Victor and Boynton 1998, p. 43). However, this kind of knowledge is not found only in craftwork or industrial settings. It can be found in high-tech chip production environments (Luhn 1999) as well as in social settings: from the noise of the pupils, experienced teachers can tell what they have to do in order to progress (Bromme 1999).
5.3.
Knowledge Management Is Business and Process Oriented
Nearly all approaches to knowledge management emphasize the process character of interlinked tasks or activities. The wording and number of knowledge-management tasks given by each approach differ markedly. Probst (1998) proposes eight building blocks: the identification, acquisition, development, sharing, utilization, retention, and assessment of knowledge and the definition of knowledge goals. Another difference is the emphasis given by authors to the steps of the process- or knowledge-management tasks. Nonaka and Takeuchi (1995) describe processes for the creation of knowledge, while Bach et al. (1999) focus on the identification and distribution of explicit, electronically documented objects of knowledge.
The second important step is to set up the link between knowledge management and the general organizational design areas, such as business processes, information systems, leadership, corporate culture, human resource management, and control (Figure 28).

The business processes are the application areas for the core process of knowledge management. Existing knowledge has to be applied and new knowledge generated to fulfill the needs of internal and external customers. The core activities have to be permanently aligned with the operating and value-creating business processes. Furthermore, knowledge-management activities could be linked with existing process-documentation programs (e.g., ISO certification) and integrated into business process reengineering approaches.

Information technology is currently the main driving factor in knowledge management. This is due to considerable technological improvements in the field of worldwide data networking through Internet/intranet technologies. IT builds the infrastructure to support the core activities of storing and distributing knowledge. Data warehouses and data-mining approaches will enable companies to analyze massive databases and thereby contribute to the generation of new knowledge.

Figure 27 Where Companies Start with Knowledge Management and Where They Locate Their Core Competencies.

The success of knowledge-management strategies is to a large degree determined by the support of top and mid-level managers. Therefore, leadership is a critical success factor. Each manager has to promote and personify the exchange of knowledge and to act as a multiplier and catalyst within day-to-day business activities. Special leadership training and change programs have to be applied to achieve the required leadership style. If the knowledge-management diagnosis indicates that the current corporate culture will not sustain knowledge management, wider change-management measures have to be implemented.
Figure 28 Core Process and Design Fields of Knowledge Management.
The required company culture could be characterized by openness, mutual trust, and tolerance of mistakes, which would then be considered necessary costs of learning. Personnel-management measures have to be undertaken to develop specific knowledge-management skills, such as the ability to develop and apply research and retrieval strategies as well as to adequately structure and present knowledge and information. Furthermore, incentives for employees to document and share their knowledge have to be developed. Career plans have to be redesigned to incorporate aspects of employees' knowledge acquisition. Performance-evaluation schemes have to be expanded towards the employees' contribution to knowledge generation, sharing, and transfer.

Each management program has to demonstrate its effectiveness. Therefore, knowledge-controlling techniques have to be developed to support the goal-oriented control of knowledge creation and application with suitable control indicators. While strategic knowledge control supports the determination of knowledge goals, operative knowledge control contributes to the control of short-term knowledge activities.

Empirical results confirmed the great potential for savings and improvements that knowledge management offers (Figure 29). Over 70% of the companies questioned had already attained noticeable improvements through the use of knowledge management. Almost half of these companies had thus saved time and money or improved productivity. About 20% of these companies had either improved their processes, significantly clarified their structures and processes, increased the level of customer satisfaction, or facilitated decisions and forecasts through the use of knowledge management (Heisig and Vorbeck 2001). However, some differences were apparent between the answers provided by service and by manufacturing companies. Twenty-eight percent of the service firms indicated an improvement in customer satisfaction due to knowledge management, as compared with only 16% of the manufacturing companies. Twenty-three percent of manufacturing companies stressed improvements in quality, as compared to only 15% of the service companies. Answers to questions about the clarity of structures and processes showed yet another difference: twenty-six percent of the service companies indicated improvement with the use of knowledge management, as opposed to only 14% of manufacturing companies.
5.4.
Approaches to the Design of Business Process and Knowledge Management
One primary design object in private and public organizations is the business processes that structure work for internal and external clients. Known as business process reengineering (BPR) (Hammer 1993), the design of business processes became the focus of management attention in the 1990s. Various methods and tools for BPR have been developed by research institutes, universities, and consulting companies. Despite these developments, a comparative study of methods for business process redesign conducted by the University of St. Gallen (Switzerland) concludes: "To sum up, we have to state: hidden behind a more or less standard concept, there is a multitude of the most diverse methods. A standardized design theory for processes has still not emerged" (Hess and Brecht 1995, p. 114).

Figure 29 Improvements through Knowledge Management.

BPR's focus is typically on studying and changing a variety of factors, including workflows and processes, information flows and uses, management and business practices, and staffing and other resources. However, most BPR efforts have not focused much on knowledge, if at all. This is indeed amazing considering that knowledge is a principal success factor, or in many judgments the major driving force, behind success. Knowledge-related perspectives need to be part of BPR. (Wiig 1995, p. 257)
Nearly all approaches to knowledge management aim at improving the results of the organization. These results are achieved by delivering a product and/or service to a client. This in turn is done by fulfilling certain tasks, which are linked to each other, thereby forming processes. These processes have been described as business processes. Often knowledge is understood as a resource used in these processes. Nevertheless, very few approaches to knowledge management have explicitly acknowledged this relation, and even fewer have tried to develop a systematic method to integrate knowledge-management activities into the business processes. The following approaches aim to support the analysis and design of knowledge within business processes:

- CommonKADS methodology (Schreiber et al. 2000)
- The business knowledge management approach (Bach et al. 1999)
- The knowledge value chain approach (Weggemann 1998)
- The building block approach (Probst et al. 1998)
- The model-based knowledge-management approach (Allweyer 1998)
- The reference model for knowledge management (Warnecke et al. 1998)

None of the approaches to knowledge management presented here has been developed from scratch. Their origins range from KBS development and information systems design to intranet development and business process reengineering. Depending on their original focus, the approaches still show their current strengths within these particular areas. However, detailed criteria for the analysis and design of knowledge management are generally missing. Due to their strong link to information system design, all approaches focus almost exclusively on explicit and documented knowledge as unstructured information. Their design scope is mainly limited to technology-driven solutions. This is surprising, because the analysis of 30 knowledge work-improvement projects suggests a modified use of traditional business process design approaches and methods, including nontechnical design strategies (Davenport et al. 1996). Only the business knowledge management approach (Bach et al. 1999) covers aspects such as roles and measurements.
5.5.
A Method for Business Process-Oriented Knowledge Management
Since the late 1980s, the division of Corporate Management at the Fraunhofer Institute for Production Systems and Design Technology (Fraunhofer IPK) has developed the method of integrated enterprise modeling (IEM) to describe, analyze, and design processes in organizations (Figure 30) (Spur et al. 1993). Besides traditional business process design projects, this method has been used and customized for other planning tasks, such as quality management (Mertins and Jochem 1999) (Web- and process-based quality manuals for ISO certification), the design and introduction of process-based controlling in hospitals, and benchmarking. The IEM method is supported by the software tool MO2GO (Methode zur objektorientierten Geschäftsprozessoptimierung, method for object-oriented business process optimization).
tasks of knowledge management. The scope is then extended towards the analysis of the relations between the knowledge-processing tasks within the business process. This step contains the evaluation of the degree of connectivity inherent in the core process activities of knowledge management within the selected business process. The result shows whether the business processes supporting knowledge management are connected in a coherent manner. The optimization and new design of business processes aim at closing the identified gaps within the underlying core processes and sequencing the core tasks of knowledge management. One design principle is to use available procedures, methods, tools, and results from the process to design the solution. In the last step of the analysis, the focus shifts from the actions towards the resources used and the results produced within the process. The results of the analysis demonstrate not only which kind of knowledge is applied, generated, stored, and distributed but also the other resources involved, such as employees, databases, and documents. Because the project aims at improvement, the user will be able to evaluate whether the required knowledge is explicitly available or available only via an internal expert using the expert's implicit or tacit knowledge. The identified weaknesses and shortcomings in the business process will be addressed by knowledge-management building blocks consisting of process structures. The improvement measures have to integrate not only actions directed at a better handling of explicit knowledge but also elements that improve the exchange of implicit knowledge.
5.6.
Knowledge-Management Tools
Information technology has been identified as one important enabler of knowledge management. Nevertheless, transfer of information and knowledge occurs primarily through verbal communication. Empirical results show that between 50% and 95% of information and knowledge exchange is verbal (Bair 1998). Computer-based tools for knowledge management therefore improve only a part of the exchange of knowledge in a company, and the richness and effectiveness of face-to-face communication should not be underestimated. Still, computer tools promote knowledge management: the access to knowledge they enable is not subject to time or place. A report can be read in another office a second or a week later. A broad definition of knowledge-management tools would therefore include paper, pencils, and techniques such as brainstorming. According to Ruggles (1997, p. 3), knowledge-management tools are "technologies, which automate, enhance and enable knowledge generation, codification and transfer." We do not address the question whether tools augment or automate knowledge work. E-mail and computer videoconference systems can also be understood as tools for knowledge management. However, we consider this kind of software to be the basic technology, that is, the building blocks for a knowledge-management system. Initially, groupware and intranets are only systems for the management of information. They become knowledge-management tools when a structure, defined processes, and technical additions are included, such as a means of evaluation by users. This is not the place for a discussion about whether software can generate, codify, and transfer knowledge alone or can only aid humans in these activities. For the success of knowledge management, the social aspects of its practical use are very important. For example, a sophisticated search engine alone does not guarantee success as long as the user is not able to search effectively. It is not important for this study whether employees are supported in their knowledge management or whether the tool generates knowledge automatically on its own. This is an important point in the artificial intelligence discussion, but we do not need to go into detail here. Syed (1998) adopts a classification from Hoffmann and Patton (1996) that classifies knowledge techniques, tools, and technologies along the axes of complexity/sophistication and intensity along the human-machine continuum, indicating whether certain tools can handle the complexity of the knowledge in question and what kind of workload this means for the user (Figure 32).
5.6.1.
Technologies for Knowledge Management
The following is an overview of the basic technologies used in every knowledge-management solution; it helps to examine and classify tools more precisely. These are the technologies that we find today in knowledge management (Bottomley 1998). Different knowledge-management tasks can be processed using these basic technologies.
Intranet technology: Intranets and extranets are technologies that can be used to build a knowledge-management system. The unified interface and access to various sources of information make this technology perfect for the distribution of knowledge throughout a company.
Groupware: Groupware is a further substantial technology that is used for knowledge-management systems (Tiwana 2000). Groupware offers a platform for communication within a firm and cooperation between employees.
Machine learning: This technology from the field of artificial intelligence allows new knowledge to be generated automatically. In addition, processes can be automatically optimized over time with little need for human intervention.
Computer-based training: This technology is used to pass on knowledge to colleagues. The spread of implicit knowledge is possible with multimedia applications.
5.6.2.
Architecture for Knowledge Management
Historical classification explains the special use of a certain product or how the manufacturer understands its use. The following historical roots are relevant (Bair and O'Connor 1998):
Tools that are further developments of classical information archives or information retrieval.
Solutions from the field of artificial intelligence, which come into play in the analysis of documents and in automated searches.
Approaches to saving and visualizing knowledge, which also come from classical information archives.
Tools for modeling business processes.
Software that attempts to combine several techniques and support different tasks in knowledge management equally.
The Ovum (Woods and Sheina 1998) approach is an example of a well-structured architectural model. The initial level of the model consists of information and knowledge sources (Figure 33). These are delivered to the upper levels through the infrastructure. Next comes the administration level for the knowledge base, where, for example, access control is handled. The corporate taxonomy defines important knowledge categories within the company. The next layer makes services available for the application of knowledge, such as through visualizing tools, and for collaboration, such as through collaborative filtering. The user interface is described as a portal through which the user can access the knowledge to use it in an application. A further possibility is categorization according to the basic technology from which knowledge-management systems are constructed. Most knowledge-management tools use existing technologies to provide a collaborative framework for knowledge sharing and dissemination. They are implemented using e-mail and groupware, intranets, and information-retrieval and document-management systems. Applications from data warehousing to help desks can be used to improve the quality of knowledge management (Woods and Sheina 1998).
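The layering just described can be summarized in a small sketch; the layer names below paraphrase the description in the text and are not Ovum's exact terminology.

```python
# A minimal sketch of the Ovum-style layered architecture described above,
# from knowledge sources at the bottom to the portal at the top.
# Layer names paraphrase the text; they are not Ovum's exact terminology.
from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    responsibilities: list = field(default_factory=list)

OVUM_STACK = [
    Layer("information and knowledge sources", ["documents", "databases", "people"]),
    Layer("infrastructure", ["delivers sources to the upper levels"]),
    Layer("knowledge-base administration", ["access control"]),
    Layer("corporate taxonomy", ["defines important knowledge categories"]),
    Layer("knowledge services", ["visualizing tools", "collaborative filtering"]),
    Layer("portal (user interface)", ["access point for applications"]),
]

def describe(stack):
    """Return a bottom-up, human-readable view of the architecture."""
    return [f"{i}: {layer.name}" for i, layer in enumerate(stack)]
```

Walking the stack bottom-up mirrors how a request travels from raw sources through taxonomy and services to the user's portal.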
5.7.
Future Trends
Knowledge management is currently a buzzword on the agenda of top management and of the marketing and sales departments of software providers and consulting companies. Nevertheless, decision makers' awareness is increasing. Knowledge is regarded as one of the main factors, or even the main factor, enabling private and public organizations to gain competitive advantage. First experiences from knowledge-management projects show that a win-win situation for companies and employees is possible. By reducing duplicated work, the company saves costs. Moreover, employees increase their experience through continuous learning and their satisfaction through solving new problems rather than reinventing the wheel.
COMPUTER INTEGRATED TECHNOLOGIES AND KNOWLEDGE MANAGEMENT
Knothe, K., and Wessels, H. (1992), Finite Elemente, Springer, Berlin.
Kohlhoff, S., Bläser, S., and Maurer, D. (1994), Ein rechnerischer Ansatz zur Untersuchung von Fahrzeug-Fahrzeug-Frontalkollisionen zur Entwicklung von Barrierentests, VDI-Berichte No. 1153, VDI, Düsseldorf.
König, W., Celi, I., Celiker, T., Herfurth, H.-J., and Song, Y. (1994), "Rapid Metal Prototyping," VDI-Z, Vol. 136, No. 7/8, pp. 57-60.
Krause, F.-L., Jansen, H., Bienert, M., and Major, F. (1990), "System Architectures for Flexible Integration of Product Development," in Proceedings of the IFIP WG 5.2/GI International Symposium (Berlin, November 1989), Elsevier Science Publishers, Amsterdam, pp. 421-440.
Krause, F.-L. (1992), "Leistungssteigerung der Produktionsvorbereitung," in Proceedings of the Produktionstechnischen Kolloquiums (PTK) Berlin, Markt, Arbeit und Fabrik (Berlin), pp. 166-184.
Krause, F.-L. (1996), "Produktgestaltung," in Betriebshütte Produktion und Management, 7th Ed., W. Eversheim and G. Schuh, Eds., Springer, Berlin, pp. 7-34 to 7-73.
Krause, F.-L., and Pätzold, B. (1992), Automotive CAD/CAM-Anwendungen im internationalen Automobilbau, Teil 1: Japan, Gemeinschaftsstudie von IPK-Berlin und Daimler-Benz F&E.
Krause, F.-L., Bock, Y., and Rothenburg, U. (1999), "Application of Finite Element Methods for Digital Mock-up Tasks," in Proceedings of the 32nd CIRP International Seminar on Manufacturing Systems: New Supporting Tools for Designing Products and Production Systems (Leuven, May 24-26), H. Van Brussel, J.-P. Ruth, and B. Lauwers, Eds., Katholieke Universiteit Leuven, Heverlee, pp. 315-321.
Krause, F.-L., Ciesla, M., and Stephan, M. (1994), "Produktionsorientierte Normung von STEP," ZwF, Vol. 89, No. 11, pp. 536-539.
Krause, F.-L., Jansen, H., and Vollbach, A. (1996), "Modularität von EDM-Systemen," ZwF, Vol. 91, No. 3, pp. 109-111.
Krause, F.-L., Ulbrich, A., and Mattes, W. (1993), "Steps towards Concurrent Engineering," in Proceedings & Exhibition Catalog, ILCE '93 (Montpellier, March 22-26), pp. 37-48.
Kriwet, A. (1995), Bewertungsmethodik für die recyclinggerechte Produktgestaltung, Dissertation, Technical University of Berlin, Reihe Produktionstechnik Berlin, Vol. 163, Carl Hanser, Munich.
Kruth, J. P. (1991), "Material Incress Manufacturing by Rapid Prototyping Techniques," Annals of the CIRP, Vol. 40, No. 2, pp. 603-614.
Luhn, G. (1999), Implizites Wissen und technisches Handeln am Beispiel der Elektronikproduktion, Meisenbach, Bamberg.
Martin, H. (1995), CeA: Computergestützte erfahrungsgeleitete Arbeit, Springer, Berlin.
McKay, A., Bloor, S., and Owen, J. (1994), "Application Protocols: A Position Paper," in Proceedings of European Product Data Technology Days, International Journal of CAD/CAM and Computer Graphics, Vol. 3, No. 9, pp. 377-338.
Mertins, K., Heisig, P., and Vorbeck, J. (2001), Knowledge Management: Best Practice in Europe, Springer, Berlin.
Mertins, K., and Jochem, R. (1999), Quality-Oriented Design of Business Processes, Kluwer Academic Publishers, Boston.
Mertins, K., Schallock, B., Carbon, M., and Heisig, P. (1993), "Erfahrungswissen bei der kurzfristigen Auftragssteuerung," Zeitschrift für wirtschaftliche Fertigung, Vol. 88, No. 2, pp. 78-80.
Nonaka, I., and Takeuchi, H. (1995), The Knowledge-Creating Company, Oxford University Press, Oxford.
Nowacki, H. (1987), "Schnittstellennormung für gegenstandsdefinierenden Datenaustausch," DIN-Mitteilungen, Vol. 66, No. 4, pp. 182-186.
Pahl, G., and Beitz, W. (1993), Konstruktionslehre: Methoden und Anwendungen, 3rd Ed., Springer, Berlin.
Pawellek, G., and Schulte, H., "Logistikgerechte Konstruktion: Auswirkungen der Produktgestaltung auf die Produktionslogistik," Zeitschrift für Logistik, Vol. 8, No. 9.
Ploenzke A. G. (1994), Engineering Data Management Systems, 3rd Ed., Technology Report, Ploenzke Informatik Competence Center Industrie, Kiedrich/Rheingau.
Ploenzke A. G. (1997), "Mit der Schlüsseltechnologie EDM zum Life Cycle Management," in Proceedings of the CSC PLOENZKE Congress (Mainz).
Polanyi, M. (1966), The Tacit Dimension, Routledge and Kegan Paul, London.
Probst, G., Raub, S., and Romhardt, K. (1998), Wissen managen: Wie Unternehmen ihre wertvollste Ressource optimal nutzen, 2nd Ed., Frankfurter Allgemeine Zeitung GmbH, Thomas Gabler, Frankfurt am Main.
Rainer, G. (1992), "IGES: Einsatz bei firmenübergreifenden Entwicklungskonzepten," CAD-CAM Report, Vol. 11, No. 5, pp. 132-140.
Rathnow, P. J. (1993), Integriertes Variantenmanagement, Vandenhoeck & Ruprecht, Göttingen.
Reinwald, B. (1995), Workflow-Management in verteilten Systemen, Dissertation, University of Erlangen, 1993, Teubner-Texte zur Informatik, Vol. 7, Teubner, Stuttgart.
Ruggles, R. L. (1997), Knowledge Management Tools, Butterworth-Heinemann, Boston.
Scheder, H. (1991), "CAD-Schnittstellen: ein Überblick," CAD-CAM Report, Vol. 10, No. 10, pp. 156-159.
Schönbach, T. (1996), "Einsatz der Tiefziehsimulation in der Prozeßkette Karosserie," VDI-Berichte No. 1264, VDI, Düsseldorf, pp. 405-421.
Scholz-Reiter, B. (1991), CIM-Schnittstellen: Konzepte, Standards und Probleme der Verknüpfung von Systemkomponenten in der rechnerintegrierten Produktion, 2nd Ed., Oldenbourg, Munich.
Schreiber, G., Akkermans, H., Anjewierden, A., de Hoog, R., Shadbolt, N., Van de Velde, W., and Wielinga, B. (2000), Knowledge Engineering and Management: The CommonKADS Methodology, MIT Press, Cambridge.
Skyrme, D. J., and Amidon, D. M. (1997), Creating the Knowledge-Based Business, Business Intelligence, London.
Spur, G., and Krause, F.-L. (1984), CAD-Technik, Carl Hanser, Munich.
Spur, G., Mertins, K., and Jochem, R. (1993), Integrierte Unternehmensmodellierung, Beuth, Berlin.
Stanek, J. (1989), "Graphische Schnittstellen der offenen Systeme," Technische Rundschau, Vol. 81, No. 11, pp. 38-43.
Stephan, M. (1997), "Failure-Sensitive Product Development," in Proceedings of the 1st IDMME Conference: Integrated Design and Manufacturing in Mechanical Engineering (Nantes, April 15-17, 1996), P. Chedmail, J.-C. Bocquet, and D. Dornfeld, Eds., Kluwer Academic Publishers, Dordrecht, pp. 13-22.
Syed, J. R. (1998), "An Adaptive Framework for Knowledge Work," Journal of Knowledge Management, Vol. 2, No. 2, pp. 59-69.
Szabo, B. A. (1994), "Geometric Idealizations in Finite Element Computations," Communications in Applied Numerical Methods, Vol. 4, No. 3, pp. 393-400.
Tiwana, A. (2000), The Knowledge Management Toolkit, Prentice Hall, Upper Saddle River, NJ.
VDI Gesellschaft Entwicklung Konstruktion Vertrieb (VDI-EKV) (1992), Wissensbasierte Systeme für Konstruktion und Arbeitsplanung, Gesellschaft für Informatik (GI), VDI, Düsseldorf.
VDI-Gesellschaft Entwicklung Konstruktion (1993), VDI-Handbuch Konstruktion: Methodik zum Entwickeln und Konstruieren technischer Systeme und Produkte, VDI, Düsseldorf.
Victor, B., and Boynton, A. C. (1998), Invented Here: Maximizing Your Organization's Internal Growth and Profitability, Harvard Business School Press, Boston.
von Krogh, G., and Venzin, M. (1995), "Anhaltende Wettbewerbsvorteile durch Wissensmanagement," Die Unternehmung, Vol. 1, pp. 417-436.
Wagner, M., and Bahe, F. (1994), "Austausch von Produktmodelldaten: zukunftsorientierte Prozessoren auf Basis des neuen Schnittstellenstandards STEP," in CAD '94: Produktdatenmodellierung und Prozeßmodellierung als Grundlage neuer CAD-Systeme, Tagungsband der Fachtagung der Gesellschaft für Informatik (Paderborn, March 17-18), J. Gausemeier, Ed., Carl Hanser, Munich, pp. 567-582.
Warnecke, G., Gissler, A., and Stammwitz, G. (1998), "Wissensmanagement: Ein Ansatz zur modellbasierten Gestaltung wissensorientierter Prozesse," Information Management, Vol. 1, pp. 24-29.
Weck, M., and Heckmann, A. (1993), "Finite-Elemente-Vernetzung auf der Basis von CAD-Modellen," Konstruktion, Vol. 45, No. 1, pp. 34-40.
Weggeman, M. (1998), Kenntnismanagement: Inrichting en besturing van kennisintensieve organisaties, Scriptum, Schiedam.
Wiig, K. M. (1995), Knowledge Management Methods: Practical Approaches to Managing Knowledge, Vol. 3, Schema Press, Arlington, TX.
Wiig, K. M. (1997), "Knowledge Management: Where Did It Come from and Where Will It Go?," Expert Systems with Applications, Vol. 13, No. 1, pp. 1-14.
Willke, H. (1998), Systemisches Wissensmanagement, Lucius & Lucius, Stuttgart.
Woods, M. S., and Sheina, M. (1998), Knowledge Management: Applications, Markets and Technologies, Ovum, London.
II.B
Manufacturing and Production Systems
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
CHAPTER 10 The Factory of the Future: New Structures and Methods to Enable Trans
formable Production
HANS-JÜRGEN WARNECKE
Fraunhofer Society
WILFRIED SIHN
Fraunhofer Institute for Manufacturing Engineering and Automation
RALF VON BRIEL
Fraunhofer Institute for Manufacturing Engineering and Automation
1. THE CURRENT MARKET SITUATION: NOTHING NEW
2. TRANSFORMABLE STRUCTURES TO MASTER INCREASING TURBULENCE
3. CORPORATE NETWORK CAPABILITY
3.1. Internal Self-Organization and Self-Optimization
4. NEW METHODS FOR PLANNING AND OPERATING TRANSFORMABLE STRUCTURES
4.1. Process Management through Process Modeling, Evaluation, and Monitoring
4.2. Integrated Simulation Based on Distributed Models and Generic Model Building Blocks
4.3. Participative Planning in the Factory of the Future
4.4. The Integrated Evaluation Tool for Companies
5. CONCLUSION
REFERENCES
ADDITIONAL READINGS
1.
THE CURRENT MARKET SITUATION: NOTHING NEW
The renowned American organization scientist Henry Mintzberg has discovered that companies and their managers have been complaining of market turbulence and high cost and competition pressures for more than 30 years (Mintzberg 1998). Market turbulence is thus not a new phenomenon, he concludes, and companies should be able to cope with it. However, there are only a few practical examples to back Mintzberg's claim. Practical experience and research results differ not because market turbulence forces enterprises to adapt but because the speed of change adds a new dimension to market turbulence. The results of the latest Delphi study show that the speed of change has increased considerably in the last 10 years (Delphi-Studie 1998). The five charts in Figure 1 show the primary indicators used to measure the growing turbulence in the manufacturing environment. The change in mean product life shows that innovation speed has increased considerably in the last 15 years. This change can be validated with concrete figures from several industries. A recent Siemens survey shows that sales of products older than 10 years have dropped by two thirds over the last 15 years and now amount to only 7% of the company's total turnover. In contrast, the share of products younger than 5 years has increased by more than 50% and accounts for almost 75% of the current Siemens turnover (see Figure 2).
Figure 1 Indicators Measuring Market Turbulence: indicator 1, innovation (growing speed of innovation); indicator 2, price (decline in prices after product launch); indicator 3, complexity (growing product complexity through integration of functions or higher quality standards); indicator 4, forecasts (capability to forecast the future market demand and the production capacity); indicator 5, importance of life-cycle aspects (change in life-cycle relevance for product and production).
However, long-range studies are not the only means to identify and verify the changes expressed by the indicators. A short-term analysis, for example, can also serve to prove that change is the main cause of turbulence. The aim of one such analysis is to predict the fluctuations in sales for a medium-sized pump manufacturer. During the first half of the year, the sales trend of some product groups
Figure 2 The Changing Life Cycle of Siemens Products (from Kuhnert 1998): between 1979/80, 1984/85, and 1995/96, sales of products younger than five years rose from 48% to 55% to 74%; sales of products between six and ten years fell from 30% to 29% to 19%; sales of products older than ten years fell from 22% to 16% to 7%.
2. TRANSFORMABLE STRUCTURES TO MASTER INCREASING TURBULENCE
Recent restructuring projects have shown that a company's competitiveness can be enhanced through improved technology and, in particular, through the efficient combination of existing technology with new company structures. Factories of the future must have transformable structures in order to master the increased turbulence that began in the 1990s. Factories must also possess two important features reflecting two current trends in structural change: they must be capable of external networking of their structures, and they must self-organize and self-optimize their structures and processes internally. External networking helps to dissolve company borders that are currently insurmountable and to integrate individual enterprises into company networks. Self-organization and self-optimization are intended to enhance the competencies of the value-adding units in accordance with the corporate goal and thus speed up the decision-making and change processes of the enterprise.
3.
CORPORATE NETWORK CAPABILITY
Corporate network capability describes the capacity of an enterprise to integrate both itself and its core competencies into the overall company network. This cooperation is continually optimized to benefit both the individual company and the network. Companies with this capacity find that their transformability increases in two respects. First, they can focus on their core competencies and simultaneously profit from the company network's integrated and comprehensive service range. Second, the information flow in the company network is sped up by the continual networking of suppliers and customers. The advantage of the latter is that companies are provided with information concerning market changes and adjustments in consumer and supplier markets in due time. They can thus respond proactively to technological and market changes. These company networks can take four forms, as shown in Figure 5. Companies are not limited to only one of the four types. Instead, each company can be involved in several company network types on a horizontal as well as a vertical level. In the automobile industry, for example, it is common practice for companies to cooperate in regional networks with companies on the same value-adding level and at the same time be part of the global supply chain of an automobile manufacturer. In other words, companies do not necessarily have to focus on one cooperative arrangement. Different business units might establish different cooperative arrangements. Behr is an example of an automobile supplier with high network capability. The company's focus is vehicle air conditioning and motor cooling. Its customers include all the large European car manufacturers. In order to offer its customers a comprehensive service range, Behr is involved in numerous
Figure 5 Basic Types of Company Networks: the regional network; the strategic network (supplier, central enterprise or manufacturer, logistics provider, distributor); the virtual enterprise (a broker coordinating designer, manufacturer, supplier, logistics provider, marketer, and distributor); and the operative network (supplier, manufacturers, logistics provider, distributor, with a coordinating company if necessary).
Figure 7 Common Target System to Coordinate Semiautonomous Organizational Units: goal agreements link the supplier, the company's units (purchase, component manufacturing, manufacturing, sales/marketing, and sales/distribution), and the customer along the material flow relations of the production system.
these projects have been further analyzed, showing increased corporate transformability and a significant change in organizational parameters. A manufacturer of packaging machinery was among the companies analyzed. It exemplifies the changes in the organizational structure and the transition to self-optimizing structures. Before the organizational changes were implemented, the company had a classical functional organization structure (see Figure 9). This structure led to a high degree of staff specialization, which was unsatisfactory in terms of the value-adding processes. For example, six departments and a minimum of 14 foremen and their teams were involved in shipping a single packaging machine. In the course of the change process, semiautonomous units were created based on the idea of the fractal company. These units performed all functions necessary for processing individual customer orders. Part of the new structure included organizational units that focused on product groups for customized assembly processes. The interfaces of the latter no longer serve to carry out individual
Figure 8 Basic Change in Self-optimizing Structures (from Kinkel and Wengel 1998): a high rate of job specialization, detailed descriptions of operational sequences, authority and control exercised on the process level, and long communication channels give way to teamwork, teams as the most important organizational unit, delegation and participation, planning and control by the teams themselves, definition of output as the most important means of organization, power through identification, and short communication channels.
Figure 11 The Structure of the Methods Presented: integrated process management, distributed simulation through generic building blocks, participative planning, and an integrated evaluation tool, all supporting the change of corporate organization towards network capability and self-organization and self-optimization.
managed, it is not sufficient to map the process once; rather, an integrated and holistic approach must be used. It is necessary that process modeling be understood by all those involved in the process. At the process evaluation and optimization stage, the structures of the process model are evaluated and optimized from different points of view before being passed on to the continuous process monitoring stage (see Figure 12).

A great number of tools for process modeling are currently available on the market and applied within companies. These tools all have the capacity to map the corporate process flow using predefined process building blocks. Depending on the necessary degree of specification, the processes can be depicted in greater detail over several levels. Each process module can be equipped with features such as frequency, duration, and specific conditions to enable evaluation of the workflow as a whole. In addition, the process steps can be linked through further modules such as required resources and required and created documents. However, none of the tools can provide specific instructions as to the extent of the model's detail. The specification level depends, for the most part, on the interest of the persons concerned, including both the partners in the corporate network and the semiautonomous organizational units within the individual companies. In any case, the interfaces between the organizational units and the network partners must be adequately specified so that the required results to be delivered at the interfaces are known. However, a detailed description of individual activities in the organizational units is not necessary.

The evaluation and optimization of processes can be based on the performance results provided at the interfaces. Performance results refers primarily to the expected quality, agreed deadlines, quantities and prices, and maximum costs. So that the process efficiency and the process changes with regard to the provided results can be evaluated, the immediate process figures have to be aggregated and evaluated in terms of the figures of other systems. This leads to new costing tasks. An important leverage point in this respect is the practical use of process cost calculation and its integration into process management (von Briel and Sihn 1997).

At the monitoring stage, it is necessary to collect up-to-date process figures. Based on these key figures, cost variance analyses can be carried out that allow the relevant discrepancies between desired and actual performance to be determined and illustrated by means of intelligent search and filter functions. The relevant figures are then united and aggregated to create global performance characteristics. Due to the locally distributed data and their heterogeneous origins, monitoring processes that encompass several companies is problematic. To solve this problem, the Fraunhofer Institute developed the supply chain information system (SCIS), which is based on standard software components and uses the Internet to enable the continuous monitoring of inventory, quality, and delivery deadlines in a multienterprise supply chain. The person responsible for the product stores the data of each supplier separately in a database. This product manager is therefore in a position to identify and mark critical parts on the basis of objective criteria (ABC analyses, XYZ analyses) and practical experience. It is then possible to determine dates (e.g., inventory at a certain point in time) for critical parts and carry out time analyses (e.g., deadline analyses for a certain period of time, trend analyses). These analyses are used to identify logistical bottlenecks in the supply chain. Moreover, the analyses can be passed on to the supply chain partners, thus maintaining the information flow within the supply chain.
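As an illustration of how critical parts might be marked on objective criteria, the following sketch implements a conventional ABC analysis; the 80%/95% class boundaries and the part names are illustrative assumptions, not values prescribed by the SCIS.

```python
# Sketch of an ABC analysis over parts, as used to mark critical parts.
# The 80%/95% cumulative-value boundaries are conventional defaults,
# not values prescribed by the SCIS described in the text.

def abc_classify(parts, a_limit=0.80, b_limit=0.95):
    """parts: dict mapping part id -> annual consumption value.
    Returns a dict mapping part id -> 'A', 'B', or 'C'."""
    total = sum(parts.values())
    # Rank parts by value, highest first, then walk the cumulative share.
    ranked = sorted(parts.items(), key=lambda kv: kv[1], reverse=True)
    classes, cumulative = {}, 0.0
    for part_id, value in ranked:
        cumulative += value
        share = cumulative / total
        if share <= a_limit:
            classes[part_id] = "A"
        elif share <= b_limit:
            classes[part_id] = "B"
        else:
            classes[part_id] = "C"
    return classes

# Hypothetical consumption values for a pump manufacturer's parts:
demand = {"pump-housing": 60000, "impeller": 25000, "seal-kit": 10000,
          "gasket": 3000, "label": 2000}
print(abc_classify(demand))
```

Parts classified "A" would then be candidates for the inventory-date and trend analyses the text describes; an XYZ analysis of demand regularity could be layered on top in the same way.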
4.2. Integrated Simulation Based on Distributed Models and Generic Model Building Blocks
Simulation is another important method for increasing the transformability of systems. With the help of simulation, production systems can be evaluated not only during standard operation but also during start-up and fluctuation periods. Simulation is therefore an ideal planning instrument for a turbulent environment characterized by specific phenomena. However, in practice, the use of simulation is increasing only slowly due to the need for accompanying high-performance computer systems and reservations regarding the creation of simulation models, the pooling of know-how, and the maintenance of the models. The main problem remains the nonrecurring creation of simulation modules and the need to create a problem-specific overall model.

Two simulation methods can be used to speed up the planning process. The first method generates company- and industry-specific modules that can be put together to create individual simulation models. This is achieved by simply changing the parameters without having to build a completely new planning model. The modules can be continuously refined until the user no longer needs to adapt the individual modules but merely has to modify the parameters of the overall model (see Figure 13). The model of a production system developed by the Fraunhofer Institute shows that two basic building blocks suffice to create a structured model of 10 different types and 25 pieces of machinery. Within two weeks, 25 variations of the production system could be analyzed in collaboration with the production system staff.

The second method enhances the creation of simulation models by allowing distributed models to be built that interact through a common platform or protocol. Thus, various problem-specific tools can be put into action so that there is no need to apply a single tool for all problems. Moreover, the modeling effort and maintenance costs are shared by all participants. Accordingly, every semiautonomous corporate unit and partner in the corporate network can maintain and adapt its own model. On the other hand, because a common platform ensures that the models are consistent and executable, corporate management doesn't lose control over the overall model. The introduction of the HLA communication standard for an increasing number of simulation systems fulfills the principal system requirements for networked simulation.
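The first method can be illustrated with a minimal sketch: a generic, parameterized machine block from which line variants are assembled by changing parameters only. All names and figures here are illustrative assumptions, not taken from the Fraunhofer model.

```python
# Sketch of the building-block idea: a generic machine module that is
# configured by parameters only, so a new line variant needs parameter
# changes rather than a new model. Names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    cycle_time_s: float   # processing time per piece, in seconds
    availability: float   # fraction of time the machine is up

    def effective_rate(self):
        """Pieces per hour, discounted by availability."""
        return 3600.0 / self.cycle_time_s * self.availability

def line_throughput(machines):
    """For a serial line, the slowest effective machine sets the pace."""
    return min(m.effective_rate() for m in machines)

# Variant A of a line, assembled purely from parameterized blocks;
# variant B would reuse the same blocks with different parameters.
variant_a = [Machine("turning", 45, 0.90),
             Machine("milling", 60, 0.85),
             Machine("assembly", 50, 0.95)]
print(round(line_throughput(variant_a), 1))  # bottleneck: milling
```

A discrete-event tool would replace the static rate formula with stochastic processing times, but the configuration principle, varying parameters instead of rebuilding the model, stays the same.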
4.3.
Participative Planning in the Factory of the Future
The participative planning method is used to reduce the number of interfaces that arise when a complex planning task is solved, that is, one that involves many planning partners and their individual know-how. It also improves information flow at the interfaces, allows planning processes to be carried out simultaneously, and prevents double work in the form of repeated data input and data transfer. Participative planning is based on the theory that cooperation considerably increases efficiency in finding solutions for complex problems. The basic principle of participative planning is not new. It was used in factory and layout planning long before the introduction of computer assistance. Previously, team meetings used paper or metal
Figure 13 Modeling with Predefined Building Blocks.
Figure 15 The Principal Structure of the Balanced Scorecard. Each of the four perspectives tracks goals, measures, target values, and actions: earning power ("What performance measures should be used to identify corporate profitability?"), customer ("How should we present ourselves to our customers to ensure our earning power?"), processes ("What resources have to be accelerated to ensure our earning power?"), and innovation ("What can we do to maintain and enhance our innovative power in the future?").
How does the current profit situation of the company compare to the market situation? How is the company's market attractiveness rated by its customers? What is the current situation in the company's value-adding processes? What improvements can be made in corporate processes and products, and what is the innovative power of the company?
Due to the depth and variety of these questions and their differing time horizons, this tool helps a company recognize changes in the corporate situation at an early stage and initiate appropriate actions. The balanced scorecard thus serves as an early warning system for a company so that it can actively respond to market changes. The tool also allows the current profit and market situation of the company to be taken into consideration. Thus, the balanced scorecard helps the company avoid a short-range view of the profit situation and expand it to include vital aspects of corporate management.
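The four-perspective structure, in which each perspective carries goals, measures, target values, and actions, can be sketched as a plain data structure. The following is illustrative only; all perspective goals, measures, and numbers are invented for the example:

```python
# Hypothetical balanced-scorecard skeleton: four perspectives, each tracking
# a goal, a measure, a target value, and actions (after Kaplan and Norton).
scorecard = {
    "earning power": {"goal": "raise profitability",
                      "measure": "return on capital (%)",
                      "target": 12.0, "actions": ["review product mix"]},
    "customer":      {"goal": "improve market attractiveness",
                      "measure": "customer satisfaction index",
                      "target": 85.0, "actions": ["shorten response times"]},
    "processes":     {"goal": "accelerate value-adding processes",
                      "measure": "on-time delivery rate (%)",
                      "target": 95.0, "actions": ["parallelize planning steps"]},
    "innovation":    {"goal": "maintain innovative power",
                      "measure": "share of sales from new products (%)",
                      "target": 25.0, "actions": ["fund pilot projects"]},
}

def behind_target(card, actuals):
    """Early-warning check: perspectives whose actual value misses the target."""
    return [p for p, spec in card.items() if actuals.get(p, 0) < spec["target"]]

# Two perspectives miss their targets, so both are flagged early.
flagged = behind_target(scorecard, {"earning power": 9.5, "customer": 90,
                                    "processes": 92, "innovation": 30})
assert flagged == ["earning power", "processes"]
```

The early-warning character of the scorecard is exactly this kind of comparison of actual values against target values across all four perspectives at once.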
5.
CONCLUSION
Based on Peter Drucker's now generally recognized thesis that only the uncertain is certain in the future markets of manufacturing enterprises (Drucker 1992), it appears safe to forecast that manufacturing structures will change fundamentally. In this context, the transformability of future manufacturing structures will become an important feature, enabling quick and proactive response to changing market demands. In the scenario of transformable manufacturing structures, the focus will no longer be on the "computer-directed factory without man" of the 1980s but on the accomplishment of factory structures that employ the latest technology. Human beings, with their unique power for associative and creative thinking, will then play a crucial part in guaranteeing the continuous adaptation of factory structures. Implementing transformability requires the application of new manufacturing structures in factories and enterprises. These structures are distinguished by their capacity for external networking and increased internal responsibility. Combining these new structures with new design and operation methods will lead to factories exceeding the productivity of current factory structures by quantum leaps. However, this vision will only become reality if both strategies, new structures and new methods, are allowed to back each other up rather than being regarded as isolated elements.
REFERENCES
Delphi-Studie (1998), Studie zur globalen Entwicklung von Wissenschaft und Technik.
Drucker, P. (1992), Managing for the Future, Dutton, New York.
Kaplan, R. S., and Norton, D. P. (1996), The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston.
Kinkel, S., and Wengel, J. (1998), "Produktion zwischen Globalisierung und regionaler Vernetzung," in Mitteilung aus der Produktionsinnovationserhebung, Fraunhofer-Institut für Systemtechnik und Innovationsforschung, No. 10.
Kuhnert, W. (1998), "Instrumente einer Erfolgsorganisation," Proceedings of the 6th Stuttgarter Innovationsforum: Wettbewerbsfaktor Unternehmensorganisation (October 14-15, 1998).
Mintzberg, H. (1994), "That's not Turbulence, Chicken Little, It's Really Opportunity," Planning Review, November/December, pp. 7-9.
von Briel, R., and Sihn, W. (1997), "Process Cost Calculation in a Fractal Company," International Journal of Technology Management, Vol. 13, No. 1, pp. 68-77.
Westkämper, E. (1997), "Wandlungsfähige Unternehmensstrukturen," Logistik Spektrum, Vol. 9, No. 3, pp. 10-11 and No. 4, pp. 7-8.
Westkämper, E., Hüser, M., and von Briel, R. (1997), "Managing Restructuring Processes," in Challenges to Civil and Mechanical Engineering in 2000 and Beyond, Vol. 3: Proceedings of the International Conference (Wroclaw, Poland, June 25, 1997), Breslau.
ADDITIONAL READINGS
Preiss, K., Ed., Agile Manufacturing: A Sampling of Papers Presented at the Sixth National Agility Conference, San Diego, CA (March 5-7, 1997).
Warnecke, H. J., Aufbruch zum fraktalen Unternehmen, Springer, Berlin, 1995.
ENTERPRISE MODELING
processes. Such subsystems of the real world are called "mini world" or "model domain." Thus, models are descriptions of the most important characteristics of a focused domain. The creative process of building such an abstracted picture of a real-world system is called modeling (see Figure 1). In addition to the actual purpose of modeling, the modeling methods applied also determine the focus of the model. This is particularly true when a certain type of method has already been defined. Models reproduce excerpts of reality. They are created by abstracting the properties of real objects, whereas their essential structures and behavior remain intact (homomorphy). Not only the content-related purpose of the model but also the permissible illustration methods determine to what extent nonessential characteristics may be abstracted. For example, if an object-oriented modeling approach or a system-theoretic approach is selected, modeling only leads to objects applicable to the syntax or the semantics of these particular methods. In order to avoid lock-in by certain methods, architectures and methodologies are developed independently of any particular method, while supporting a generic business process definition. In the following sections, we will discuss modeling methods and architectures in greater detail.
1.2.
Levels of Abstraction
Abstraction is the basic concept of modeling. In system theory, we know many per
spectives of abstraction. Thus, we can concentrate either on the structure or be
havior of enterprises; on certain elements of the enterprise, such as data, empl
oyees, software, products; or on a bounded domain, such as sales, accounting, ma
nufacturing. Typically, in enterprise modeling projects, several of these perspe
ctives are combined. For instance, a typical modeling project might be "Describe the sales data structures." However, the term "level of abstraction" describes the relationships between a model system and the respective real-world system. That is,
it describes the steps of abstraction. From low to high abstraction we can disti
nguish at least three levels in modeling: instances, application classes, and me
ta classes. At the instance level, each single element of the real world is repr
esented through one single element in our model. For example, in the case of mod
eling an individual business process, every element involved in the business process is instantiated by an affixed name, such as customer (no.) 3842, material M 32, or completed order O 4711 (see Figure 2). Enterprise modeling at the instance level is used for controlling individual business processes. In manufacturing, this is done for creating work schedules as the manufacturing process descriptions for individual parts or manufacturing orders. In office management, individual business processes are executed through workflow systems. Therefore, they must have
access to information regarding the respective control structure and responsibl
e entities or devices for every single business case. At the application class l
evel, we abstract from instance properties and define classes of similar entities. For example, all individual customers make up the class "customer," all instances of orders constitute the class "order," and so on (see Figure 2). Every class is characterized by its name and the enumeration of its attributes, by which the instance is
described. For example, the class customer is characterized by the attributes customer number, customer name, and payment period.

Figure 1 Modeling.

Figure 2 Levels of Abstraction.

The instances of these characteristics are the focus of the description at Level 1. Finding classes is always a creative task. It thus depends on the
subjective perspective of the modeler. Therefore, when defining, for example, order designations, we will only abstract specific properties of cases 4711 or 4723, respectively, leading to the classes "completed order" or "finished order." At Level 2, we abstract the "completed" and "finished" properties and create the parent class order from them. This operation is known as generalization and is illustrated by a triangular symbol.
ENTERPRISE MODELING
283
When quantities are generalized, they are grouped into parent quantities. This makes order instances of Level 1 instances of the class "order" as well. The class order is assigned the property "order status," making it possible to allocate the process state "completed" or "finished" to every instance. Materials and items are also generalized, making them parts and resources. Thus, Level 2 contains application-related classes of enterprise descriptions. On the other hand, with new classes created from similar classes of Level 2 by abstracting their application relationships, these are allocated to Level 3, the meta level, as illustrated in Figure 2. Level 2 classes then become instances of these meta classes. For example, the class "material output" contains the instances material and item as well as the generalized designation part. The class "information services" contains the class designation "order," along with its two child designations, and the class designation "certificate." The creation of this class is also a function of its purpose. Thus, either the generalized classes of Level 2 or their subclasses can be included as elements of the meta classes. When classes are created, overlapping does not have to be avoided at all costs. For example, from an output flow point of view, it is possible to create the class "information services" from the classes "order" and "certificate." Conversely, from the data point of view, these are also data objects, making them instances of the class "data objects" as well. The classes at modeling Level 3 define every object necessary for describing the facts at Level 2. These objects make up the building blocks for describing the applications at Level 2. On the other hand, because the classes at Level 2 comprise the terminology at Level 1, objects at Level 3 are also the framework for describing instances. This abstraction process can be continued by once again grouping the classes at Level 3 into classes that are then allocated to the meta2 level. Next, the content-related modeling views are abstracted. In Figure 2, the general class "object type" is created, containing all the meta classes as instances.
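The three abstraction levels have a rough analogue in a general-purpose language. As an illustration only (the class and instance names are invented, and Python's instance/class/metaclass machinery is used as an analogy, not as the chapter's own notation), Levels 1 to 3 can be sketched like this:

```python
# Level 3 (meta level): a meta class whose instances are application classes.
class ObjectType(type):
    """Meta class: every application class defined with it becomes its instance."""
    registry = []                      # collects the Level 2 classes
    def __new__(mcls, name, bases, ns):
        cls = super().__new__(mcls, name, bases, ns)
        mcls.registry.append(cls)
        return cls

# Level 2 (application classes): instances of the meta class.
class Order(metaclass=ObjectType):
    def __init__(self, number, status):
        self.number = number           # attribute describing the instance
        self.status = status           # property "order status": e.g. "completed"

class CompletedOrder(Order):           # specialization; Order is the parent class
    def __init__(self, number):
        super().__init__(number, "completed")

# Level 1 (instances): single elements of the real world.
o = CompletedOrder("O 4711")
assert isinstance(o, Order)            # generalization: o is an Order as well
assert Order in ObjectType.registry    # a Level 2 class is a Level 3 instance
```

The generalization step in the text (grouping "completed order" and "finished order" under the parent class order) corresponds to the subclass relationship, while the meta level corresponds to classes being themselves instances of a meta class.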
1.3.
Principles of Modeling
Enterprise modeling is a creative process and can therefore not be completely di
rected by rules. However, if certain standards are observed, it is indeed possible to classify and understand third-party models. Furthermore, it is also a good
idea to establish certain quality standards for enterprise models. Figure 3 illu
strates the relationship between principles and enterprise modeling. A principle
is a fundamental rule or strategy that guides the entire process of enterprise
modeling. Thus, modeling
Figure 3 Principles of Modeling.
principles concern the association between the domain and the respective model a
s well as the selection of appropriate modeling methods and tool support. A mode
ling method is a set of components and a description of how to use these compone
nts in order to build a model. A modeling tool is a piece of software that can b
e used to support the application of a method. We will discuss modeling methods
and tools in more detail in the following chapters.
1.3.1.
Principle of Correctness
The correctness of models depends on correct semantics and syntax, that is, on whether the model conforms to the syntax of the respective metamodel and is complete and consistent. Semantic correctness of a model is measured by how closely it complies with the structure and behavior of the respective object system. In real-world applications, compliance with these requirements can be proven only after simulation studies have been carried out or other similar efforts have been made. Some modeling tools provide
a simulation function that can be used for this purpose.
1.3.2.
Principle of Relevance
Excerpts of the real-world object system should be modeled only if they correspond with the purpose of the model. Models should not contain more information than necessary, thus keeping the cost vs. benefit ratio at an acceptable level.
1.3.3.
Principle of Cost vs. Benefit
The key factors ensuring a good cost vs. benefit ratio are the amount of effort necessary to create the model, the usefulness of modeling the scenario, and how long the model will be used.
1.3.4.
Principle of Clarity
Clarity ensures that a model is understandable and usable. It also determines how pr
agmatic the relationship between the model and the user is. Because models conta
in a large amount of information regarding technical and organizational issues,
only specialists are usually able to understand them quickly. Once models are br
oken down into subviews, individual views are easier to comprehend.
1.3.5.
Principle of Comparability
Models created in accordance with a consistent conceptual framework and modeling
methods are comparable if the objects have been named in conformity with establ
ished conventions and if identical modeling objects as well as equivalent degree
s of detailing have been used. In models created with different modeling methods
, it is important to make sure that their metamodels can be compared.
1.3.6.
Principle of Systematic Structure
This principle stipulates that it should be possible to integrate models develop
ed in various views, such as data models, organizational models, and business pr
ocess models. This requires an integrated methodology, that is, an architecture
providing a holistic metamodel (see Section 4).
2.
MODELING BENEFITS
Enterprise models are used as instruments for better understanding business stru
ctures and processes. Thus, the general benefit of enterprise modeling is in business administration and organizational processes. Focusing on computer support in business, we can additionally use enterprise models to describe the organizational impacts of information technology. Consequently, the second major benefit of enterprise modeling is for developing information systems.
2.1.
Benefits for Business Administration and Organizational Processes
Corporate mission statements entail the production and utilization of material o
utput and services by combining production factors. Multiple entities and device
s are responsible for fulfilling these tasks. In accordance with corporate objectiv
es, their close collaboration must be ensured. In order for human beings to be a
ble to handle complex social structures such as enterprises, these structures mu
st be broken down into manageable units. The rules required for this process are
referred to as "organization." Structural or hierarchical organizations are characterized by time-independent (static) rules, such as hierarchies or enterprise topologies. Here, relationships involving management, output, information, or comm
unication technology between departments, just to name a few, are entered. Org c
harts are some of the simple models used to depict these relationships. Process
organizations, on the other hand, deal with time-dependent and logical (dynamic)
behavior of the processes necessary to complete the corporate mission. Hierarch
ical and process organizations are closely correlated. Hierarchical organization
s have been a key topic in business theory for years.
However, due to buzzwords such as business process reengineering (BPR), process
organizations have moved into the spotlight in recent years. Reasons for creatin
g business process models include:
- Optimizing organizational changes, a byproduct of BPR
- Storing corporate knowledge, such as in reference models
- Utilizing process documentation for ISO-9000 and other certifications
- Calculating the cost of business processes
- Leveraging process information to implement and customize standard software solutions or workflow systems (see Section 2.2)
Within these categories, other goals can be established for the modeling methods
. In business process improvement, we must therefore identify the components tha
t need to be addressed. Some of the many issues that can be addressed by busines
s process optimization are:
- Changing the process structure by introducing simultaneous tasks, avoiding cycles, and streamlining the structure
- Changing organizational reporting structures and developing employee qualification by improving processing in its entirety
- Reducing the amount of documentation, streamlining and accelerating document and data flow
- Discussing possible outsourcing measures (shifting from internal to external output creation)
- Implementing new production and IT resources to improve processing functions
In these examples, we are referring to numerous modeling aspects, such as proces
s structures, hierarchical organizations, employee qualification, documents (data),
and external or internal output as well as production and IT resources. Obvious
ly, an enterprise model, particularly a business process model for the purpose o
f optimization, must be fairly complex. Moreover, it should address multiple asp
ects, for which numerous description methods are necessary. These various purpos
es determine the kind of modeling objects as well as the required granularity.
2.2.
Benefits for Developing Information Systems
Information systems can be designed as custom applications or purchased as off-t
he-shelf standard solutions. After the initial popularity of custom applications
, integrated standard solutions are now the norm. With the advent of new types o
f software, such as componentware (where software components for certain applica
tion cases are assembled to form entire applications), a blend between the two a
pproaches has recently been making inroads. The development of custom applicatio
ns is generally expensive and is often plagued by uncertainties, such as the dur
ation of the development cycle or the difficulty of assessing costs. Thus, the tendency to shift software development from individual development to an organizational form of industrial manufacturing, in "software factories," is not surprising. In this context, multiple methods for supporting the software development process have been developed. They differ according to their focus on the various software development processes and their preferred approach regarding the issue at hand, such as data, event, or function orientation, respectively. Due to the wide range of methods that differ only slightly from one another, this market is cluttered. In fact, the multitude of products and approaches has actually impeded the development.
The purpose of these questions is to classify and evaluate the various modeling
methods. After these issues are addressed, there is, however, a second group of
reasons for dealing with information system design methodologies (ISDMs), result
ing from the fact that usually several business partners are involved in complex
development projects. Sometimes they use different development methods, or the
results of their work might overlap. Only an architecture integrating the indivi
dual methods, confirming agreement or pointing out any overlap, can lead to mutual
understanding. The alternative to individual software development is to buy a st
andardized business administration software solution. Such solutions include mod
ules for accounting, purchasing, sales, production planning, and so on. Financia
l information systems are characterized by a markedly high degree of complexity.
Many corporate and external business partners are involved in the implementatio
n of information systems. This becomes apparent in light of seamlessly integrate
d data processing, where data is shared by multiple applications. Examples inclu
de comprehensive IS-oriented concepts implemented in enterprises, CIM in manufac
turing companies, IS-supported merchandise management systems for retailers, and
electronic banking in nancial institutions. Until the mid-1990s, the ratio betwe
en the effort of implementing financial packaged applications in organizations and
their purchase price was frequently more than 5:1. This ratio is so high because
off-the-shelf systems are more or less easy to install, yet users must also det
ermine which goals (strategies) they wish to reach with the system, how the func
tionality of the system can achieve this, and how to customize, configure, and tech
nically implement the package. With hardware and software costs rapidly decreasi
ng, that ratio became even worse. Small and medium-sized enterprises (SMEs) are
not able to pay consultants millions of dollars for implementation. Hence, archi
tectures, methods, and tools have become increasingly popular because they can h
elp reduce the cost of software implementation and at the same time increase use
r acceptance of standard software solutions. Several modeling approaches are pos
sible:
- Reduce the effort necessary for creating the target concept by leveraging best-practice case knowledge available in reference models.
- Create a requirements definition by leveraging modeling techniques to detail the description.
- Document the requirements definition of the standard software by means of semantic modeling methods, making the business logic more understandable.
- Use semantic models to automate reconciliation of the requirements definition of the target concept with the standard software as much as possible, cutting down on the need for specific IS skills.
- Leverage semantic models as a starting point for maximum automation of system and configuration customizing.
3.
MODELING VIEWS AND METHODS
3.1.
Process-Oriented Enterprise Modeling
Generally speaking, a business process is a continuous series of enterprise activities, undertaken for the purpose of creating output. The starting point and final product of the business process are the output requested and utilized by corporate or external customers. Business processes thus enable the value chain of the enterprise while keeping the focus on the customer as the output is created. In
the following, we explain the key issues in modeling business processes with a s
imple example from customer order processing. First, let us outline the scenario
:
A customer wants to order several items that need to be manufactured. Based on c
ustomer and item information, the feasibility of manufacturing this item is stud
ied. Once the order has arrived, the necessary materials are obtained from a sup
plier. After arrival of the material and subsequent order planning, the items ar
e manufactured according to a work schedule and shipped to the customer along wi
th the appropriate documentation.
This scenario will be discussed from various points of view. As we have already
seen, in system theory we can distinguish between system structures and system b
ehavior. We will begin by describing the responsible entities and relationships
involved in the business process. Then, by means of function flows, we will describe the dynamic behavior. Output flows describe the results of executing the process, and information flows illustrate the interchange of documents involved in the process. Functions, output producers (organizational units), output, and information objects are illustrated by various symbols. Flows are depicted by arrows.
3.1.1.
Organization Views
Figure 4 depicts the responsible entities (organizational units) involved in the
business process, along with their output and communication relationships, illu
strated as context or interaction diagrams. The
Figure 4 Interaction Diagram of the Business Process Order Processing.
sequence in which processes are carried out is not apparent. Nevertheless, this
provides an initial view of the business process structure. In complex processes
, the myriad interchanges among the various business partners can become somewha
t confusing. In addition to the various interactions, it is also possible to ent
er the activities of the responsible entities. This has been done in only a few
places. The class of organization views also includes the hierarchical organizat
ion structure (org charts). Org charts are created in order to group responsible
entities or devices that are executing the same work object. This is why the re
sponsible entities, responsible devices, financial resources, and computer hardware
are all assigned together to the organization views.
3.1.2.
Function Views
Figure 5 describes the same business process by depicting the activities (functi
ons) to be executed, as well as their sequence. The main issue is not responsibl
e entities, as with the interaction diagram, but rather the dynamic sequence of
activities. For illustration purposes, the organizational units are also depicte
d in Figure 5. Due to redundancies, their interrelationship with the interaction
diagram is not as obvious. As function sequences for creating output, function flows characterize the business process. The output flows themselves will be displayed individually. The class of function views also includes the hierarchical structure of business activities transforming input into output. According to the level of detail, they are labeled business processes, processes, functions, and elementary functions. Because functions support goals, yet are controlled by them as well, goals are also allocated to function views because of the close linkage. In application software, computer-aided processing rules of a function are defined. Thus, application software is closely aligned with functions and is also allocated to function views.
3.1.3.
Output Views
The designation "output" is very heterogeneous. Business output is the result of a production process, in the most general sense of the word. Output can be physical (material output) or nonphysical (services). Whereas material output is easily defined, such as by the delivery of material, manufactured parts, or even the finished product, the term "services" is more difficult to define because it comprises heterogeneous services, such as insurance services, financial services, and information brokering services. Figure 6 illustrates this simplified classification of output as a hierarchical diagram. Concerning our business process example, the result of the function "manufacture item" in Figure 7 is the material output, defined by the manufactured item. Likewise, quality checks are carried out and documented during the manufacturing process. All data pertinent to the customer are captured in "order documents," themselves a service by means of the information they provide. After every intercompany function, output describing the deliverable is defined, which in turn enters the next process as input.

Figure 5 Function Flow of the Business Process Order Processing.

To avoid cluttering the diagram, the organizational units involved are not depicted. It is not possible to uniquely derive the function sequence from the illustration of the output flow.
3.1.4.
Data Views
The designations "data" and "information" are used synonymously. In addition to information services, other information, used as environment descriptions during the business processes, constitutes
Figure 6 Types of Input-Output.
Figure 7 Output Flow of the Business Process Order Processing.
process components. Figure 8 illustrates the information objects of our business
process example, along with the data interchanged among them. Objects listed as
information services have double borders. Information objects describing the en
vironment of the business process are shown as well, such as data regarding supp
liers, items, or work schedules. These data are necessary to create information services.

Figure 8 Information Flow of the Business Process Order Processing.

For example, when orders are checked, the customer's credit is checked and inventory is checked for availability. Because data flow is triggered by
the functions that are linked to the information objects, it is more or less pos
sible to read the function flow in Figure 8. However, if multiple functions are applied to an information object or multiple data flows are requested by a function, the function process cannot be uniquely deduced. Besides information flow modeling, the (static) description of data structures is a very important modeling task. Static enterprise data models are used to develop proper data structures in order to implement a logically integrated database. Chen's entity relationship model (ERM) is the most widespread method for the conceptual modeling of data structures.
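A fragment of the order-processing mini world can be written down in ERM style as entity types linked by a relationship. The following sketch is illustrative only (the attribute names and values are invented, and plain Python dataclasses stand in for a real data model or database schema):

```python
from dataclasses import dataclass

# Hypothetical ERM fragment: entity types Customer and Order, linked by a
# 1:n relationship "places" (one customer places many orders).
@dataclass
class Customer:            # entity type with its attributes
    number: str
    name: str
    payment_period_days: int

@dataclass
class Order:               # entity type; customer_number realizes the relationship
    number: str
    status: str            # process state, e.g. "completed" or "finished"
    customer_number: str

customers = {"3842": Customer("3842", "Example Corp", 30)}
orders = [Order("O 4711", "completed", "3842")]

# Navigate the "places" relationship: all orders of customer 3842.
placed = [o for o in orders if o.customer_number == "3842"]
assert placed[0].number == "O 4711"
```

In a logically integrated database, the same structure would appear as two tables with a foreign key; the ERM diagram is the conceptual blueprint from which such schemas are derived.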
3.1.5.
Process View
Building various views serves the purpose of structuring and streamlining busine
ss process modeling. Splitting up views has the added advantage of avoiding and
controlling redundancies that can occur when objects in a process model are used
more than once. For example, the same environmental data, events, or organizati
onal units might be applied to several functions. View-specific modeling methods th
at have proven to be successful can also be used. Particularly in this light, vi
ew procedures differ from the more theoretical modeling concepts, where systems
are divided into subsystems for the purpose of reducing complexity. In principle
, however, every subsystem is depicted in the same way as the original system. T
his is why it is not possible to use various modeling methods in the same system
. It is important to note that none of the flows (organization, function, output, and information flow, respectively) illustrated above is capable of completely modeling the entire business process. We must therefore combine all these perspectives. To this end, one of the views should be selected as a foundation and then be integrated with the others. The function view is closest to the definition of a business process and is therefore typically used as a starting point. However, in the context of object-oriented enterprise modeling, information flows can serve as a starting point as well. Figure 9 provides a detailed excerpt of our business process example, focusing on the function "manufacture item" with all flows described above. Th
e method used to describe the process in Figure 9 is called event-driven process
chain (EPC). The EPC method was developed at the Institute for Information Syst
ems (IWi) of the Saarland
Figure 9 Detailed Excerpt of the Business Process Order Processing.
University, Germany, in collaboration with SAP AG. It is the key component of SAP R/3's modeling concepts for business engineering and customizing. It is based on the concepts of stochastic networks and Petri nets. Simple versions exclude conditions and messages and include only E(vent)/A(ction) illustrations. Multiple functions can result from an event. On the other hand, multiple functions sometimes need to be concluded before an event can be triggered. Logical relationships are illustrated by "and" (∧), "inclusive-or" (∨), and "exclusive-or" (XOR) symbols. Figure 10 shows some typical examples of event relationships. When there are more complex relat
ionships between completed functions and functions that have just been launched
(such as different logical relationships between groups of functions), decision
tables for incoming and outgoing functions, respectively, can be stored in an ev
ent.
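The connector logic described above can be illustrated with a small sketch. This is not part of any EPC tool or standard; the function name and the dictionary representation of incoming event states are illustrative assumptions.

```python
# Minimal sketch of EPC join-connector logic: given which incoming events
# have occurred, decide whether the subsequent function may start.

def join_fires(connector: str, events: dict) -> bool:
    """Evaluate an EPC join connector over incoming event states."""
    occurred = sum(1 for state in events.values() if state)
    if connector == "AND":     # all incoming events must have occurred
        return occurred == len(events)
    if connector == "OR":      # at least one incoming event suffices
        return occurred >= 1
    if connector == "XOR":     # exactly one incoming event is allowed
        return occurred == 1
    raise ValueError(f"unknown connector: {connector}")

events = {"order received": True, "credit checked": False}
print(join_fires("AND", events))   # False: not all incoming events occurred
print(join_fires("OR", events))    # True
print(join_fires("XOR", events))   # True: exactly one event occurred
```

The decision tables mentioned above would generalize this: instead of a single connector, an event would store a table mapping combinations of completed functions to the functions to be launched.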
3.2. Object-Oriented Enterprise Modeling
Object-oriented enterprise modeling starts with analyzing the entities of the real world. In the object-oriented model, these entities are described as objects. The object data (attributes) represent characteristics of the real system and are accessible only by means of object methods. By the definition of their attributes and methods, objects are entirely determined. Objects that share the same characteristics are instances of the respective object class. Classes can be specialized to subclasses as well as generalized to superclasses. This is called inheritance: each subclass inherits the attributes and methods of its superclass. We can equate the term method used in object-oriented analysis with the term function. Because classes are frequently data classes (such as customers, suppliers, orders, etc.), they represent the link between the data and function views. We have already encountered object-oriented class design when we discussed the levels of abstraction and the examples of the modeling views (see Section 1.2 as well as Section 3.1), so we can skim the properties of creating object-oriented classes. Object-oriented modeling is not based on a standardized method. Rather, a number of authors have developed similar or complementary approaches. The various graphical symbols they use for their approaches make comparisons difficult. The Unified Modeling Language (UML), introduced by
Figure 10 Event Relationships in EPC.
292
TECHNOLOGY
Figure 11 Object and Object Class.
Rumbaugh, Booch, and Jacobson, aims to streamline and integrate the various object-oriented approaches, which is why we will use its symbols. Objects, each with its own identity and indicated by an ID number, are described by properties (attributes). The functions (methods) that can be applied to an object define its behavior. Objects represent instances and are illustrated by rectangles. Objects with identical attributes, functionality, and semantics are grouped into object classes or regular classes. The set of all customers thus forms the class customer (see Figure 11). By naming attributes and methods, classes define the properties and the behavior of their instances, that is, objects. Because attributes and methods form a unit, classes realize the principle of encapsulation. In addition to attribute and method definitions for objects, we can also use class attributes and class methods that are valid only for the classes themselves, not for the objects. Examples would be number of customers and creating a new customer. One significant property of the object-oriented approach is inheritance, giving classes access to the properties (attributes) and the behavior (methods) of other classes. Inherited attributes and methods can be overwritten and redefined by the inheriting class. Inheritance takes place within a class hierarchy with two types of classes, namely superordinate classes and subordinate classes, as is shown by generalizing and specializing operations in data modeling. A class can also inherit properties from several superordinate classes (multiple inheritance), with the resulting class diagram forming a network. Figure 12 gives an example of inheritance among object classes.
Figure 12 Inheritance.
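The concepts just described (encapsulation, class attributes and class methods, and inheritance with overriding) can be sketched in code. The class and attribute names here are hypothetical examples, not taken from the figures:

```python
# Sketch of encapsulation, a class attribute, and inheritance with overriding.

class Customer:
    number_of_customers = 0            # class attribute: valid for the class itself

    def __init__(self, ident: int, name: str):
        self._id = ident               # object attributes, encapsulated with methods
        self._name = name
        Customer.number_of_customers += 1

    def display_name(self) -> str:     # method defining the object's behavior
        return self._name

class KeyAccountCustomer(Customer):    # subclass inherits attributes and methods
    def display_name(self) -> str:     # inherited method overwritten and redefined
        return f"{self._name} (key account)"

c1 = Customer(1, "Smith Ltd.")
c2 = KeyAccountCustomer(2, "Jones plc")
print(c1.display_name())               # Smith Ltd.
print(c2.display_name())               # Jones plc (key account)
print(Customer.number_of_customers)    # 2
```

Note how `number_of_customers` belongs to the class, not to any single object, matching the distinction drawn above between class attributes and object attributes.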
Figure 13 Association.
In addition to the generalizing relationships between classes, there are also relationships (i.e., associations) between objects of equal class ranking or between objects of the same class. These associations equate to relationships in entity relationship models, although here they are illustrated by only one line. These illustrations should be read from left to right. Cardinalities are allocated to an association. At each end of the association, role names can be allocated to the associations. If associations contain attributes, they are depicted as classes (see the class purchasing process in Figure 13). Aggregations are a special kind of association describing the part-of relationships between objects of two different classes. Role names can be applied to aggregations as well. If attributes are allocated to an aggregation, this leads to a class, as shown by the class structure, in an aggregation relation for a bill of materials (see Figure 14). Class definition, inheritance, associations, and aggregations make up the key structural properties of object-oriented modeling. Overall, the UML provides seven methods for describing the various static and dynamic aspects of enterprises. Nevertheless, even with methods such as use case diagrams and interaction diagrams, it is difficult to depict process branching, organizational aspects, and output flows. Therefore, one of the main disadvantages of the object-oriented approach is that it does not illustrate business processes in much detail.
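An aggregation that carries attributes, like the class structure in the bill-of-materials example above, can be sketched as an association class holding the attributes of the part-of relation. The item names and the `quantity` attribute here are illustrative assumptions:

```python
# Sketch of an attribute-carrying aggregation (bill of materials):
# the Structure class holds the quantity with which one item is part of another.

from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    parts: list = field(default_factory=list)   # part-of links to components

    def add_part(self, component: "Item", quantity: int) -> None:
        self.parts.append(Structure(component, quantity))

@dataclass
class Structure:          # association class: attributes of the part-of relation
    component: "Item"
    quantity: int

bicycle = Item("bicycle")
wheel = Item("wheel")
bicycle.add_part(wheel, 2)           # a bicycle aggregates two wheels
wheel.add_part(Item("spoke"), 36)    # each wheel aggregates 36 spokes

print(bicycle.parts[0].quantity)     # 2
```

Without the quantity attribute, a plain list of component references would suffice; it is precisely the attribute on the relation that turns the aggregation into a class, as the text notes.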
4. MODELING ARCHITECTURES
4.1. Architecture of Integrated Information Systems (ARIS)
In the architecture of integrated information systems (ARIS), we can distinguish
two different aspects of applications:
Figure 14 Aggregation.
1. The ARIS concept (ARIS house), an architecture for modeling enterprises, particularly for describing business processes
2. The ARIS house of business engineering (HOBE), representing a concept for comprehensive computer-aided business process management.
Both applications are supported by the ARIS Toolset software system, developed by IDS Scheer AG. We will discuss tool support for enterprise modeling in Section 5. The ARIS concept consists of five modeling views and a three-phase life-cycle model. The integration of both leads to 15 building blocks for describing enterprise information systems. For each block the ARIS concept provides modeling methods, the meta structures of which are included in the ARIS information model. As we have already discussed static and dynamic modeling views (see Section 3.1), we can skim over the creation of modeling views. ARIS views are created according to the criterion of semantic similarity, that is, enterprise objects that are semantically connected are treated in the same modeling view. The ARIS modeling views are (see Figure 15):
Data view: This view includes the data processing environment as well as the messages triggering functions or being triggered by functions. Events such as customer order received, completion notice received, and invoice written are also information objects represented by data and are therefore modeled in the data view.
Figure 15 Views of the ARIS Concept.
Function view: The business activities to be performed and their relationships form the function view. It contains the descriptions of each single activity (function) itself, the associations between super- and subordinate functions, and the logical sequence of functions. Enterprise goals that guide the performance of the functions and the application systems that support their execution are also components of this view.
Organization view: Departments, office buildings, and factories are examples of organizational units that perform business functions. They are modeled according to criteria such as same function and same work object. Thus, the structure as well as the interaction between organizational units and the assigned resources (human resources, machine resources, computer hardware resources) are part of the organization view.
Output view: This view contains all physical and nonphysical output that is created by business functions, including services, materials, and products as well as funds flows. Because each output is input for another function, even if that function is carried out by an external partner, input-output diagrams are also components of this view.
Control view / process view: In the views described above, the classes are modeled with their relationships relative to the views. Relationships among the views as well as the entire business process are modeled in the control or process view. This provides a framework for the systematic inspection of all bilateral relationships of the views and the complete enterprise description from a process-oriented perspective.
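One way to picture the control view is as a relation that ties objects of the four other views together for each process step. This is only a sketch; the field names are illustrative, not ARIS terminology:

```python
# Sketch: the control/process view as a record linking the four other views.

from dataclasses import dataclass

@dataclass
class ProcessStep:
    function: str          # function view
    triggering_event: str  # data view (events are modeled as data)
    org_unit: str          # organization view
    output: str            # output view

step = ProcessStep(
    function="check customer order",
    triggering_event="customer order received",
    org_unit="sales department",
    output="confirmed order",
)
print(step.function)   # check customer order
```

A chain of such records, one per function, would describe a complete business process across all views, which is exactly the integrating role the control view plays in ARIS.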
The ARIS concept provides a set of methods that can be used to model the enterprise objects and their static and dynamic relationships. For each view at least one well-proven method is provided. Because ARIS is an open architecture, new modeling methods, such as UML methods, can easily be integrated into the meta structure. Figure 16 gives examples of ARIS methods for conceptual modeling. Up to now, we have discussed enterprises, particularly business processes, from a management point of view, that is, without any particular focus on information technology. The aforementioned application programs (components of the function view), computer hardware (a component of the organization view), and data media (components of the data view) contain only system names, not IT descriptions. These are included in the ARIS concept by evaluating the IT support provided by each ARIS view.
Function views are supported by the application programs, which may be described in more detail by module concepts, transactions, or programming languages.
Organization views, along with their production resources and the computer resources responsible, may be detailed further by listing network concepts, hardware components, and other technical specifications.
Data views may be detailed more precisely by listing data models, access paths, and memory usage.
Output views group the various types of output, such as material output and information services.
Here again there is a close alignment with the supporting information technology. In material output (e.g., entertainment technology, automobiles, and machine tools), more and more IT components (e.g., chip technology), along with the necessary hardware, are used. Other service industries, such as airline reservations, are closely linked with IT as well. The fact that the respective views can be combined within the control view means that there is a definite link with IT, as demonstrated by the above arguments. Using a phase model, enterprise models are thus transformed step by step into information and communication technology objects (see Figure 17). The starting point of systems development is the creation of an IS-oriented initial strategic situation in Phase 1. IS-oriented means that basic IT effects on the new enterprise concepts are already taken into account. Some examples of these relationships are creating virtual companies through communication networks, PC banking, integrated order processing and product development in industry (CIM), and integrated merchandise management systems (MMS) in retail. Strategic corporate planning determines long-term corporate goals, general corporate activities, and resources. Thus, planning has an effect on the long-term definition of enterprises, influencing corporate goals, critical success factors, and resource allocation. The methods in question are similar to management concepts for strategic corporate planning. If actual business processes are already described at this stage, this occurs only in a general fashion. At this stage, it is not advisable to split up functions into ARIS views and then describe them in detail.
Figure 16 ARIS Methods for Conceptual Modeling.
In Phase 2, the requirements definition, the individual views of the application system are modeled in detail. Here as well, business-organizational content is key. Examples of business processes should be included at this level. However, in this phase more conceptual modeling methods should be used than in the strategic approach, because the descriptions of the requirements definition are the starting point for IT implementation. Modeling methods that are understandable from a business point of view should be used, yet they should be sufficiently conceptual to be a suitable starting point for a
Figure 17 ARIS Phase Model.
consistent IT implementation. Therefore, it makes sense to include general IT objects, such as databases or programs, at this level. Phase 3 calls for creating the design specification, where enterprise models are adapted to the requirements of the implementation tool interfaces (databases, network architectures, programming languages, etc.). At this stage, actual IT products are still irrelevant. Phase 4 involves the implementation description, where the requirements are implemented in physical data structures, hardware components, and real-world IT products. These phases describe the creation of an information system and are therefore known as buildtime. Subsequently, the completed system becomes operable, meaning it is followed by an operations phase, known as runtime. We will not address the operation of information systems, that is, runtime, in great detail. The requirements definition is closely aligned with the strategic planning level, as illustrated by the width of the arrow in Figure 17. However, it is generally independent of the implementation point of view, as depicted by the narrow arrow pointing to the design specification.
Implementation description and operations, on the other hand, are closely linked with the IT equipment and product level. Changes in the system's IT have an immediate effect on its type of implementation and operation. The phase concept does not imply that there is a rigid sequence in the development process, as in the waterfall model. Rather, the phase concept also accommodates an evolutionary prototyping procedure. However, even in evolutionary software development, the following description levels are generally used. Phase models are primarily used because they offer a variety of description objects and methods. In Figure 18, the ARIS concept is enhanced by the phases of the buildtime ARIS phase model. After a general conceptual design, the business processes are divided into ARIS views and documented and modeled from the requirements definition to the implementation description. These three description levels are created for controlling purposes as well. This makes it possible to create the links to the other components at each of the description levels.
Figure 18 ARIS Concept with Phase Model.
The ARIS concept paves the way for engineering, planning, and controlling enterprises, particularly business processes. The ARIS house of business engineering (HOBE) enhances the ARIS concept by addressing comprehensive business process management from not only an organizational but also an IT perspective. We will outline how ARIS supports business management in the design and development stages, using ARIS-compatible software tools. Because business process owners must not focus only on the one-shot engineering and description aspects of their business processes, ARIS HOBE provides a framework for managing business processes, from organizational engineering to real-world IT implementation, including continuous adaptive improvement. HOBE also lets business process owners continuously plan and control current business procedures and devote their attention to continuous process improvement (CPI). Comprehensive industry expertise in planning and controlling manufacturing processes is a fundamental component of HOBE. Objects such as work schedule and bill of materials provide detailed description procedures for manufacturing processes, while production planning and control systems in HOBE deliver solutions for planning and controlling manufacturing processes. Many of these concepts and procedures can be generalized to provide a general process management system.
At Level I (process engineering), shown in Figure 19, business processes are modeled in accordance with a manufacturing work schedule. The ARIS concept provides a framework covering every business process aspect. Various methods for optimizing, evaluating, and ensuring the quality of the processes are also available. Level II (process planning and control) is where business process owners' current business processes are planned and controlled, with methods for scheduling as well as capacity and (activity-based) cost analysis also available. Process monitoring lets process managers keep an eye on the states of the various processes. At Level III (workflow control), objects to be processed, such as customer orders with appropriate documents or insurance claims, are delivered from one workplace to the next. Electronically stored documents are delivered by workflow systems. At Level IV (application system), the documents delivered to the workplaces are specifically processed, that is, the functions of the business process are executed using computer-aided application systems (ranging from simple word processing systems to complex standard software solution modules), business objects, and Java applets. HOBE's four levels are linked with one another by feedback loops. Process control delivers information on the efficiency of current processes. This is where continuous adaptation and improvement of business processes in accordance with CPI begins.
Figure 19 ARIS House of Business Engineering.
Workflow control reports
actual data on the processes to be executed (amounts, times, organizational allocations) to the process control level. Supporting application modules are then started by the workflow system. In the fifth component of the HOBE concept, Levels I through IV are consolidated into a framework. Frameworks contain information on the respective architecture and applications, configuring real-world applications from the tools at Levels II and III as well as application information from the reference models (Level I). Frameworks contain information on the composition of the components and their relationships.
4.2. Information Systems Methodology (IFIP-ISM)
Olle et al. (1991) give a comprehensive methodology for developing more traditional information systems. The designation methodology is used at the same level as architecture. The seven authors of the study are members of the International Federation for Information Processing (IFIP) task group, in particular of the design and evaluation of information systems working group WG 8.1 of the information systems technical committee TC 8. The research results of the study are summarized in the guideline Information Systems Methodology. The design of the methodology does not focus on any particular IS development methods. Rather, it is based on a wide range of knowledge, including as many concepts as possible: IDA (interactive design approach), IEM (information engineering methodology), IML (inscribed high-level Petri nets), JSD (Jackson system development), NIAM (Nijssen's information analysis method), PSL/PSA (problem statement language / problem statement analyzer), SADT (structured analysis and design technique), as well as Yourdon's approach of object-oriented analysis. This methodology is described by metamodels of an entity relationship concept. It features the points of view and stages of an information system life cycle, distinguishing data-oriented, process-oriented, and behavior-oriented perspectives (see Figure 20). Creating these perspectives is less a matter of analytical conclusion than simply of reflecting a goal of addressing the key issues typical of traditional IS development methods. Entity types and their attributes are reviewed in the data-oriented perspective. The process-oriented perspective describes activities (business activities), including their predecessor and successor relationships. Events and their predecessor and successor relationships are analyzed in the behavior-oriented perspective. From a comprehensive 12-step life-cycle model we will select three steps, information systems planning, business analysis, and system design, and then examine the last two in detail in terms of their key role in the methodology. Information systems planning refers to the strategic planning of an information system. In business analysis, existing information systems of an entire enterprise or of a subarea of the enterprise are
Figure 20 Perspectives of the IFIP Architecture. (From Olle et al. 1991, p. 13)
analyzed. The respective information system is designed in the step system design. This concept also includes a comprehensive procedural model, including a role concept for project organization. With regard to ARIS, this concept has some overlapping areas; in others there are deviations. What both concepts have in common is their two-dimensional point of view, with perspectives and development steps. There are differences in their instances, however. For example, Olle et al. do not explicitly list the organization view but rather review it along with other activities, albeit rudimentarily. The process definition more or less dovetails with ARIS's function definition. Data and functions or events and functions are also strictly separated from one another. The three perspectives linked together are only slightly comparable to ARIS's control view. The step system design blends together the ARIS phases of requirements definition and design specification, with the emphasis on the latter.
4.3. CIM Open System Architecture (CIMOSA)
The ESPRIT program, funded by the European Union (EU), has resulted in a series of research projects for developing an architecture for CIM systems: the Computer Integrated Manufacturing Open System Architecture (CIMOSA). CIMOSA results have been published by several authors, including Vernadat (1996). This project originally involved 30 participating organizations, including manufacturers as the actual users, IT vendors, and research institutes. Although the project focused on CIM as an application goal, its mission was to provide results for general enterprise modeling. One of CIMOSA's goals was also to provide an architecture and a methodology for vendor-independent, standardized CIM modules to be plugged together, creating a customer-oriented system (plug and play). The CIMOSA modeling framework is based on the CIMOSA cube (see Figure 21). CIMOSA distinguishes three different dimensions, described by the three axes of the cube. The vertical direction (stepwise derivation) describes the three description levels of the phase concept:
Figure 21 The CIMOSA Modeling Architecture (CIMOSA cube). (From Vernadat 1996, p. 45)
requirements definition, design specification, and implementation description. These levels are for the most part identical with those of the ARIS life cycle. In the horizontal dimension (stepwise instantiation), concepts are individualized step by step. First, basic requirements (generic requirements, building blocks) are defined, then particularized in the next step according to industry-specific requirements (partial requirements). In Step 3, they are broken down into enterprise-specific requirements (particular requirements). This point of view makes it clear that initially, according to CIMOSA, general building blocks should be used to define standards, after which the building blocks are grouped into industry-specific reference models. In the last step, they are used for developing enterprise-specific solutions. In ARIS, the degree of detail of an information model is defined when granularity issues are addressed. By directly incorporating content-related reference models, the CIMOSA architecture clearly combines general methodological issues regarding information systems with application-related CIM domains. The third dimension, stepwise generation, describes the various views of an information system. This point of view has goals similar to those of ARIS regarding the creation of views, although not all the results are the same. CIMOSA divides description views into function view, information view, resource view, and organization view. The function view is the description of functions, although it also includes a combination of other elements such as events and processes, including performance and exception handling. The information view refers to the data view or object definition. The resource view describes IT and production resources, and the organization view covers the hierarchical organization. CIMOSA also breaks up the entire context into various views, although it lacks a level for reassembling them, as opposed to ARIS with its control and process views. The result is that in CIMOSA, descriptions of the individual views are combined with one another. For example, when resources are being described, they are at the same time also allocated to functions. The CIMOSA modeling concept does not feature an output view. The CIMOSA concept develops an architecture suitable for describing information systems, into which content in the form of standardized reference models, all the way to actual software generation, can be entered. Despite the above-mentioned drawbacks, it considers important points. Based on this concept, modeling methods are classified in CIMOSA and described by metamodels, all the while adhering to an event-driven, business process-oriented view. Furthermore, enterprises are regarded as a series of multiple agents communicating with one another. Despite the considerable financial and intellectual efforts spent on CIMOSA, its practical contribution so far has been minimal. Business users involved in the project have reported few special applications resulting from it, with the exception of the car manufacturer Renault, with a repair service application for manufacturing plants, and the tooling company Traub AG, with an application for optimizing the individual development of tools. To date, a CIMOSA-based modeling tool has not been used much in practice. The main reason for the lack of success in real-world applications is presumably its very theoretical design, which does not incorporate commercially available IT solutions (standard software, for example). Considering the general lack of interest in CIM concepts, the extremely specialized focus of this approach seems to work to its disadvantage.
4.4. Zachman Framework
A framework for describing enterprises, quite popular in the United States, was developed by J. A. Zachman. This concept is based on IBM's information systems architecture (ISA) but has been enhanced and presented by Zachman in numerous talks and seminars. The approach (see Figure 22) consists of six perspectives and six description boxes. In ARIS terminology, Zachman's description boxes would equate to views, and perspectives would equate to the levels of the life-cycle model. Perspectives are listed with the respective role designations of the parties involved in parentheses: scope (planner), enterprise model (owner), system model (designer), technology model (builder), components (subcontractor), and functioning system (user). The areas to be viewed are designated by interrogatives, with the respective aspects listed in parentheses: what (data), how (function), where (network), who (people), when (time), and why (rationale). Perspectives and areas to be viewed are at a right angle to one another. Every field in the matrix is described by a method. Contrary to ARIS, the Zachman framework is not capable of being directly implemented in an information system, and the relationships between the description fields are not entered systematically. Furthermore, the relationship of Zachman's framework to the specific creation of output within the business process is not apparent. Initial approaches for supporting tools are becoming apparent, thanks to cooperation with Framework Software, Inc.
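The perspective-by-interrogative structure just described can be pictured as a 6 × 6 matrix. The sketch below is only illustrative; the single cell filled in is a plausible example, not Zachman's prescribed method for that field:

```python
# Sketch of the Zachman framework as a perspectives x interrogatives matrix;
# each cell would name the method used to describe that field.

perspectives = ["scope", "enterprise model", "system model",
                "technology model", "components", "functioning system"]
interrogatives = ["what", "how", "where", "who", "when", "why"]

framework = {(p, q): None for p in perspectives for q in interrogatives}

# Hypothetical example entry: data modeling at the enterprise-model level.
framework[("enterprise model", "what")] = "entity relationship model"

print(len(framework))   # 36 fields, one description method per field
```

The flat dictionary also makes the criticism above concrete: the cells stand side by side with no systematic relationships between them, unlike the ARIS control view.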
Figure 22 Zachman Framework. (From Burgess and Hokel 1994, p. 26, Framework Software, Inc. All rights reserved)
5. MODELING TOOLS
5.1. Benefits of Computerized Enterprise Modeling
Modeling tools are computerized instruments used to support the application of modeling methods. Support in designing enterprise models through the use of computerized tools can play a crucial role in increasing the efficiency of the development process. The general benefits of computer support are:
All relevant information is entered in a structured, easy-to-analyze form, ensuring uniform,
Figure 23 Modeling Tools.
The first group that can be seen as modeling tools in a broader sense is programming environments. Programming environments such as Borland JBuilder and Symantec Visual Cafe for Java clearly emphasize the phase of implementation description. They describe a business process in the language of the software application that is used to support the process performance. In a few cases, programming languages provide limited features for design specification. CASE tools such as ADW, on the other hand, support a specific design methodology by offering data models and function models (e.g., the entity relationship method [ERM] and structured analysis and design technique [SADT]). Their meta structure has to reflect this methodology. Thus, they automatically provide an information model for the repository functionality of the system. Because CASE tools cannot really claim to offer a complete repository, they are also called encyclopedias. In contrast, drawing tools such as VISIO and ABC FlowCharter support the phases of documentation and, to a certain extent, analysis of business structures and processes. The emphasis is on the graphical representation of enterprises in order to better understand their structures and behavior. In most cases, drawing tools do not provide an integrated metamodel, that is, a repository. Consequently, drawing tools cannot provide database features such as animating and simulating process models, analyzing process times, and calculating process costs. Business process improvement (BPI) frameworks such as ARIS offer a broader set of modeling methods, which are described together in one consistent metamodel. As a result, each model is stored in the framework's repository and thus can be retrieved, analyzed, and manipulated, and the model history can be administered (see Section 4.1). The last group encompasses standard software systems, such as enterprise resource planning (ERP) systems like those from SAP, Baan, or Oracle. ERP systems offer greater flexibility with respect to customization, provide support for data management, and include functionality for creating and managing a (sub)repository. Their main weakness is in the active analysis of business structures; that is, ERP tools typically do not offer tools for simulating process alternatives. Because all the modeling tools presented focus on different aspects, none of the systems is suitable for handling the entire systems development process. However, ERP systems are opening up more and more to the modeling and analysis level, endeavoring to support the whole system life cycle. The integration between ARIS and SAP R/3, for example, demonstrates seamless tool support from the documentation to the implementation phase. In an excerpt from the SAP R/3 reference model, Figure 24 shows a customizing example with the ARIS Toolset. For clarification purposes, the four windows are shown separately. In a real-world application, these windows would be displayed on one screen, providing the user with all the information at once. The upper-right window shows an excerpt from the business process model in the ARIS modeling tool, illustrating the part of the standard software process that can be hidden. An additional process branch, not contained in the standard software, must be added.
For the function create inquiry, the question is which screen is used in SAP R/3. Using the process model in the modeling tool, users can seamlessly invoke SAP R/3 by clicking on the function (or starting a command). This screen is shown at the bottom left. The Implementation Management Guide (IMG) customizing tool is activated for customizing the function. The upper-left window depicts the function parameters that are available. With the modeling tool, results of discussions, parameter decisions, unresolved issues, and the like are stored with the function, as depicted in the bottom-right window. This enables detailed documentation of business and IT-specific business process engineering. This documentation can be used at a later point in time for clarifying questions, using this know-how for follow-up projects, and monitoring the project.
6.
MODELING OUTLOOK
In the mid-1990s, the BPR debate drew our attention from isolated business activities to entire value chains. Yet process management in most cases focused on the information flow within departmental, corporate, or national boundaries. Obstacles within those areas appeared hard enough to cope with. Therefore, interorganizational communication and cooperation were seldom seriously put on the improvement agenda. Consequently, enterprise models, particularly business process models, were also restricted to intraorganizational aspects. In the meantime, after various BPR and ERP lessons learned, companies seem to be better prepared for business scope redefinition. More and more, they sense the limitations of intraorganizational improvement and feel the urge to play an active role in the global e-business community. That means not only creating a company's website but also designing the back-office processes according to the new requirements. Obviously, this attitude has effects on business application systems, too. While companies are on their way to new business dimensions, implemented ERP systems cannot remain inside organizational boundaries. On the technical side, ERP vendors are, like many other software vendors, forced to move from a traditional client-server to a browser/Web-server architecture in order to deliver e-business capabilities. Hence, for their first-generation e-business solutions, almost all big ERP vendors are using a mixed Java/XML strategy. On the conceptual side, ERP vendors are facing the even bigger challenge of providing instruments for coping with the increasing e-business complexity. Business process models appear to be particularly useful in this context. While e-business process models are fundamentally the same as interorganizational process models, integration and coordination mechanisms become even more important:
Due to increasing globalization, e-business almost inevitably means international, sometimes even intercultural, business cooperation. While ERP systems were multilingual from the very first, the human understanding of foreign business terms, process-management concepts, legal restrictions, and cultural individualities is much more difficult. Because models consist of graphic symbols that can be used according to formal or semiformal grammars, they represent a medium that offers the means to reduce or even overcome those problems.
Many improvement plans fail because of insufficiently transparent business processes and structures. If people do not realize the reengineering needs and benefits, they will not take part in a BPR project or accept the proposed changes. While this is already a serious problem within organizational boundaries, it becomes even worse in the case of interorganizational, that is, geographically distributed, cooperation. In the case of business mergers or virtual organizations, for example, the processes of the partners are seldom well known. Yet, in order to establish a successful partnership, the business processes have to be designed at a very high level of detail. Thus, business process models can help to clearly define the goals, process interfaces, and organizational responsibilities of interorganizational cooperation.
Up to now, we have mainly discussed strategic benefits of modeling. However, at an operational level of e-business, models are very useful, too. In business-to-business applications such as supply chain management, we have to connect the application systems of all business partners. This means, for example, that we have to fight not only with the thousands of parameters of one single ERP system but with twice as many, or even more. Another likely scenario is that the business partners involved in an interorganizational supply chain are using software systems of different vendors. In this case, a business process model can first be used to define the conceptual value chain. Second, the conceptual model, which is independent of any particular software, can be connected to the repositories of the different systems in order to adopt an integrated software solution. These few examples already demonstrate that enterprise models play a major role in the success of e-business. Some software vendors, such as SAP and Oracle, have understood this development and are already implementing their first model-based e-business applications.
ENTERPRISE MODELING
307
[Figure content: external view of ERP systems in the context of global and national economies, supply chains and industries, and enterprises; internal view of manufacturing ERP in terms of functional elements and implementation elements.]
Figure 1 External and Internal Views of ERP.
Section 2 identifies critical integration points for ERP and other applications within manufacturing enterprises. Section 3 discusses ERP and its relationship to three larger entities, namely the U.S. economy, supply chains, and individual manufacturers. Section 4 presents issues and possible resolutions for improving ERP performance and interoperability. This chapter is the result of a two-year study funded by two programs at the National Institute of Standards and Technology: the Advanced Technology Program's Office of Information Technology and Applications and the Manufacturing Systems Integration Division's Systems for Integrating Manufacturing Applications (SIMA) Program. The concepts presented in this chapter were gathered from a variety of sources, including literature reviews, manufacturing industry contacts, ERP vendor contacts, consultants specializing in the applications of IT to manufacturing, relevant professional and trade associations, and standards organizations.
1.1.
Major Business Functions in Manufacturing Enterprises
Manufacturers typically differentiate themselves from competitors along the thre
e major business functions through which they add value for their customers. Cus
tomer relationship management
[Figure content: a planning hierarchy from strategic supply chain planning through supply, production, and distribution planning (planning/decision support) down to execution/transaction management over resources (inventories, facilities/equipment, labor, money) and facilities (stockroom, plant floor, warehouse, vehicle, distribution center); nominal ERP spans the planning and execution layers, linking suppliers to customers, geographic markets, and product markets via material and information flows.]
Figure 2 Intraenterprise View of Supply Chain Planning Hierarchy.
packaging and shipment. Examples are wet and dry chemicals, foods, pharmaceuticals, paper, fibers, metals (e.g., plate, bar, tubing, wire, sheet), and pseudocontinuous processes such as weaving, casting, injection molding, screw machines, and high-volume stamping. Assembly line refers to a facility in which products are made from component parts by a process in which discrete units of product move along an essentially continuous line through a sequence of installation, joining, and finishing processes. Examples are automobiles, industrial equipment, small and large appliances, computers, consumer electronics, toys, and some furniture and clothing. Discrete batch, also called intermittent, refers to a facility in which processes are organized into separate work centers and products are moved in lots through a sequence of work centers, in which each work center is set up for a specific set of operations on that product and the setup and sequence are specific to a product family. This describes a facility that can make a large but relatively fixed set of products but only a few types of product at one time, so the same product is made at intervals. This also describes a facility in which the technology is common: the set of processes and their ordering are relatively fixed, but the details of the process in each work center may vary considerably from product to product in the mix. Examples include semiconductors and circuit boards, composite parts, firearms, and machined metal parts made in quantity. Job shop refers to a facility in which processes are organized into separate work centers and products are moved in order lots through a sequence of work centers in which each work center performs some set of operations. The sequence of work centers and the details of the operations are specific to the product. In general, the work centers have general-purpose setups that can perform some class of operations on a large variety of similar products, and the set of centers used, the sequence, the operations details, and the timing vary considerably over the product mix. Examples include metal shops, wood shops, and other piece-part contract manufacturers supporting the automotive, aircraft, shipbuilding, industrial equipment, and ordnance industries. Construction refers to a manufacturing facility in which the end product instances rarely move; equipment is moved into the product area, and processes and component installations are performed on the product in place. The principal examples are shipbuilding and spacecraft, but aircraft manufacture is a hybrid of construction and assembly line approaches.
1.3.2.
Nature of the Business in Terms of Customer Orders
Make-to-stock describes an approach in which production is planned and executed on the basis of expected market demand rather than specific customer orders. Because there is no explicit customer order at the time of manufacture, this approach is often referred to as a push system. In most cases, product reaches retail outlets or end customers through distribution centers, and manufacturing volumes are driven by a strategy for maintaining target stock levels in the distribution centers.
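The push logic described above can be sketched as a simple order-up-to rule. This is only an illustration; the function and parameter names are invented, not taken from any particular ERP system:

```python
# Hypothetical sketch: in a make-to-stock (push) system, manufacturing volume for
# a distribution center is driven by a target stock level, not by customer orders.

def planned_push_quantity(on_hand: int, in_transit: int, target_level: int) -> int:
    """Units to produce and push so the center's stock position returns to target."""
    stock_position = on_hand + in_transit  # stock already held or already en route
    return max(0, target_level - stock_position)  # never plan a negative push

print(planned_push_quantity(on_hand=320, in_transit=100, target_level=500))  # 80
print(planned_push_quantity(on_hand=600, in_transit=0, target_level=500))    # 0
```

Real replenishment strategies also weigh demand forecasts, lead times, and safety stock; the point here is only that the trigger is a stock level, not an order.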
Make-to-order has two interpretations. Technically, anything that is not made-to-stock is made-to-order. In all cases there is an explicit customer order, and thus all make-to-order systems are described as pull systems. However, it is important to distinguish make-to-demand systems, in which products are made in batches, from option-to-order systems, in which order-specific features are installed on a product-by-product basis. The distinction between make-to-demand batch planning and on-the-fly option selection using single setup and prepositioning is very important to the ERP system. A make-to-demand manufacturer makes fixed products with fixed processes but sets up and initiates those processes only when there are sufficient orders (i.e., known demand) in the system. This scenario may occur when there is a large catalog of fixed products with variable demand or when the catalog offers a few products with several options. The distinguishing factor is that orders are batched and the facility is set up for a run of a specific product or option suite. The planning problem for make-to-demand involves complex trade-offs among customer satisfaction, product volumes, materials inventories, and facility setup times. Option-to-order, also called assemble-to-order, describes an approach in which production is planned and executed on the basis of actual (and sometimes expected) customer orders, in which the product has some predefined optional characteristics that the customer selects on the order. The important aspects of this approach are that the process of making the product with options is predefined for all allowed option combinations and that the manufacturing facility is set up so the operator can perform the option installation on a per-product basis during manufacture. This category also applies to a business whose catalog contains a family of fixed products but whose manufacturing facility can make any member of the family as a variant (i.e., option) of a single base product. The option-to-order approach affects the configuration of production lines in very complex ways. The simplest configurations involve prepositioning, in which option combinations occur on the fly. More complex configurations involve combinations of batching and prepositioning. Engineer-to-order describes an approach in which the details of the manufacturing process for the product, and often the product itself, must be defined specifically for a particular customer order and only after receipt of that order. It is important for other business reasons to distinguish contract engineering, in which the customer defines the requirements but the manufacturer defines both the
sions are captured in an ERP system. Each of these facilities may use additional systems for planning and analysis, execution-level scheduling, control, and automated data capture. Some of the planning, analysis, scheduling, and management systems are part of, or tightly connected to, the ERP systems; others are more loosely connected. This variation results from two major factors: systems that are closely coupled to equipment, most of which are highly specialized, and systems that manage information and business processes specific to a particular industry that the ERP vendor may not offer. Because of the historic emphasis on reducing inventory costs, the management of the stockroom is, in almost all cases, an intrinsic part of an ERP system. By contrast, plant floor activity control is almost never a part of ERP. Management of execution activities within warehouses, vehicles/depots, and distribution centers may be handled in a centralized fashion by an ERP system or in a decentralized fashion by applications specific to those facilities.
2.2.
Transaction Management and Basic Decision Support: The Core of ERP
Transactions are records of resource changes that occur within and among enterprises. Through the use of a logically (but not necessarily physically) centralized database, it is the management of these transactions that constitutes the core of an ERP system. More specifically, this transaction database captures all changes of state in the principal resources of the manufacturing enterprise. It also makes elements of the current state available to personnel and software performing and supporting the operations of the enterprise. This scope encompasses all of the numerous resources (i.e., materials inventories, facilities/equipment, labor, and money) and product inventories of all kinds. It also includes the states and results of many business processes, which may not be visible in physical instances (e.g., orders, specifications). The detailed breakdown of this broad scope into common separable components is a very difficult technical task, given the many interrelationships among the objects, significant variations in the business processes, and the technical origins of ERP systems. Nonetheless, it is possible to identify general elements from that complexity and diversity. The functions of a manufacturing enterprise that are supported by transaction management correspond to the major types of resources as follows:

For inventories: materials inventory and materials acquisition
For facilities/equipment: manufacturing management, process specification management, maintenance management, warehousing, and transportation
For labor: human resource management
For money: finance management and accounting
For product: order entry and tracking

These functions, in whole or in part, make up the core of ERP. The following sections describe each function and its relationship to core ERP. These functions are then discussed in terms of finer-grain execution and planning activities.
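The transaction-management idea above can be illustrated with a rough sketch: an append-only log of resource state changes plus a current-state view derived from it. All names are hypothetical and far simpler than any vendor's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Transaction:
    """One record of a resource state change, e.g. a material receipt or a payment."""
    resource: str  # "inventories", "facilities/equipment", "labor", "money", "product"
    item: str      # identifier of the specific resource instance
    delta: float   # signed change in quantity or value

class TransactionCore:
    """Logically centralized transaction database: an append-only log plus a
    current-state view computed from it."""
    def __init__(self) -> None:
        self._log: list[Transaction] = []

    def post(self, tx: Transaction) -> None:
        self._log.append(tx)  # every change of state is captured

    def current_state(self, resource: str, item: str) -> float:
        """Current level of one resource, as seen by planning and execution."""
        return sum(t.delta for t in self._log
                   if t.resource == resource and t.item == item)

core = TransactionCore()
core.post(Transaction("inventories", "steel-plate", +200))  # receipt
core.post(Transaction("inventories", "steel-plate", -50))   # issue to plant floor
print(core.current_state("inventories", "steel-plate"))     # 150
```

The log/view split mirrors the text: the transactions themselves are the primary record; the "current state" offered to personnel and software is a projection over them.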
2.2.1.
Materials Inventory
This function is made up of all information on stores of materials and allocatio
ns of materials to manufacturing and engineering activities. That includes infor
mation on materials on hand, quantities, locations, lots and ages, materials on
order and in transit, with expected delivery dates, materials in inspection and
acceptance testing, materials in preparation, and materials allocated to particu
lar manufacturing jobs or product lots (independent of whether they are in stock
). ERP systems routinely capture all of this information and all transactions on
it. It is sometimes combined with materials acquisition information in their co
mponent architecture. Execution activities supported by materials inventory incl
ude receipt of shipment, inspection and acceptance, automatic (low-water) order plac
ement, stocking, stores management, internal relocation, issuance and preparatio
n, and all other transactions on the materials inventory. Except for some stores
management functions, all of these are revenue producing. Planning activities s
upported by materials inventory include supply planning, manufacturing planning,
and manufacturing scheduling.
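The on-hand versus allocated distinction described above can be sketched as follows; the lot fields and helper are invented for illustration only:

```python
# Hypothetical sketch: materials on hand in lots, minus allocations to
# manufacturing jobs, gives the quantity still available to the planner.

from dataclasses import dataclass

@dataclass
class MaterialLot:
    material: str
    lot_id: str
    quantity: int
    location: str  # e.g. a stockroom bin

def available(lots, allocations, material):
    """Unallocated quantity of a material across all lots.
    `allocations` maps (material, job) -> allocated quantity."""
    on_hand = sum(lot.quantity for lot in lots if lot.material == material)
    allocated = sum(qty for (mat, _job), qty in allocations.items()
                    if mat == material)
    return on_hand - allocated

lots = [MaterialLot("resin", "L001", 400, "bin-7"),
        MaterialLot("resin", "L002", 250, "bin-9")]
allocations = {("resin", "job-42"): 300}
print(available(lots, allocations, "resin"))  # 350
```

A real materials inventory component would also track on-order, in-transit, and in-inspection quantities with dates, as the text notes.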
2.2.2.
Materials Acquisition
This function deals primarily with information about suppliers and materials orders. It includes all information about orders for materials, including recurring, pending, outstanding, and recently fulfilled orders, and long-term order history. Orders identify internal source and cost center, supplier, reference contract, material identification, quantity, options and specifications, pricing (fixed or variable), delivery schedule, contacts, change orders, deliveries, acceptances, rejects, delays, and other notifications. It may also include invoices and payments. This also includes special arrangements, such as consignment and shared supply schedules.
ng these orders in which time frames and these assignments are captured. Execution processes draw materials (usually tracked as lots) and use equipment and personnel to perform the work. These usages and the flow of work through the facility are captured. Finished goods leave the manufacturing domain for some set of distribution activities, and at this point the completion of the manufacturing orders is tracked. The execution processes supported by manufacturing management are the revenue-producing processes that convert materials into finished goods, but that support is limited to tracking those processes. The planning processes supported by manufacturing management are all levels of manufacturing resource planning and scheduling, except for detailed scheduling, as noted above.
2.2.5.
Process Specification Management
This function deals with the information associated with the design of the physical manufacturing processes for making specific products. As such, it is an engineering activity and, like product engineering, should be almost entirely out of the scope of core ERP systems. But because several information items produced by that engineering activity are vital to resource planning, ERP systems maintain variable amounts of process specification data. In all cases, the materials requirements for a product lot (the manufacturing bill of materials) are captured in the ERP system. And in all cases, detailed product and materials specifications, detailed equipment configurations, detailed operations procedures, handling procedures, and equipment programs are outside the core ERP information bases. These information sets may be maintained by ERP add-ons or third-party systems, but the core contains only identifiers that refer to these objects. For continuous-process and assembly-line facilities, the major process engineering task is the design of the line, and that is completely out of scope for ERP systems. What ERP systems maintain for a product mix is the line configurations (identifiers) and equipment resources involved, staffing and maintenance requirements, the set of products output, the production rates and yields, and the materials requirements in terms of identification and classification, start-up quantities, prepositioning requirements, and feed rates. For batch facilities, the major process engineering tasks are the materials selection, the routing (i.e., the sequence of work centers with particular setups), and the detailed specifications for operations within the work centers. The materials requirements, the yields, and the routings for products and product mixes are critical elements of the ERP planning information. The detailed work center operations are unimportant for planning, and all that is captured in the ERP core is the external references to them, the net staffing and time requirements, and the assigned costs of the work center usages. For job shop facilities, the major process engineering tasks are materials selection, the routing, and the detailed specifications for operations within the work centers. The materials requirements and yields for specific products are critical elements of the ERP planning information. The routing is often captured as a sequence of work center operations (unit processes), each with its own equipment, staffing and time requirements, associated detail specification identifiers, and assigned cost. The detailed unit process specifications (operator instructions, setup instructions, equipment control programs) are kept in external systems. No execution processes are directly supported by process specification management. All levels of resource planning are directly and indirectly supported by this information.
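The routing information the ERP core keeps for batch and job shop planning, a sequence of work center operations with net time requirements and assigned costs, can be sketched like this; all field names and numbers are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    """One unit process in a routing. Detailed specifications live in external
    systems; the ERP core keeps only net time requirements and assigned costs."""
    work_center: str
    setup_hours: float
    run_hours_per_unit: float
    cost_per_hour: float

def plan_lot(routing, lot_size):
    """Total processing hours and assigned cost for one lot over the routing."""
    hours = 0.0
    cost = 0.0
    for op in routing:
        op_hours = op.setup_hours + op.run_hours_per_unit * lot_size
        hours += op_hours
        cost += op_hours * op.cost_per_hour
    return hours, cost

routing = [Operation("mill", 2.0, 0.10, 50.0),
           Operation("drill", 1.0, 0.05, 80.0)]
print(plan_lot(routing, lot_size=100))  # (18.0, 1080.0)
```

This is the planner's view only: the routing gives lead time and cost per lot, while the operator instructions and equipment programs behind each operation stay outside the ERP core.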
2.2.6.
Maintenance Management
This function includes all information about the operational status and maintenance of equipment, vehicles, and facilities. Operational status refers to an availability state (in active service, ready, standby, in/awaiting maintenance, etc.), along with total time in service, time since last regular maintenance, and so on. The ERP system tracks maintenance schedules for the equipment and actual maintenance incidents, both preventive and remedial, and typically an attention list of things that may need inspection and refit. Any maintenance activities include both technical data (nature of fault, repair or change, parts installed, named services performed, etc.) and administrative data (authorization, execution team, date and time, etc.). In addition, this component tracks the schedules, labor, and work assignments of maintenance teams, external maintenance contracts and calls, and actual or assigned costs of maintenance activities. In those organizations in which machine setup or line setup is performed by a general maintenance engineering group rather than a setup team attached directly to manufacturing operations, it is common for such setups to be treated as part of the maintenance component rather than the manufacturing component. Similarly, operational aspects of major upgrades and rebuilds may be supported in the maintenance component of the ERP system. These are areas in which the behavior of ERP systems differs considerably. This component supports sourcing, manufacturing, and delivery activities indirectly. Maintenance, per se, is purely a support activity. Planning activities supported by maintenance management include all forms of capacity planning, from manufacturing order release and shipment dispatching (where immediate and expected availability of equipment are important) up to long-term capacity planning (where facility age and statistical availability are important).
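The attention-list idea above (equipment flagged by time since last regular maintenance) can be sketched as a single filter; the equipment names and interval are invented for illustration:

```python
from datetime import date, timedelta

def attention_list(equipment, today, max_interval):
    """Equipment whose time since last regular maintenance exceeds the allowed
    interval; such items may need inspection and refit.
    `equipment` maps an equipment id to its last-maintained date."""
    return [eq_id for eq_id, last_maintained in equipment.items()
            if today - last_maintained >= max_interval]

equipment = {"press-1": date(2025, 1, 10),   # 172 days ago relative to `today`
             "lathe-3": date(2025, 6, 1)}    # 30 days ago
due = attention_list(equipment, today=date(2025, 7, 1),
                     max_interval=timedelta(days=90))
print(due)  # ['press-1']
```

A real maintenance component would combine such calendar triggers with usage-based ones (total time in service) and with the incident history the text describes.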
2.2.7.
Warehousing
This function deals with the information associated with the management of finished goods and spare parts after manufacture and before final delivery to the customer. For products made to stock, this domain includes the management of multiple levels of distribution centers, including manufacturer-owned/leased centers, the manufacturer's share of concerns in customer-owned/leased centers, and contracted distribution services. The primary concerns are the management of space in the distribution centers and the management of the flow of product through the distribution centers. Thus, there are two major elements that are sometimes mixed together: the management of the distribution center resources (warehouse space, personnel, and shipping and receiving facilities) and the management of finished product over many locations, including manufacturing shipping areas, distribution centers per se, and cargo in transport. Tracking for each distribution center and product (family) includes actual demand experience, projected demand and safety stocks, units on hand, in-flow and back-ordered, and units in manufacture that are earmarked for particular distribution centers. The primary object in product distribution tracking is the shipment because that
e planning.
2.2.9.
Human Resource Management
The human resource management (HRM) component includes the management of all information about the personnel of the manufacturing enterprise: current and former employees, retirees and other pensioners, employment candidates, and possibly on-site contractors and customer representatives. For employees, the information may include personal information, employment history, organizational placement and assignments, evaluations, achievements and awards, external representation roles, education, training and skills certification, security classifications and authorizations, wage/salary and compensation packages, pension and stock plan contributions, taxes, payroll deductions, work schedule, time and attendance, leave status and history, company insurance plans, bonding, and often medical and legal data. For contract personnel, some subset of this information is maintained (according to need), along with references to the contract arrangement and actions thereunder. The HRM system is often also the repository of descriptive information about the organizational structure because it is closely related to employee titles, assignments, and supervisory relationships. The execution activities supported by the HRM system are entirely support activities. They include the regular capture of leave, time, and attendance information and the regular preparation of data sets for payroll and other compensation actions and for certain government-required reports. They also include many diverse as-needed transactions, such as hiring and separation actions of various kinds, and all changes in any of the above information for individual personnel. But the HRM system supports no revenue-producing function directly, and it often plays only a peripheral role in strategic planning.
2.2.10.
Finance Management and Accounting
This function includes the management of all information about the monies of the enterprise. The primary accounting elements are grouped under accounts payable (all financial obligations of the organization to its suppliers, contractors, and customers); accounts receivable (all financial obligations of customers, suppliers, and other debtors to this organization); and general ledger (the log of all real and apparent cash flows, including actual receipts and disbursements, internal funds transfers, and accrued changes in value). In reality, each of these is divided into multiple categories and accounts. While smaller organizations often do finance management under the heading of the general ledger, larger ones, and therefore ERP systems, usually separate the finance management concerns from general ledger transactions. They include fixed asset management (acquisition, improvement, amortization, and depreciation of plants, facilities, and major equipment); financial asset management (cash accounts, negotiable instruments, interest-bearing instruments, investments, and beneficial interests [e.g., partnerships]); and debt management (capitalization, loans and other financing, and assignments of interest [e.g., licenses, royalties]). The major enterprise execution activities supported by the finance management component are contracting, payroll, payment (of contractual obligations), invoicing, and receipt of payment. Payroll is a supporting activity, but payment and receipt are revenue producing. The primary financial planning activities supported are investment planning, debt planning, and budget and cash flow planning and analysis.
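The general-ledger portion above can be illustrated with a tiny double-entry sketch: every journal entry must balance before posting, and account balances are derived from posted entries. The account names are hypothetical:

```python
from collections import defaultdict

class GeneralLedger:
    """Minimal double-entry ledger: debits are positive, credits negative, and
    each journal entry must sum to zero before it is posted."""
    def __init__(self) -> None:
        self.balances = defaultdict(float)

    def post(self, entries):
        """entries: list of (account, signed amount) lines for one journal entry."""
        if abs(sum(amount for _, amount in entries)) > 1e-9:
            raise ValueError("unbalanced journal entry")
        for account, amount in entries:
            self.balances[account] += amount

gl = GeneralLedger()
# Supplier invoice: materials received, obligation recorded in accounts payable.
gl.post([("inventory", 1000.0), ("accounts_payable", -1000.0)])
# Payment of that obligation from cash.
gl.post([("accounts_payable", 1000.0), ("cash", -1000.0)])
print(gl.balances["accounts_payable"], gl.balances["cash"])  # 0.0 -1000.0
```

The balance check captures the "log of all real and apparent cash flows" idea in miniature; real systems add account hierarchies, periods, and accrual handling.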
2.3.
Interaction Points
It is the intent of many ERP vendors to provide the information systems support
for all the business operations of the manufacturing enterprise, and ERP systems
have gone a long way in that direction. But there are still several areas in wh
ich the manufacturing organization is likely to have specialized software with w
hich the ERP system must interface. The primary mission of the ERP system is to
provide direct support to the primary operations activities (materials acquisiti
on, manufacturing, product delivery) and to the planning and management function
s for those operations. The software environment of a large manufacturing enterp
rise includes many other systems that support nonoperations business functions.
This includes product planning and design, market planning and customer relation
s, and supply chain planning and development. Additionally, software that suppor
ts the detailed manufacturing processes and the control of equipment is so speci
alized and therefore so diverse that no ERP provider could possibly address all
customer needs in this area. On the other hand, the wealth of data managed withi
n the ERP core, as well as its logical and physical infrastructure and the deman
d for those data in many of these related processes, open the door for integrati
ng these other functions with the ERP system. This is the situation that leads t
o demands for open ERP interfaces. Moreover, as the ERP market expands to medium
-sized enterprises, the cost of the monolithic ERP system has proved too high fo
r that market, leading ERP vendors to adopt a strategy using incremental ERP com
ponents for market penetration. This strategy in turn requires that each compone
nt system exhibit some pluggable interface by which it can interact with other c
omponent systems as they are acquired. While ERP vendors have found it necessary
to document and maintain these interfaces, thus rendering them open in a limite
d sense, none of the vendors currently has an interest in interchangeable compon
ents or standard (i.e., public open) interfaces for them. But even among components there are forced ERP boundaries where the medium-sized enterprise has longer-standing enterprise support software (e.g., in human resources and finance). These also offer opportunities for standardization. At the current time, the most significant ERP boundaries at which standard interfaces might be developed are depicted in Figure 3.
2.3.1.
Contracts Management
Contractual relationships management deals with the information associated with
managing the formal and legal relationships with suppliers and customers. It con
sists of tracking the contract development actions; maintaining points of contac
t for actions under the agreements; and tracking all formal transactions against
the agreements: orders and changes, deliveries and completions, signoffs, invoices
and payments, disputes and resolutions, and so on. ERP systems rarely support t
he document management function, tracking the contract document text through sol
icitation, offer, counteroffer, negotiation, agreement, amendments, replacement,
and termination. They leave that to a legal or
338
TECHNOLOGY
2.3.4.
Product Configuration Management
A product configurator tracks the design of product options from desired features to manufacturing specifications. It captures product planning, pricing, and engineering decisions about option implementations and interoption relationships. The sales configurator component tells the sales staff what option combinations a customer can order and how to price them. The manufacturing configurator converts the option set on a customer order to a specification for bill of materials, station setup, prepositioning requirements, batching requirements, and process selections. In to-order environments, an ERP system may include a product configuration function. A product configurator captures customer-specified product options in make-to-demand, option-to-order, and engineer-to-order environments. In those environments, product configurators connect the front office with the back office. In make-to-demand and option-to-order environments, product configurators link sales with manufacturing operations. In engineer-to-order environments, product configurators are a conduit between sales and engineering.
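As a concrete illustration, the split between the sales configurator and the manufacturing configurator described above can be sketched in a few lines of Python. All option names, part numbers, prices, and rules below are invented for illustration; a commercial configurator would hold these as maintained master data, not literals.

```python
# Minimal product-configurator sketch (illustrative data, not a vendor API).
# The sales configurator validates and prices option combinations; the
# manufacturing configurator maps chosen options to bill-of-materials lines.

PRICES = {"base": 1000, "turbo": 250, "sunroof": 120, "sport_seats": 90}

# Interoption relationships: pairs of options that may not be combined.
EXCLUDES = {("turbo", "sunroof")}

# Option -> extra bill-of-materials lines (part number, quantity).
BOM_RULES = {
    "turbo": [("TRB-100", 1), ("HOSE-7", 2)],
    "sunroof": [("ROOF-3", 1)],
    "sport_seats": [("SEAT-S", 2)],
}

def validate(options):
    """Sales configurator: reject invalid option combinations."""
    for a, b in EXCLUDES:
        if a in options and b in options:
            raise ValueError(f"options {a!r} and {b!r} cannot be combined")
    return True

def price(options):
    """Sales configurator: price the configured product."""
    validate(options)
    return PRICES["base"] + sum(PRICES[o] for o in options)

def bill_of_materials(options):
    """Manufacturing configurator: expand options into BOM lines."""
    validate(options)
    lines = [("CHASSIS-1", 1)]
    for o in options:
        lines.extend(BOM_RULES[o])
    return lines

order = ["turbo", "sport_seats"]
print(price(order))               # 1000 + 250 + 90 = 1340
print(bill_of_materials(order))
```

A real configurator would of course carry far richer constraints (implications, defaults, quantity rules), but the two roles, validate-and-price versus expand-to-specification, divide along the same line.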
2.3.5.
Product Data Management
While ERP systems are the principal repository for all operations data, they con
tain only fragments of product and process engineering data. One of the reasons
for this is that the ERP core is short transactions with concise data units, whi
le engineering data management requires support for long transactions with large
data les. As ERP systems have grown over the last 10 years, PDM systems have gro
wn rapidly as the product engineering information management system, especially
in mechanical and electrical parts / product industries. The rise of collaborati
ve product and process engineering in the automotive and aircraft industries has
led to increasing capture of process engineering information in the PDM. Produc
t engineering software tools, particularly CAD systems, are used to design tooli
ng and other process-speci c appliances, and these tools often have modules for ge
nerating detailed process speci cations from the product de nitions (e.g., exploded
bills of materials, numerical control programs, photomasks). These tools expect
to use the PDM as the repository for such data. Moreover, increased use of parts
catalogs and contract engineering services has led to incorporation of a signi ca
nt amount of part sourcing information in the PDM. Many ERP vendors are now ente
ring the PDM product market, and the interface between PDM and ERP systems is be
coming critical to major manufacturers and their software providers.
2.3.6.
Supply Chain Execution
Although ERP systems may offer a one-system solution to supporting the operations of a given enterprise, one cannot expect that solution to extend beyond the walls. The information transactions supporting the materials flows from suppliers to the manufacturing enterprise and the flows of its products to its customers are becoming increasingly automated. Although basic electronic data interchange (EDI) transaction standards have been in existence for 20 years, they are not up to the task. They were intentionally made very flexible, which means that the basic structure is standard but most of the content requires specific agreements between trading partners. Moreover, they were made to support only open-order procurement and basic ordering agreements, while increased automation has changed much of the b
2.4.1.
ERP Core
The ERP core consists of one or more transaction databases as well as transactio
n services. As described in Section 2.2, these services include capturing, execu
ting, logging, retrieving, and monitoring transactions related to materials inve
ntories, facilities / equipment, labor, and money.
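The core services just listed (capturing, executing, logging, retrieving, and monitoring transactions) can be sketched for a single resource type, materials inventory. The class and method names below are illustrative only, not any vendor's actual interface.

```python
# Sketch of the ERP core's transaction services for materials inventory.
# All names are invented for illustration, not a vendor API.
import datetime

class TransactionCore:
    def __init__(self):
        self.inventory = {}   # part number -> on-hand quantity
        self.log = []         # append-only transaction log

    def execute(self, part, delta, kind):
        """Capture and execute one inventory transaction, then log it."""
        new_qty = self.inventory.get(part, 0) + delta
        if new_qty < 0:
            raise ValueError(f"insufficient stock of {part}")
        self.inventory[part] = new_qty
        self.log.append({
            "time": datetime.datetime.now(datetime.timezone.utc),
            "part": part, "delta": delta, "kind": kind,
        })

    def retrieve(self, part):
        """Retrieve all logged transactions for one part."""
        return [t for t in self.log if t["part"] == part]

    def monitor(self, part, reorder_point):
        """Monitor on-hand quantity against a reorder point."""
        return self.inventory.get(part, 0) <= reorder_point

core = TransactionCore()
core.execute("BOLT-5", +500, "receipt")
core.execute("BOLT-5", -120, "issue")
print(core.inventory["BOLT-5"])      # 380
print(core.monitor("BOLT-5", 400))   # True
```

A production core would add the same services for facilities / equipment, labor, and money, plus durable storage and concurrency control, but the capture-execute-log-retrieve-monitor cycle is the same.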
2.4.2.
Packaged Decision Support Applications
In addition to transaction management, ERP vendors provide decision support applications that offer varying degrees of function-specific data analysis. The terms decision support application and decision support system (DSS) refer to software that performs function-specific data analysis irrespective of enterprise level. That is, decision support includes applications for supply, manufacturing, and distribution planning at the execution, tactical, and strategic levels of an enterprise. There is considerable variability among ERP vendors regarding the types of decision support applications that they include as part of their standard package or as add-ons. At one end of the spectrum, some vendors provide very specific solutions to niche industries based on characteristics of the operations environment (i.e., process and business nature) as well as enterprise size in terms of revenue. For example, an ERP vendor at one end of the spectrum might focus on the assembly line / engineer-to-order environment with decision support functionality limited to manufacturing and distribution planning integrated with financials. At the other end of the spectrum, an ERP vendor could offer decision support functionality for supply, manufacturing, and distribution planning at all enterprise levels for a host of operations environments for enterprises with varying revenues. While such a vendor offers an array of decision support tools, one or more tactical-level, function-specific applications (i.e., supply, manufacturing, distribution) are typically part of a standard ERP implementation (Figure 2). The others
Figure 4 Basic Implementation Elements of ERP Systems. (The figure stacks the elements in layers: tools; packaged decision support applications and extended applications; the core ERP of databases and transaction services; the operating system; and the computer and network hardware.)
tend to be considered add-ons. While a vendor may offer these add-ons, a manufac
turer may opt to forgo the functionality they offer altogether or implement them
as extended applications, either developed in-house or procured from third-part
y software vendors.
2.4.3.
Extended Applications
The wealth of data in ERP systems allows many manufacturing enterprises to use ERP as an information backbone and attach extended applications to it. The motivation for such applications is that manufacturers typically see them as necessary to achieve differentiation from competitors. These applications may be developed in-house or by a third party. A third party may be a systems integrator who develops custom software, or it may be a vendor who develops specialized commercial software. As discussed in Section 2.3, these add-ons may provide additional functionality for customer and supplier relationship management, product data management, supply chain planning and execution, and human resource management. Regardless of the source, these applications must integrate with the ERP. Application programmer interfaces (APIs) are the common mechanism for integrating extended applications with the ERP backbone, and specifically the ERP core. Even among their partners and strategic allies (i.e., certain systems integrators and third-party software vendors), ERP vendors discourage the practice of integrating their standard applications with extended applications because of potential problems with upward compatibility. Because APIs to the ERP core have a longer expected lifetime, they are presently the most common approach to accessing information in the ERP system.
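The API-based integration style described above can be illustrated with a toy example. ErpCoreApi and OrderReleaseApp are hypothetical stand-ins, not any vendor's actual interface; the point is that the extended application touches only the published API surface, never the ERP's internal tables.

```python
# Sketch of integrating an extended application through a published API
# rather than against the ERP's internal data structures. All names are
# invented for illustration.

class ErpCoreApi:
    """Hypothetical published, versioned API surface of the ERP core."""
    def __init__(self):
        self._orders = {"SO-1001": {"part": "BOLT-5", "qty": 200, "status": "open"}}

    def get_order(self, order_id):
        # Hand out a copy so callers cannot mutate internal state directly.
        return dict(self._orders[order_id])

    def set_order_status(self, order_id, status):
        self._orders[order_id]["status"] = status

class OrderReleaseApp:
    """Hypothetical extended (third-party) application built only on the API."""
    def __init__(self, erp):
        self.erp = erp

    def release_if_small(self, order_id, max_qty):
        # The business rule lives outside the ERP; any state change
        # goes back through the published API.
        order = self.erp.get_order(order_id)
        if order["qty"] <= max_qty:
            self.erp.set_order_status(order_id, "released")
        return self.erp.get_order(order_id)["status"]

erp = ErpCoreApi()
app = OrderReleaseApp(erp)
print(app.release_if_small("SO-1001", 500))   # released
```

Because the extended application depends only on `get_order` / `set_order_status`, the vendor can change the internal order tables across releases without breaking it, which is exactly the upward-compatibility argument for the longer-lived core APIs.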
2.4.4.
Tools
Given the enormous scope of the system, ERP can be a challenge to set up and manage. As such, ERP and third-party vendors provide software tools for handling various aspects of these complex systems. These tools generally fall into two categories: application configuration tools, which support the setup and operation of the ERP system itself, and enterprise application integration (EAI) tools, which support integration of the ERP system into the enterprise software environment.
2.4.4.1. ERP Configurators In an effort to decrease the amount of time and effort required to install, operate, and maintain an ERP system, vendors may provide a variety of application configuration tools. These tools vary in complexity and sophistication, but they all perform essentially the following tasks:
the general logical architectures of ERP systems. While the notion of tiers generally implies a hierarchy, such is not the case with tiers in present-day ERP architectures: advances in distributed computing technologies enable more open communication among components. Instead, these tiers are the basic types of logical elements within an ERP implementation and thus provide a means for describing various ERP execution scenarios. Figure 6 illustrates the five tiers of ERP architectures: core / internal (often called data), application, user interface, remote application, and remote user interface. A common intraenterprise scenario involves the data, application, and user interface tiers. This scenario does not preclude application-to-application interaction. Similarly, common interenterprise scenarios include an internal user or application requesting information from an external application or an external user or application requesting information from an internal application. The Internet is the conduit through which these internal / external exchanges occur. Additionally, in many ERP systems web browsers have emerged as the platform for both local and remote user interfaces.
Figure 5 EAI Systems Architectures. (The figure shows the ERP system and its applications, an advanced planning system, a manufacturing execution system, and a product data management system, each with its own API, connected through EAI adapters to a mapping / routing engine that carries neutral EAI messages and draws on shared information models, data maps, and exchange file formats.)
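The adapter-and-mapping-engine pattern of Figure 5 can be sketched as follows. The field names (including the SAP-style MATNR / MAKTX on the ERP side) and the neutral message layout are illustrative only, not any actual EAI product's format.

```python
# Sketch of the EAI pattern in Figure 5: per-application adapters convert
# native records to a neutral "message", and a mapping/routing engine
# forwards messages to subscribing target adapters.

def erp_adapter(record):
    """Adapter on the ERP side: native record -> neutral EAI message."""
    return {"type": "part_master", "part_id": record["MATNR"], "desc": record["MAKTX"]}

def pdm_adapter(message):
    """Adapter on the PDM side: neutral EAI message -> PDM-native record."""
    return {"item_number": message["part_id"], "title": message["desc"]}

class MappingRoutingEngine:
    def __init__(self):
        self.routes = {}  # message type -> list of target adapters

    def subscribe(self, msg_type, adapter):
        self.routes.setdefault(msg_type, []).append(adapter)

    def publish(self, message):
        # Route the neutral message to every subscribed target adapter.
        return [adapter(message) for adapter in self.routes.get(message["type"], [])]

engine = MappingRoutingEngine()
engine.subscribe("part_master", pdm_adapter)

erp_record = {"MATNR": "BOLT-5", "MAKTX": "Hex bolt M5"}
delivered = engine.publish(erp_adapter(erp_record))
print(delivered)   # [{'item_number': 'BOLT-5', 'title': 'Hex bolt M5'}]
```

The design benefit is the one the figure implies: each application needs only one adapter to and from the neutral message, rather than a point-to-point mapping for every pair of applications.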
2.6.
ERP and the Internet
The emergence of the Internet as the primary conduit for exchange of ERP-managed information among trading partners has spawned the term Internet-based ERP. This term does not convey a single concept but may refer to any of the following:
• Internal user-to-ERP-system interfaces based on web browsers
• External user-to-orders interfaces based on web browsers
• Interfaces between decision support software agents of different companies that support supply chain operations
• Interfaces between decision support software agents of different companies that support joint supply chain planning
The increasingly commercial nature of the Internet and the development of communication exchange standards have had significant impact on ERP systems. In describing this impact, the term tier is used in the context of ERP architecture (Section 2.5).
plication (i.e., Tier 4 and Tier 2). The actual exchange is usually based on EDI or some XML message suites,* using file transfers, electronic mail, or some proprietary messaging technology to convey the messages. This scenario is significantly different from the above in that neither of the communicating agents is a user with a browser. Rather, this is communication between software agents (decision support modules) logging shipment packaging, release, transportation, receipt, inspection, acceptance, and possibly payment on their respective ERP systems (with some separate Tier 3 user oversight at both ends). Special cases of this include vendor-managed inventory and consignment management, which illustrate the use of the Internet in direct support of an operations process.
* E.g., RosettaNet (RosettaNet 2000), CommerceNet (CommerceNet 2000), Electronic Business XML (Electronic Business XML 2000), Open Applications Group Interface Specification (OAGIS) (Open Applications Group 2000).
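The agent-to-agent exchange described above can be illustrated with a minimal XML shipment notice. The element names are invented for illustration and follow no actual RosettaNet or OAGIS schema; a real suite defines far richer, versioned message structures.

```python
# Sketch of an agent-to-agent shipment notice exchanged between two ERP
# systems. Element names are illustrative, not a standard message schema.
import xml.etree.ElementTree as ET

def build_shipment_notice(order_id, part, qty):
    """Sending agent: serialize a shipment event as XML."""
    root = ET.Element("ShipmentNotice")
    ET.SubElement(root, "OrderId").text = order_id
    line = ET.SubElement(root, "Line")
    ET.SubElement(line, "Part").text = part
    ET.SubElement(line, "Quantity").text = str(qty)
    return ET.tostring(root, encoding="unicode")

def receive_shipment_notice(xml_text, erp_log):
    """Receiving agent: parse the message and log a receipt transaction."""
    root = ET.fromstring(xml_text)
    entry = {
        "order": root.findtext("OrderId"),
        "part": root.findtext("Line/Part"),
        "qty": int(root.findtext("Line/Quantity")),
        "event": "receipt",
    }
    erp_log.append(entry)
    return entry

log = []
msg = build_shipment_notice("SO-1001", "BOLT-5", 200)
print(receive_shipment_notice(msg, log))
```

In practice the message would travel by file transfer, electronic mail, or a messaging service, and each side's adapter would post the result to its own ERP transaction log, as in the vendor-managed-inventory case above.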
2.6.4.
Joint Supply Planning (Advanced Planning and Scheduling) Interfaces
This scenario also involves communication between a remote application and a local application (i.e., Tier 4 and Tier 2), with the exchange based on a common proprietary product or perhaps an XML message suite. Again the communicating agents are software systems, not users with browsers, but their domain of concern is advanced planning (materials resource planning, distribution resource planning, etc.) and not shipment tracking. There are very few vendors or standards activities in this area yet because it represents a major change in business process. This scenario illustrates use of the Internet in direct support of a tactical planning process.
3.
AN EXTERNAL VIEW OF ERP SYSTEMS
This section illustrates ERP systems in a larger context. Section 3.1 provides some current thinking on the apparent macroeconomic impacts of IT, with a yet-to-be-proven hypothesis specific to ERP systems. Section 3.2 describes the relationship of ERP specifically with respect to electronic commerce, supply chains, and individual manufacturing enterprises.
3.1.
ERP and the Economy
Much has been written and said about the emerging digital economy, the information economy, and the New Economy. It is the authors' view that the information economy must support and coexist with the industrial economy because certain fundamental needs of mankind are physical. However, while these economies coexist, it is clear that the sources of wealth generation have changed and will continue to change in fundamental ways. In the last 50 years, the U.S. gross domestic product (GDP), adjusted for inflation, has grown more than 500% (Figure 7). While each major industry has grown considerably during that period, they have not grown identically. Confirming that the economy is a dynamic system, the gross product by industry as a percentage of GDP (GPSHR) saw significant changes in the last half of the 20th century (Figure 8). GPSHR is an indication of an industry's contribution (or its value added) to the nation's overall wealth. While most goods-based industries appear to move towards a kind of economic equilibrium, non-goods industries have seen tremendous growth. The interesting aspect of ERP systems is that they contribute to both goods- and non-goods-based industries in significant ways. In fact, for manufacturers, ERP plays a critical role in extending the existing industrial economy to the emerging information economy. In the information economy, ERP accounts for a significant portion of business application sales, not to mention the wealth generated by third parties for procurement, implementation, integration, and consulting. While these are important, in this chapter we focus on the use of ERP in manufacturing. Therefore, the following sections describe how ERP and related information technologies appear to impact the goods-producing sectors of the current U.S. economy.
Macroeconomics, the study of the overall performance of an economy, is a continually evolving discipline. Still, while economists debate both basic and detailed macroeconomic theory, consensus exists on three major variables: output, employment, and prices (Samuelson and Nordhaus 1998). The primary metric of aggregate output is the gross domestic product (GDP), which is a composite of personal consumption expenditures, gross private domestic investment, net exports of goods and services, and government consumption expenditures and gross investment. The metric for employment is the unemployment rate. The metric for prices is inflation. While these variables are distinct, most macroeconomic theories recognize interactions among them.
It appears that the use of information technology may be changing economic theorists' understanding of the interactions among economic variables, particularly for predicting gross domestic product, unemployment, and inflation. It is important to gain a deeper understanding of these changes because their impact would affect government policy decisions, particularly those involving monetary policy and fiscal policy. In the current record domestic economic expansion, real output continues to increase at a brisk pace, unemployment remains near lows not seen since 1970, and underlying inflation trends are subdued. During this period, inflation has been routinely overpredicted while real output has been underpredicted. Conventional economic theory asserts that as real output increases and unemployment decreases, significant pressures mount and force price increases. Yet, in this economic expansion, inflation remains in check, apparently due in part to IT-enabled growth in labor productivity (Greenspan 1999). In the early 1990s, the labor productivity growth rate averaged less than 1% annually. By 1998, that rate had grown to approximately 3%. So what has happened in this decade? In the last 10 years, information technology has enabled companies, most notably manufacturers, to change the way they do business with access to better information (often in real time) and better decision-support technologies. These changes have improved the way manufacturers respond to market wants (i.e., for products) and market demands (wants for products backed by buying power). ERP systems play a significant part in satisfying the latter by enabling better planning and execution of an integrated order fulfillment process. In short, ERP software enables these improvements by providing decision makers in an enterprise with very accurate information about the current state of the enterprise.
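The expenditure-side GDP composite defined above can be written as a one-line identity. The figures below are invented round numbers (in billions), not actual U.S. data.

```python
# The GDP composite described above, as an expenditure-side identity:
# GDP = personal consumption + gross private domestic investment
#       + net exports of goods and services
#       + government consumption and gross investment.
# All figures are illustrative, in billions of dollars.

def gdp(consumption, investment, net_exports, government):
    return consumption + investment + net_exports + government

print(gdp(consumption=6200, investment=1600, net_exports=-300, government=1600))  # 9100
```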
Figure 8 Gross Product Originating by Industry Share of GDP, 1950-1997. (The chart plots percent of GDP, 1950-2000, for services; finance, insurance, etc.; communications; transportation; manufacturing; construction; mining; and agriculture, forestry, etc. Source: U.S. Department of Commerce.)
3.2.2.
Supply Chain Management
Supply chain management is one of several electronic commerce activities. Like electronic commerce, SCM has acquired buzzword status. Nonetheless, a common understanding of SCM has emerged through the work of industry groups such as the Supply Chain Council (SCC), the Council on Logistics Management (CLM), and the APICS organization, as well as academia.
SCM is the overall process of managing the flow of goods, services, and information among trading partners with the common goal of satisfying the end customer. Furthermore, it is a set of integrated business processes for planning, organizing, executing, and measuring procurement, production, and delivery activities both independently and collectively among trading partners.
It is important to note a critical distinction here, especially since that distinction is not explicit in the terminology. While SCM is often used synonymously with supply chain integration (SCI), the two terms have connotations of differing scopes. As stated previously, SCM focuses on planning and executing trading partner interactions of an operations nature, that is, the flow of goods in raw, intermediate, or finished form. SCI is broader; it includes planning and executing interactions of any kind among trading partners, and it refers in particular to the development of cooperating technologies, business processes, and organizational structures. The operations-specific objectives of SCM can only be achieved with timely and accurate information about expected and real demand as well as expected and real supply. A manufacturer must analyze information on supply and demand along with information about the state of the manufacturing enterprise. With the transaction management and basic decision support capabilities described earlier, ERP provides the manufacturer with the mechanisms to monitor the current and near-term states of its enterprise. As depicted in the synchronized, multilevel, multifacility supply chain planning hierarchy of Figure 2, ERP provides the foundation for supply chain management activities. Section 1 described the various levels of the hierarchy, and Section 2 described the details of transaction management and decision support.
4.
ERP CHALLENGES AND OPPORTUNITIES
While evidence suggests that ERP systems have brought about profound positive economic effects by eliminating inefficiencies, there is still room for substantial improvement. The technical challenges and opportunities for ERP systems arise from how well these systems satisfy the changing business requirements of the enterprises that use them. Case studies in the literature highlight the unresolved inflexibility of ERP systems (Davenport 1998; Kumar and Van Hillegersberg 2000). The monolithic nature of these systems often hampers, or even prevents, manufacturers from responding to changes in their markets. Those market changes relate to ERP systems in two important ways: the suitability of decision support applications to an enterprise's business environment and the degree of interoperability among applications both within and among enterprises. These two issues lead to three specific types of activities for improving ERP interoperability: technology development, standards development, and business / technology coordination.
4.1.
Research and Technology Development
4.1.1.
Decision Support Algorithm Development
Manufacturing decision support systems (DSSs), especially those that aid in planning and scheduling resources and assets, owe their advancement to progress in a number of information technologies, particularly computational ones. The early versions of these applications, namely materials requirements planning (MRP) and manufacturing resource planning (MRP II), assumed no limitations on materials, capacity, and the other variables that typically constrain manufacturing operations (e.g., on-time delivery, work-in-process, customer priority). The emergence of constraint-based computational models that represent real-world conditions has enabled manufacturers to better balance supply and demand. In spite of the improvement in DSSs, significant opportunities still exist. Different models apply to different business environments. With the rapid pace of change, the commercial viability of the Internet, and the push to go global, there are new variables to examine and new models to develop. Instead of convergence to any single optimization function, there will likely be specialization to classes of functions. That specialization has begun, as illustrated in Figure 2 and described in Section 3.2. Continued specialization (not simply customization) is likely and necessary. Those classes will account for the variety of ways that manufacturing enterprises choose to differentiate their operations from those of their competitors. While cost reduction has been the focus of the current generation of DSSs, models that also address revenue enhancement and response time are expected to be the focus of future generations.
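The contrast drawn above, between MRP-style planning that assumes unlimited capacity and constraint-based planning that respects it, can be sketched with a single weekly capacity constraint. All demand and capacity figures are illustrative.

```python
# Sketch: infinite-capacity (MRP-style) loading vs. a constraint-based
# finite-capacity load that never exceeds weekly capacity, slipping
# overflow work into later weeks. All numbers are illustrative.

def infinite_capacity_load(demands):
    """MRP-style: plan every order in the week it is due, capacity ignored."""
    return dict(demands)

def finite_capacity_load(demands, capacity):
    """Constraint-based: respect capacity; overflow slips to later weeks."""
    plan, carry = {}, 0
    for week in sorted(demands):
        workload = demands[week] + carry
        plan[week] = min(workload, capacity)
        carry = workload - plan[week]
    if carry:
        plan[max(demands) + 1] = carry   # work remaining past the horizon
    return plan

demands = {1: 80, 2: 140, 3: 60}        # hours of work due per week
print(infinite_capacity_load(demands))   # {1: 80, 2: 140, 3: 60}
print(finite_capacity_load(demands, capacity=100))   # {1: 80, 2: 100, 3: 100}
```

Real constraint-based planners optimize against many interacting constraints (materials, due dates, customer priority) rather than greedily slipping work forward, but the essential difference, a plan that can be executed versus one that merely restates demand, is visible even in this sketch.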
4.2.
Standards Development
Standards play an important role in achieving interoperability. With respect to ERP systems, opportunities exist for establishing standard interface specifications with other manufacturing applications.
4.2.1.
ERP-PDM Interfaces
As discussed in Section 2.3.5, there is an obvious interaction point between ERP and PDM systems. Thus, there is a need for interfaces between the two systems to share separately captured engineering and sourcing specifications. In the longer run, the goal should be to have PDM systems capture all the product and process engineering specifications and to extract resource requirements information for use in ERP-supported planning activities. Conversely, sourcing information, including contract engineering services, should be captured in the ERP system. To do this, one needs seamless interactions as seen by the engineering and operations users.
4.2.2.
ERP-MES Interfaces
As presented in Section 2.3.8, future ERP systems must expect to interface with such companion factory management systems in a significant number of customer facilities. There is the need to share resource planning information, resource status information, order / job / lot release, and status information. However, developing such interfaces is not a straightforward exercise. The separation of responsibilities and the information to be exchanged vary according to many factors at both the enterprise level and the plant level. Prestandardization work is necessary to identify and understand those factors.
4.2.3.
Supply Chain Operations Interfaces
Supply chain information flows between the ERP systems of two trading partners have been the subject of standardization activities for 20 years, with a spate of new ones created by Internet commerce opportunities. Most of these activities concentrate on basic ordering agreement and open procurement mechanisms. Requirements analysis of this information is necessary before actual standards activities commence. As the business practices for new trading partner relationships become more stable, standards for interchanges supporting those practices will also be needed. Changes in business operations practices as well as in decision support systems have changed the information that trading partners need to exchange. This includes shared auctions, supply schedules, vendor-managed inventory, and other operational arrangements, but the most significant new area is in joint supply chain planning activities (i.e., advanced planning and scheduling).
4.3.
Establishing Context, Coordination, and Coherence for Achieving Interoperability
Several developments in the past decade have combined to extend the locus of information technology from research labs to boardrooms. The commercialization of information technology, the pervasiveness of the Internet, and the relatively low barriers to market entry for new IT companies and their technologies all serve to create an environment of rapid growth and change. The ERP arena, and electronic commerce in general, suffer from a proliferation of noncooperative standards activities, each aimed at creating interoperability among a handful of manufacturers with specific software tools and business practices. There is an imperative to reduce competition among standards efforts and increase cooperation. Understanding the complex environment that surrounds ERP and other e-business and electronic commerce applications is a critical challenge to achieving interoperability. Topsight is a requirement for meeting this challenge. The objective of topsight is to establish context, coordination, and coherence among those many activities that seek standards-based interoperability among manufacturing applications. While the hurdles that exist in the current environment are considerable, there is significant need, as well as potential benefit, for an industry-led, multidisciplinary, and perhaps government-facilitated effort to provide direction for the development and promulgation of ERP and related standards.
The notion of topsight for improving interoperability among specific applications is not new. The Black Forest Group, a diverse assembly of industry leaders, launched the Workflow Management Coalition (WfMC), which has produced a suite of specifications for improving interoperability among workflow management systems. A similar ERP-focused standards strategy effort would strive to understand better the diversity of operations and operations planning in order to improve interoperability among ERP and related systems.
For a topsight effort to succeed in an arena as broad as ERP, particularly one that is standards-based, there must be a cross-representation of consumers, complementors, incumbents, and innovators (Shapiro and Varian 1999). As consumers of ERP systems, manufacturers and their trading partners face the risk of being stranded when their systems do not interoperate. The lack of interoperability in manufacturing supply chains can create significant costs (Brunnermeier and Martin 1999), and those costs tend to be hidden. More accurate cost structures must be developed for information goods, particularly for buy-configure-build software applications. Unlike off-the-shelf software applications, ERP systems are more like traditional assets, in the business sense, with capital costs and ongoing operational costs. Complementors are those who sell products or services that complement ERP systems. Given the role that ERP plays in electronic commerce, this group is very large. It includes software vendors as well as systems integrators and management consultants. Some of the software that complements ERP was discussed previously in Section 2.3. Other complements include additional categories of software necessary for achieving e-business: EDI / e-commerce, business intelligence, knowledge management, and collaboration technologies (Taylor 1999). Incumbents are the established ERP vendors, and they make up a market that is very dynamic and diverse (Table 2). Achieving consensus of any kind in such a diverse market is a considerable challenge. Achieving ERP interoperability requires, among other things, a deeper understanding of common elements. That understanding can be acquired by detailed functional and information analysis of ERP systems. The notion of innovators in the standards process focuses on those who collectively develop new technology. While many individual technology development activities associated with ERP and electronic commerce might be considered innovative, there have been few explicit collective development
iew of ERP from the outside and from the inside. The outside view clarified the connection between ERP, electronic commerce, and supply chain management. The inside view described the functional and implementation elements of ERP systems, particularly in the context of manufacturing enterprises, and identified the points at which ERP interacts with other software applications in manufacturing enterprises. Finally, we looked at open research problems surrounding ERP and identified those that are important to fitting ERP systems into current and future business processes.
Acknowledgments
This work was performed at NIST under the auspices of the Advanced Technology Program's Office of Information Technology and Applications and the Manufacturing Engineering Laboratory's Systems for Integrating Manufacturing Applications (SIMA) Program in the Manufacturing Systems Integration Division. We thank the many colleagues who shared their knowledge, experience, and patience with us: James M. Algeo, Sr., Bruce Ambler, Tom Barkmeyer, Jeff Barton, Mary Eileen Besachio, Bruce Bond, Dave Burdick, Neil Christopher, David Connelly, Maggie Davis, Paul Doremus, Chad Eschinger, Jim Fowler, Cita Furlani, Hideyoshi Hasegawa, Peter Herzum, Ric Jackson, Arpan Jani, Al Jones, Amy Knutilla, Voitek Kozaczynski, Mary Mitchell, Steve Ray, Michael Seubert, and Fred Yeadon.
Disclaimer
Commercial equipment and materials are identified in order to adequately specify certain procedures. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the materials or equipment identified are necessarily the best available for the purpose.
REFERENCES
Brunnermeier, S., and Martin, S. (1999), Interoperability Costs Analysis of the U.S. Automotive Supply Chain, Research Triangle Institute, Research Triangle Park, NC.
CommerceNet (2000), web page, http://www.commerce.net.
Davenport, T. H. (1998), "Putting the Enterprise into the Enterprise System," Harvard Business Review, Vol. 76, No. 4, pp. 121-131.
Electronic Business XML (2000), web page, http://www.ebxml.org.
Gold-Bernstein, B. (1999), "From EAI to e-AI," Applications Development Trends, Vol. 6, No. 12, pp. 49-52.
Greenspan, A. (1999), Keynote Address, "The American Economy in a World Context," in Proceedings of the 35th Annual Conference on Bank Structure and Competition (Chicago, May 6-7, 1999), Federal Reserve Bank of Chicago, Chicago.
Hagel, J., III, and Singer, M. (1999), "Unbundling the Corporation," Harvard Business Review, Vol. 77, No. 2, pp. 133-141.
Kotler, P., and Armstrong, G. (1999), Principles of Marketing, Prentice Hall, Upper Saddle River, NJ, pp. 110.
Kumar, K., and Van Hillegersberg, J. (2000), "ERP Experiences and Evolution," Communications of the ACM, Vol. 43, No. 4, pp. 23-26.
Office of Management and Budget (1988), Standard Industrial Classification Manual, Statistical Policy Division, U.S. Government Printing Office, Washington, DC.
Office of Management and Budget (1997), North American Industry Classification System (NAICS) United States, Economic Classification Policy Committee, U.S. Government Printing Office, Washington, DC.
Open Applications Group (2000), web page, http://www.openapplications.org.
RosettaNet (2000), web page, http://www.rosettanet.org.
Samuelson, P. A., and Nordhaus, W. D. (1998), Macroeconomics, Irwin / McGraw-Hill, Boston.
Shapiro, C., and Varian, H. R. (1999), Information Rules, Harvard Business School Press, Boston.
Sprott, D. (2000), "Componentizing the Enterprise Applications Packages," Communications of the ACM, Vol. 43, No. 4, pp. 63-69.
Stoughton, S. (2000), "Business-to-Business Exchanges Are the New Buzz in Internet Commerce," The Boston Globe, May 15, 2000, p. C6.
Taylor, D. (1999), Extending Enterprise Applications, Gartner Group, Stamford, CT.
Terhune, A. (1999), Electronic Commerce and Extranet Application Scenario, Gartner Group, Stamford, CT.
ADDITIONAL READING
Allen, D. S., "Where's the Productivity Growth (from the Information Technology Revolution)?" Review, Vol. 79, No. 2, 1997.
Barkmeyer, E. J., and Algeo, M. E. A., Activity Models for Manufacturing Enterprise Operations, National Institute of Standards and Technology, Gaithersburg, MD (forthcoming).
Berg, J., "The Long View of U.S. Inventory Performance," PRTM's Insight, Vol. 10, No. 2, 1998.
Bond, B., Pond, K., and Berg, T., ERP Scenario, Gartner Group, Stamford, CT, 1999.
AUTOMATION AND ROBOTICS
11.1. Automotive Assembly
11.2. Assembly of Large Steering Components
11.3. Automatic Removal of Gearboxes from Crates
11.4. Electronic Assembly
11.4.1. Assembly of an Overload Protector
11.4.2. Assembly of Measuring Instruments
11.4.3. Assembly of Luminaire Wiring
11.4.4. Assembly of Fiberoptic Connectors
11.5. Microassembly
11.6. Food Industry
11.7. Pharmaceutical and Biotechnological Industry
REFERENCE
ADDITIONAL READINGS
1. INTRODUCTION AND DEFINITION
Automation has many facets, both in the service industry and in manufacturing. In order to provide an in-depth treatment of the concepts and methods of automation, we have concentrated in this chapter on assembly and robotics. Industrially produced, finished products consist mainly of several individual parts manufactured, for the most part, at different times and in different places. Assembly tasks thus result from the requirement of putting together individual parts, dimensionless substances, and subassemblies into assemblies or final products of higher complexity in a given quantity or within a given unit of time. Assembly thus represents a cross-section of all the problems in the production engineering field, with very different activities and assembly processes being performed in the individual branches of industry. "Assembly is the process by which the various parts and subassemblies are brought together to form a complete assembly or product, using an assembly method either in a batch process or in a continuous process" (Turner 1993). VDI Guideline 2860 defines assembly as the sum of all processes needed to build together geometrically determined bodies. A dimensionless substance (e.g., slip materials, lubricants, adhesives) can be applied additionally. The production system in a company can be divided into different subsystems. The assembly area is linked to the other subsystems by the material and information flow (see Figure 1).
Assembly is concerned with bringing together individual parts, components, or dimensionless substances into complex components or final products. DIN 8593 differentiates the following five main functional sections:
1. Supply
2. Adjustment
3. Inspection
4. Assembly work
5. Help functions
The composition of these activities varies greatly, depending on the industry and product. The structure of assembly systems is based on different organization types, depending on the batch size, the product, and the projected turnover. Assembly is divided into:
Point assembly (often called site assembly)
Workshop assembly
Group assembly
Line assembly
The scope for automation increases in the order listed. This chapter provides a general overview of the assembly field. Special emphasis is given to the selection and design of the appropriate assembly approach, assembly design, and assembly techniques.
Figure 1 Assembly as a Subsystem of the Production System.
2. CLASSIFICATION OF ASSEMBLY TYPES AND CHOICE OF ASSEMBLY METHODS
Investigations into automated assembly procedures should first try to ascertain whether and to what extent automated assembly systems can be applied efficiently. This requires a very careful analysis of the assembly tasks as well as a thorough examination of possible alternatives and their profitability. The total assembly cost of a product is related to both the product design and the assembly system used for its production. Assembly costs can be reduced to a minimum by designing the product so that it can be assembled economically by the most appropriate assembly system. Assembly systems can be classified into three categories:
1. Manual assembly
2. Special-purpose assembly
3. Flex-link (programmable) assembly
In manual assembly, the control of motion and the decision-making capability of the assembly operator, assuming that the operator is well trained, are far superior to those of existing machines or artificial intelligence systems. Occasionally, it does make economic sense to provide the assembly operator with mechanical assistance, such as fixtures or a computer display detailing assembly instructions, in order to reduce the assembly time and potential errors. Special-purpose assembly systems are machines built to assemble a specific product. They consist of a transfer device with single-purpose work heads and feeders at the various workstations. The transfer device can operate on a synchronous indexing principle or on a free-transfer, nonsynchronous principle. Flex-link assembly systems, with either programmable work heads or assembly robots, allow more than one assembly operation to be performed at each workstation and provide considerable flexibility in production volume and greater adaptability to design changes and different product styles. Capital-intensive assembly, such as when automatic assembly systems are applied, produces the required low unit costs only when relatively large quantities are produced per time unit and a long
In flex-link systems, in contrast to transfer systems, flexibility is achieved in the way the individual stations of an assembly system are linked, for example, for a junction in the material flow or the evening out of capacity fluctuations and/or technical malfunctions of the individual assembly stations. Flex-link systems also allow a different level of automation at each of the assembly stations. Depending on the respective assembly tasks, the automation level can be stipulated according to technical and economic factors. In addition, step-by-step automation of the assembly system is possible because individual manual assembly equipment can still be replaced by automatic assembly stations later on. Some features of flex-link assembly stations are:
Independent handling and manufacturing equipment that is linked to the interfaces for control purposes
Disengagement of the processing and handling cycles, which allows different cycle times at each of the assembly stations
Magazines as malfunction buffers between the assembly stations for bridging short stoppage times
Freedom of choice in the placement and sequence of the assembly stations
For longitudinal transfer systems, the transfer movement usually results from the friction between the workpiece carrier and the transfer equipment (belt, conveyor, etc.). The workpiece carriers transport the workpieces through the assembly system and also lift the workpieces in the assembly stations. The workpiece carrier is isolated and stopped in the assembly station in order to carry out the assembly process. The coding of the workpiece is executed by mechanical or electronic write/read units on which up-to-date information about the assembly status or manufacturing conditions can be stored. Due to flex-linking, the number of linked assembly stations can be very high without impairing the availability of the whole system. Flexible modular assembly systems with well over 80 assembly stations for automated assembly of complex products are therefore not uncommon.
3. DESIGN OF MANUAL ASSEMBLY SYSTEMS
Manual assembly systems are divided into two main categories: manual single-station assembly and assembly lines. The manual single-station assembly method consists of a single workplace in which the assembly work is executed on the product or some major subassembly of the product. This method is generally used for a product that is complex or bulky, and its choice depends on the size of the product and the required production rate. Custom-engineered products such as machine tools, industrial equipment, and prototype models of large, complex consumer products make use of a single manual station to perform the assembly work on the product. Manual assembly lines consist of multiple workstations in which the assembly work is executed as the product (or subassembly) is passed from station to station along the line (see Figure 3). At each workstation, one or more human operators perform a portion of the total assembly work on the product by adding one or more components to the existing subassembly. When the product comes off the final station, the work has been completed. Manual assembly lines are used in high-production situations where the sequence to be performed can be divided into small tasks and the tasks assigned to the workstations on the line. One of the key advantages of using
Figure 4 Flexible, Modular Assembly System. (Source: teamtechnik)
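The division of line work into station tasks described in Section 3 can be sketched with the classic largest-candidate-rule heuristic. A minimal illustration (task names and times are hypothetical, and precedence constraints, which a real line balance must respect, are ignored here):

```python
# Largest-candidate-rule line balancing: assign tasks (sorted by time,
# descending) to stations without exceeding the cycle time.
def balance_line(task_times, cycle_time):
    tasks = sorted(task_times.items(), key=lambda kv: -kv[1])
    stations, unassigned = [], dict(tasks)
    while unassigned:
        load, station = 0.0, []
        for name, t in tasks:
            if name in unassigned and load + t <= cycle_time:
                station.append(name)
                load += t
                del unassigned[name]
        stations.append((station, load))
    return stations

# Hypothetical task times (seconds) for a small subassembly
times = {"place base": 10, "insert bearing": 8, "press shaft": 7,
         "fit cover": 5, "drive screws": 12, "inspect": 4}
for i, (tasks_at, load) in enumerate(balance_line(times, 15), 1):
    print(f"station {i}: {tasks_at} load={load}s")
```

With a 15-second cycle time, the heuristic packs the six tasks into four stations; a smaller cycle time yields more stations and a higher line rate.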
tions allows several workpiece carriers to stack up. The distance between the individual assembly stations is not determined by the transfer device but by an optimal buffer capacity. Flexible assembly systems with industrial robots can be divided into three principal basic types with specific system structures:
1. Flexible assembly lines
2. Flexible assembly cells
3. Flex-link assembly systems
The largest number of industrial assembly robots is used in flexible assembly lines (see Figure 5). Flexible assembly lines are more or less comparable to flexible, modular assembly systems with regard to construction, features, and application. The cycle times for flexible assembly lines generally vary between 15 and 30 sec. The main area of application for these systems is the automated assembly of products with annual units of between 300,000 and 1 million. The application of flexible assembly lines is economically viable especially for assembling products with several variants and/or short product lives, because the flexibility of assembly robots can
Figure 5 Flexible Assembly Line.
of many more parts with annual units below 500,000. In order to assemble these products automatically, it is possible to distribute the assembly procedures among several flex-link assembly systems. There are two ways of linking flexible assembly systems: permanent linking sequences and flex-link sequences. Linking with a permanent linking sequence means that the assembly systems are linked to one another in a set sequence, as with longitudinal transfer systems (see Figure 7). The permanent linking sequence allows these assembly systems to execute only a small number of assembly tasks with different assembly sequences. They are therefore suitable for the automated assembly of variants and types with comparable assembly sequences and of several similar products or components with comparable assembly sequences. Particularly for small annual workpiece quantities, it is economically necessary to assemble several different products or components in one flexible automated assembly system in order to maintain a high level of system utilization. Because sometimes very different assembly processes must be realized, a linking structure (flex-link sequence) that is independent of particular assembly processes is necessary. Assembly systems of this type are able to meet the increasing demands for more flexibility. In flex-link assembly systems, the workpieces are usually transported by workpiece carriers equipped with workpiece-specific lifting and holding devices. The transfer movement usually results from the friction between the workpiece carrier and the transfer equipment (belt, conveyor, plastic link chain, etc.). The transfer device moves continuously; however, the workpiece carriers are stopped during the execution of the assembly processes. In the assembly station, each of the workpiece carriers is stopped and indexed. If high assembly forces are required, the workpiece carriers must be lifted from the transfer device. The coding of the workpiece carrier is executed by means of mechanical or electronic write/read units. In this way, it is possible to record up-to-date information about the assembly status or manufacturing procedure on every workpiece carrier and to process this information through the assembly control system. Figure 8 shows a workpiece carrier with a coding device.
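The status information recorded on a carrier's write/read unit can be pictured as a small record that each station updates. A minimal sketch (the field names are hypothetical; real carrier tags and their layouts are vendor-specific):

```python
from dataclasses import dataclass, field

# Hypothetical carrier tag: each station appends its result so that
# downstream stations and the assembly control system can branch on it.
@dataclass
class CarrierTag:
    carrier_id: int
    product_variant: str
    completed_steps: list = field(default_factory=list)
    defective: bool = False

    def record(self, station, ok=True):
        self.completed_steps.append(station)
        if not ok:
            self.defective = True  # route carrier to rework, not the next station

tag = CarrierTag(carrier_id=17, product_variant="B")
tag.record("press bearing")
tag.record("fit cover", ok=False)
print(tag.defective, tag.completed_steps)
```

The control system reads the tag at each station, so a carrier marked defective can bypass the remaining assembly stations.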
5. PERFORMANCE EVALUATION AND ECONOMIC JUSTIFICATION IN SELECTING THE ASSEMBLY SYSTEM
Various methods have been developed and proposed for performance evaluation and system selection. Simplified mathematical models have been developed to describe economic performance, and the effects of several important variables have been analyzed:
1. Parts quality (PQ): the average ratio of defective to acceptable parts
2. Number of parts in each assembly (NA)
3. Annual production volume per shift (VS)
Figure 7 Flex-link Assembly System.
CA = (TP / NA)[WT + CE / (SH · QE)]   (2)

where TP is the average production time per assembly for the fully loaded system, WT is the total rate for all operators, CE is the total cost of the assembly equipment, SH is the number of shifts, and QE is the equipment payback factor used for comparing the systems. This cost per part will be nondimensionalized by dividing it by the rate WA for one assembly operator times the average assembly time per part, TA. Thus, the dimensionless assembly cost per part (CD) is given by:

CD = CA / (WA · TA)   (3)

Substituting Eq. (2) into Eq. (3) gives

CD = (TP / TA)[WR + CE / (NA · WA · SH · QE)]   (4)

where

WR = WT / (WA · NA)   (5)

and is the ratio of the cost of all operators compared with the cost of one manual assembly operator, expressed per part in the assembly. The dimensionless assembly cost per part for an assembly operator working without any equipment will be one unit, which forms a useful basis for comparison purposes. For a particular assembly system, Eq. (4) holds true only if the required average production time (TQ) for one assembly is greater than or equal to the minimum production time (TP) obtainable for the system. This means that if TP < TQ (because the system is not fully utilized), then TQ must be substituted for TP.
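The dimensionless cost is easy to evaluate numerically. A minimal sketch of Eqs. (2)-(5) and the TQ-substitution rule (the parameter values are hypothetical, chosen only for illustration):

```python
def dimensionless_assembly_cost(TP, TA, NA, WT, WA, CE, SH, QE, TQ=None):
    """Eq. (4): CD = (TP/TA) * (WR + CE/(NA*WA*SH*QE)), with WR from Eq. (5).

    If the required production time TQ exceeds the system minimum TP
    (system not fully utilized), TQ replaces TP.
    """
    if TQ is not None and TP < TQ:
        TP = TQ
    WR = WT / (WA * NA)          # Eq. (5): operator cost ratio per part
    return (TP / TA) * (WR + CE / (NA * WA * SH * QE))

# One manual operator, no equipment: with TP = TA * NA (one operator
# assembling NA parts), WT = WA and CE = 0, CD should be exactly one unit,
# reproducing the manual-assembly baseline described in the text.
cd_manual = dimensionless_assembly_cost(TP=10.0, TA=1.0, NA=10,
                                        WT=30.0, WA=30.0, CE=0.0, SH=1, QE=1)
print(cd_manual)  # 1.0
```

Capital-intensive systems lower TP but add the CE term, so CD drops below one only at sufficiently high utilization, which is the comparison the model is built for.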
6. ASSEMBLY: SCOPE FOR RATIONALIZATION
The early history of assembly process development is closely related to the history of the development of mass production methods. Thus, the pioneers of mass production are also considered the pioneers of modern assembly. Their ideas and concepts significantly improved the manual and automated assembly methods employed in large-volume production. In the past decade, efforts have been directed at reducing assembly costs by the application of flexible automation and modern techniques. In the course of the development of production engineering, mechanization and automation have reached a high level in the fields of parts production, which permits the efficient production of individual parts with a relatively low proportion of labor costs. In the field of assembly, by contrast, automation remains limited to large-volume production. In medium and short-run production, rationalization measures have been taken mainly in the area of work structuring and workstation design. Automation measures have scarcely begun in assembly for the following reasons:
1. Assembly is characterized by product-specific, and thus quite variable, work content (handling, joining, adjusting, and testing activities). Once solutions have been found, however, they can be applied to other products or companies only with great difficulty, in contrast to parts manufacturing.
2. Assembly, as the final production stage, must cope extensively with continuously shifting market requirements in regard to timing, batch sizes, derivatives, and product structure.
Although the automation trend in the assembly field has been growing remarkably over the past years, many industrial sectors still have relatively unexploited potential for rationalization. Compared to other manufacturing branches, such as parts manufacturing, assembly is characterized by a relatively low degree of automation. In the automotive industry, for example, automation accounts for 60-90% of the manufacturing sequences in parts manufacturing, spot welding, and the press shop. In assembly, however, this percentage decreases dramatically to less than 15% because of the complexity of the assembly tasks. Vehicle assembly as a whole (inclusive of final assembly) is quite cost-intensive in terms of the employed labor force, representing about 30% of the whole vehicle cost of production. Because of the high assembly costs, the automation of further subassembly tasks is therefore considered a major objective toward which the most effort is being directed. Over the past years, automation technologies have been evolving dramatically in terms
Figure 11 Preconditions to the Realization of the Assembly Scope for Rationalization (occurrence in %): modular assembly systems, 95.65; assembly-oriented product design, 92.00; computer-aided control techniques, 90.91; user-friendly operation, 65.00; improved process security, 58.82; enhanced sensor integration, 55.00.
strive for the required flexibility. The investment risk decreases in proportion to the high recoverability of most components, because the volume of investment would be well paid off in the case of a product change if a large portion of the assembly components could be reused. New requirements for the control technique are imposed by modularization, such as modular configuration and standardized interfaces. Conventional SPS (PLC) control systems are largely being superseded by industrial PCs. The result is a decentralization of intelligence achieved by integration, for example, its transfer to the single components of the assembly systems. Assembly systems can thus be realized more quickly and cost-effectively. Further possibilities for rationalization could be exploited by simplifying the operation as well as the programming of assembly systems, which are still characterized by a lack of user-friendliness. The service quality and operation of assembly systems could be greatly improved by the use of graphically assisted control panels allowing for better progress monitoring. Industrial robots are being used more and more in flexible automation for carrying out assembly sequences. Flexibility is guaranteed by the free programmability of the device as well as by the ever-increasing integration of intelligent sensory mechanisms into the systems. The application of image-processing systems directly coupled with robot control is expected to grow faster and faster in the next years. The increasing implementation of sensor technologies is due to the rapid development of computer hardware, allowing for better performance at lower prices. Furthermore, the degree of automation in assembly can be enhanced by improving the logistics and material flow around the workplaces as well as by reducing secondary assembly times. Logistical aspects can be improved by optimizing the layout of workplaces, paying special regard to the arm sweep spaces. Specific devices for local parts supply can help to reduce secondary assembly times consistently. They should be specially designed for feeding marshalled components, correctly oriented, as close as possible to the pick-up station to enable workers to grasp them blindly and quickly. Rigid workplace limitations can be relaxed by introducing the compact layout (see Figure 12), allowing workers to walk from one workstation to the next and carry out different assembly tasks corresponding to overlapping sequences. This allows the number of workers along the same assembly line to be varied flexibly (according to the task to be performed), even though the whole assembly sequence is still divided proportionately among the workers employed. As for the organizational and structural aspects of assembly, recent research approaches and developments have revealed new prospects for better hand-in-hand cooperation between workers and robots along assembly lines or at workstations. Industrial robots, being intended to help workers interactively in carrying out assembly tasks, need not be isolated or locked away any longer. For this purpose, man-machine interfaces should be improved and safety devices redeveloped. Furthermore, robots should learn to cope with partially undefined external conditions. This is still a vision at present, but it shows future opportunities for realizing the unexploited potential for rationalization in the assembly field.
Figure 13 Measures for Easy-to-Assemble Product Design.
Assembly rules for the design of parts:
1. Avoid projections, holes, or slots that will cause tangling with identical parts when placed in bulk in the feeder. This may be achieved by arranging that the holes or slots are smaller than the projections.
2. Attempt to make the parts symmetrical to avoid the need for extra orienting devices and the corresponding loss in feeder efficiency.
3. If symmetry cannot be achieved, exaggerate asymmetrical features to facilitate orienting or, alternatively, provide corresponding asymmetrical features that can be used to orient the parts.
In addition to the assembly design rules, a variety of methods have been developed to analyze component tolerances for assembly and to design for assembly with particular assembly equipment.
7.1.2. Assemblability Evaluation
The suitability of a product design for assembly influences its assembly cost and quality. Generally, product engineers attempt to reduce the assembly process cost based on plans drawn by designers. The recent trend has been to consider product assemblability from the early design phase in order to respond to the need for reductions in time and production costs. The method of assemblability evaluation is applied by product designers for quantitatively estimating the degree of assembly difficulty as part of the whole product design process. In 1975, the Hitachi Corporation developed the pioneering method of assemblability evaluation, called the Assemblability Evaluation Method (AEM). AEM analyzes the assembly structure using 17 symbols, thus aiming to give designers and production engineers an idea of how easily products can be assembled (see Figure 14). It points out weak areas of design from the assemblability viewpoint. The basic ideas of the AEM are:
1. Qualification of the difficulty of assembly by means of a 100-point system of evaluation indexes
2. Easy analysis and easy calculation, making it possible for designers to evaluate the assemblability of the product in the early stage of design
3. Correlation of assemblability evaluation indexes to assembly cost
AEM focuses on the fundamental design phase. The approach is to limit the number of evaluation items so that designers can apply them in practice. The reason for concentrating on the early design phase was to achieve greater savings by considering assemblability issues as priorities from the very beginning.
1. Display the product structure in a "Structure Chart."
2. Describe the parts in detail by answering the "DFA Questions."
3. Present the results.
Figure 15 Analysis of the Assembly Tasks According to the Boothroyd-Dewhurst Method.
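In the published Boothroyd-Dewhurst DFA method (the details are not given in this section), the analysis yields a design-efficiency index computed as 3 · Nmin / TM, where Nmin is the theoretical minimum part count, TM the estimated total assembly time, and 3 sec the assumed ideal time per necessary part. A minimal sketch of that calculation (the part data are hypothetical):

```python
# Boothroyd-Dewhurst DFA index: E = 3 * Nmin / TM, where 3 s is the
# assumed ideal handling-and-insertion time per theoretically necessary part.
def dfa_index(parts):
    n_min = sum(1 for p in parts if p["necessary"])
    t_total = sum(p["time_s"] for p in parts)
    return 3.0 * n_min / t_total

# Hypothetical parts list: separate fasteners are rarely "necessary"
parts = [
    {"name": "housing",  "time_s": 4.0,  "necessary": True},
    {"name": "spindle",  "time_s": 6.5,  "necessary": True},
    {"name": "cover",    "time_s": 5.0,  "necessary": True},
    {"name": "screw x4", "time_s": 14.5, "necessary": False},
]
print(round(dfa_index(parts), 2))  # 0.3
```

A low index flags designs whose assembly time is dominated by theoretically unnecessary parts (here the screws), pointing the designer toward part consolidation.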
Figure 17 The Change in Product Development Times (in years, 1980 vs. 1990) When Using Simultaneous Engineering, for a Volvo truck, a Honda car, a Brother printer, a Xerox copy machine, an HP printer, an Apple computer, an AT&T telephone, and a Sony television. (Source: Prof. Dr. Gerpott)
7.3.2. Rivets
Like screwing, riveting is one of the classic connecting techniques. In recent times, rivets have in fact been gaining in popularity again through the development of new types and manipulation tools and the increasing replacement of steel and iron by other materials. The greatest market potential among the different types of rivets is forecast for blind rivets. Access to the workpieces being joined is required from only one side, thereby offering ideal conditions for automation. A blind riveter for industrial robots has been developed based on a standard tool that, apart from its size and weight, is also notable for its enhanced flexibility. With the aid of a changeover device, for example, different rivet diameters and types of blind rivet can be handled.
7.3.3. Self-Pierce Riveting
In contrast to traditional riveting, self-pierce riveting does not require pilot drilling of the steel plate at the joint. The most frequent technique is self-pierce riveting with semitubular rivets, characterized by the rivet punching out only the punch side of the sheet. Joining simply results from noncutting shaping of the die side and widening of the rivet base on the sheet die side. The necessary dies are to be matched with the rivet length and diameter, the workpiece thickness, and the number of interlayer connections, thickness variants, and materials. The workpiece punch side is flat when countersunk punching-head rivets are used, whereas the die side is normally uneven after die sinking. Punching-rivet connections with semitubular rivets are gastight and waterproof. Punching rivets are mainly cold-extruded pieces of tempering steel, available in different degrees of hardness and provided with different surfaces. Ideal materials for application are therefore light metals and plastics or combinations of both. Punching-rivet connections with semitubular rivets show far better tensile properties under strain than comparable spot-welding connections.
7.3.4. Press-Fitting
Press-fitting as a joining process does not require any additional elements, making it particularly desirable for automation. At the same time, however, high joining and bearing pressures occur that may not be transferred to the handling appliance. A press-fitting tool was therefore developed with allowance for this problem. It confines the bearing pressures within the tool, with very little transfer to the handling appliance. Various process parameters, such as joining force, joining path, and joining speed, can be freely programmed so as to increase the flexibility of the tool. A tolerance-compensation system is also integrated in the tool that compensates for any positioning errors (axis and angle shift). For the supply of the connecting elements, the tool also has interfaces to interchangeable part-specific magazines (revolving and cartridge magazines) and the possibility of supply via formed hoses.
8.2. Classification and Types of Robots
Each of the industrial robot elements is linked to the others by linear guides and revolute joints in a kinematic chain, and the actuation of these elements forms the axes of the robot. Kinematics is the spatial allocation of the movement axes in order of sequence and construction. Figure 19 illustrates the mechanical configurations of different types of robots. The main axes help to position the final effector (tool or workpiece) spatially. The hand or adjacent axes are primarily responsible for the orientation of the tool and are therefore usually made of a series of revolute joints. The following information may be useful for the selection of the industrial robot best suited to the respective application:
The load-bearing capacity is the largest weight that has to be handled at the specified motion speed in relation to the robot flange. If the work speed or the reach is decreased, a larger weight can be moved.
The mechanical structure refers to the kinematic chain as an ordered sequence and the type of motion axis on the robot.
The number of axes is the sum of all the possible movements by the system. The higher the degree of freedom, the lower the system accuracy and the higher the costs. Therefore, it is advisable to limit the number of axes to the amount required.
The work space is calculated from the structure of the kinematics and its dimensions. Its shape is also dependent on the movement areas of the individual axes and actuators.
The positional accuracy determines the deviation during the run-up to freely selected positions and orientations. The repetitive accuracy is the difference when the run-up to a spatial point is repeated.
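The selection criteria listed above lend themselves to a simple first-pass filter over a catalog of candidate robots. A minimal sketch (the catalog entries and field names are hypothetical):

```python
# First-pass robot selection: keep candidates meeting payload, reach, and
# repeatability requirements, then prefer the fewest axes (cost and accuracy).
def shortlist(catalog, payload_kg, reach_mm, repeatability_mm):
    ok = [r for r in catalog
          if r["payload_kg"] >= payload_kg
          and r["reach_mm"] >= reach_mm
          and r["repeatability_mm"] <= repeatability_mm]
    return sorted(ok, key=lambda r: r["axes"])

catalog = [  # hypothetical data
    {"name": "cartesian-A", "axes": 3, "payload_kg": 20, "reach_mm": 2000, "repeatability_mm": 0.05},
    {"name": "scara-B",     "axes": 4, "payload_kg": 5,  "reach_mm": 600,  "repeatability_mm": 0.02},
    {"name": "artic-C",     "axes": 6, "payload_kg": 10, "reach_mm": 1300, "repeatability_mm": 0.06},
]
best = shortlist(catalog, payload_kg=4, reach_mm=500, repeatability_mm=0.05)
print([r["name"] for r in best])  # ['cartesian-A', 'scara-B']
```

Ranking by axis count reflects the guideline above that extra axes lower system accuracy and raise cost; a real selection would of course also weigh work-space shape and cycle time.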
Within the wide range of constructional types and variants, four versions have proved to be particularly suitable in practical operation:
1. Cartesian robots
2. Articulated robots
3. SCARA robots
4. Parallel robots
An extremely rigid structure is the first characteristic of Cartesian robots, whose single axes move only in the direction of the Cartesian space coordinates. For this reason, Cartesian robots are particularly suitable for operating in large-sized working areas. Major applications include workpiece palletizing and commissioning. Standard articulated robots consist of six axes. They are available on the market in a large variety of types and variants. Characterized by a cylindrical working area occupying a relatively small volume, articulated robots easily allow failures to be repaired directly. Articulated robots are used primarily for spot welding, material handling, and painting, as well as machining. SCARA robots have a maximum of four degrees of freedom but can also be equipped with just three for very simple tasks. These robots are used for assembly tasks of all types, for simple loading and unloading tasks, and for fitting electronic components to printed circuit boards. The six linear driving axes of parallel robots are aligned between the base plate and the robot gripper so as to act in parallel. Therefore, positioning accuracy and a high degree of rigidity are among the characteristics of parallel robots. The working area is relatively small. Parallel robots are particularly suitable for tasks requiring high accuracy and high-range forces, such as workpiece machining.
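The positioning behavior of such kinematic chains can be illustrated with the planar part of a SCARA arm: two revolute joints whose link lengths and joint angles determine the tool position. A minimal forward-kinematics sketch (the link lengths are hypothetical):

```python
import math

# Planar forward kinematics of the two revolute SCARA axes:
# x = l1*cos(t1) + l2*cos(t1 + t2), y = l1*sin(t1) + l2*sin(t1 + t2)
def scara_xy(l1, l2, t1, t2):
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

# Fully stretched arm (both angles zero) reaches l1 + l2 along x
print(scara_xy(0.4, 0.3, 0.0, 0.0))                    # ≈ (0.7, 0.0)
print(scara_xy(0.4, 0.3, math.pi / 2, -math.pi / 2))   # ≈ (0.3, 0.4)
```

The vertical stroke and the wrist rotation of a real SCARA add two more coordinates, but they do not change this planar positioning relation.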
8.3. Major Robot Components
8.3.1. Power Supply
Three main power sources are used for industrial robot systems: pneumatic, hydraulic, and electric. Some robots are powered by a combination of electric power and one other power source. Pneumatic power is inexpensive but is used mostly for simpler robots because of its inherent problems, such as noise, leakage, and compressibility. Hydraulic power is also noisy and subject to leakage but is relatively common in industry because of its high torque and power and its excellent ability to respond swiftly to motion commands. It is particularly suited for large and heavy part or tool handling, such as in welding and material handling, and for smooth, complex trajectories, such as in painting and finishing. Electric power provides the cleanest and quietest actuation and is preferred because it is self-contained. On the other hand, it may present electrical hazards in highly flammable or explosive environments.
376
TECHNOLOGY
The typical sequence carried out by the electrical drive of a robot axis is shown in Figure 20. The reference variables, calculated in interpolation cycles by the motion-controlling device, are transmitted to the axis controller unit. The servodrive converts the electrical command values into torques that are transmitted to the robot axis through the linkage. The internal control circuit is closed by a speed sensor, the external one by a path-measuring system.
8.3.2.
Measuring Equipment
Measuring devices that provide the axis controller unit with suitable input quantities are required for robot position and speed control. Speed is measured either directly by tachometer generators or indirectly from the differential signals of angular position measuring systems. Optoelectronic incremental transducers, optoelectronic absolute transducers, and resolvers are among the most common devices for angular measurement in robots. For the path measurement of linear motions, the measuring techniques used by incremental and absolute transducers are suitable as well.
8.3.3.
Control System
The major task of the robot control system is to pilot one or more handling devices according to the technologically conditioned handling or machining task. Motion sequences and operations are fixed by a user program and are carried out by the control system unit. The necessary process data are provided by sensors, which make it possible for the unit to adapt, to a certain extent, the preset sequences, motions, and operations to changing or unknown environmental conditions. Figure 21 shows the components of a robot controller system. Data exchange with higher-level control units is handled by a communication module, for example for loading the user program into the robot control unit or exchanging status data. The process control device organizes the operational sequence of the user program, which provides instructions for the motion, gripper, sensors, and program flow. The robot axes, as well as the auxiliary axes involved in the task to be performed, are driven by the motion-controlling device. There are three kinds of motion control:
1. Point-to-point positioning control
2. Multipoint control
3. Continuous path control
The axis controller carries out the task of driving the robot axes according to the reference variables. The sequential progression from one axis position to the next is monitored and readjusted by comparison with the actual positions. A hand-held device (teach pendant) allows the robot to be operated and programmed manually. The operator can easily access all control functions in order to run the robot in manual mode or determine the program flow. The control system also determines two major performance measures of the robot: its accuracy and repeatability. The first indicates the precision with which the robot can reach a programmed position.
Figure 20 Electrical Drives of Robot Axis.
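The two performance measures can be made concrete with a small calculation: accuracy as the distance between the mean attained position and the programmed target, and repeatability as the radius of the scatter of repeated moves around that mean. The following is a minimal sketch with invented measurement values, not data from the text:

```python
import math

def accuracy_and_repeatability(target, attained):
    """Estimate accuracy and repeatability from repeated moves to one target.

    accuracy      : distance from the mean attained position to the target
    repeatability : maximum distance of any attained position from the mean
    """
    n = len(attained)
    mean = tuple(sum(p[i] for p in attained) / n for i in range(3))
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    accuracy = dist(mean, target)
    repeatability = max(dist(p, mean) for p in attained)
    return accuracy, repeatability

# Hypothetical measurements (mm) of five moves to the target (100, 200, 300)
target = (100.0, 200.0, 300.0)
attained = [(100.12, 199.95, 300.08),
            (100.10, 199.97, 300.05),
            (100.14, 199.94, 300.07),
            (100.11, 199.96, 300.06),
            (100.13, 199.95, 300.09)]
acc, rep = accuracy_and_repeatability(target, attained)
print(round(acc, 3), round(rep, 3))
```

Note how a robot can be very repeatable (small scatter) while still inaccurate (systematic offset from the target), which is why the two measures are specified separately.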
A programming method is the planned procedure that is carried out in order to create programs. According to IRDATA, a program is defined as a sequence of instructions aimed at fulfilling a set of manufacturing tasks. Programming systems allow programs to be compiled and also provide the respective programming assistance (see Figure 22). If direct methods (online programming methods) are used, programs are created by using the robot itself. The most common method is teach-in programming, in which the motion information is set by moving to and accepting the required spatial points, assisted by the teach pendant. However, the robot cannot be used for production during programming. One of the main features of indirect methods (offline programming methods) is that the programs are created on external computers, independent of the robot control system. The programs are generated in an offline programming system and then transferred into the robot's control system. The key advantage of this method is that the stoppage times for the robot systems can be reduced to a minimum by configuring the programs in advance. Hybrid methods are a combination of direct and indirect programming methods. The program sequence is stipulated by indirect methods. The motion part of the program can be defined by teach-in or play-back methods or by sensor guidance.
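The teach-in idea can be pictured as a small data structure: poses are recorded one by one with the pendant and later replayed in the stored order. This is a deliberately simplified toy model; the class and method names are hypothetical and do not correspond to any real robot controller API:

```python
class TeachProgram:
    """Toy model of teach-in programming: record poses, then replay them."""

    def __init__(self):
        self.points = []          # taught spatial points, in teaching order

    def teach(self, name, pose):
        """Move the robot by hand to 'pose' and accept it under 'name'."""
        self.points.append((name, pose))

    def replay(self):
        """Execute the program: visit the taught points in the stored order."""
        return [name for name, pose in self.points]

prog = TeachProgram()
prog.teach("approach", (350.0, 120.0, 200.0))   # Cartesian pose, mm
prog.teach("grip",     (350.0, 120.0,  80.0))
prog.teach("depart",   (350.0, 120.0, 200.0))
print(prog.replay())
```

An offline programming system would build the same point list on an external computer from a CAD model instead of from hand-guided motions, which is exactly why the robot stays in production while the program is written.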
8.4.2.
Robot Simulation
Computer-assisted simulation for the construction and planning of robot systems is becoming more and more popular. Simulation allows examination and testing on the model, without risk, of new experimental robot concepts, alternative installation layouts, or modified process times within a robot system. Simulation is the imitation of a system with all its dynamic processes in an experimental model, which is used to establish new insights and to ascertain whether these insights can be transferred to real situations (VDI Guideline 3633). The starting point for the development of graphic 3D simulation systems was the problem of planning the use of robots and offline programming. Independent modules or simulation modules integrated into CAD systems were created. To enable a simulation to be conducted, the planned or real robot system must first be generated and depicted as a model in the computer. The abstraction level of the simulation model must be adjusted to the required imitation: as detailed as necessary, as abstract as possible. Once the model has been completed, any number of simulations can be carried out and modifications made. The objective is to improve the processes and eliminate the possibility of planning mistakes. Figure 23 shows an example of the visualization of a robot welding station. Simulation is now considered one of the key technologies. Modern simulation systems such as virtual reality are real-time oriented and allow interaction with the operator. The operator can change the visual point of view in the graphical 3D picture by means of input devices (e.g., data gloves) and is thus able to look around in the virtual world. The virtual objects can be manipulated interactively so that modifications can be executed more quickly, safely, and comfortably.
Figure 22 Overview of Programming Methods for Industrial Robots.
8.5.1.
Courier and Transportation Robots
The HelpMate, introduced in 1993, is a mobile robot for courier services in hosp
itals. It transports meals, pharmaceuticals, documents, and so on, along normal
corridors on demand (see Figure 24). Clear and simple user interfaces, robust ro
bot navigation, and the ability to open doors or operate
Figure 24 Mobile Robot for Courier Services in Hospitals. (Source: PYXIS)
elevators by remote control make this a pioneering system in terms of both technology and user benefit.
8.5.2.
Cleaning Robots
A service robot climbs surfaces on suction cups for cleaning (see Figure 25), in
spection, painting and assembly tasks. Tools can be mounted on the upper transve
rsal axis. Navigation facilities allow accurate and controlled movement.
Figure 25 Cleaning Robot. (Source: IPA)
Figure 27 Operation Robot (Model number Vision OP 2015).
disoriented workpieces during the transfer movement by turning, twisting, tilting, or setting the workpieces upright and/or throwing wrongly oriented workpieces back into the bin. However, mechanical baffles are set up for only one type of workpiece, and much retooling is required if the workpiece is changed. These pieces of equipment are therefore suitable only for large-series assembly. To increase flexibility, mechanical baffles are being replaced more and more with optical parts-recognition systems in the vibratory bowl feeder. The geometry of the various workpieces can be programmed, stored in, and retrieved from the control system if there is a change of product. This means that retooling can be kept to a minimum. Image processing as a sensor for guiding robots is also being applied more and more in flexible workpiece-feeding systems. Isolation of unsorted workpieces is conducted on a circular system of servocontrolled conveyor belts. The correctly oriented workpieces on one of these conveyor belts
Figure 28 Entertainment Robots in the Museum of Communication, Berlin. (Source:
IPA)
Figure 30 Types of Magazines. (Storage for sorted workpieces (sorting grade > 0) divides into magazines with workpiece movement, either driven (drum magazine, chain magazine, workpiece belt) or with independent workpiece movement (slot magazine, channel magazine, hanging or slide rail), and magazines without workpiece movement (single, stacking, and drawer pallets).)
in a certain order. Workpiece positioning is conducted by means of a form closure. Pallet magazines are usually stackable. The pallets can be coded for controlling tasks in the material flow. They are usually used in flex-link assembly systems. Driven and/or movable magazines are preferable when dealing with larger workpieces. These magazines are capable not only of storage but also of forwarding and/or transferring the workpieces.
9.3.
Fixturing
Fixtures are the bases that securely position workpieces while the assembly system performs operations such as pick-and-place and fastening. They are generally heavy and sturdy to provide the precision and stability necessary for automated assembly. They carry a variety of substructures to hold the particular workpiece. While each fixture is different, depending on the workpiece it handles, several commonsense rules for their design are suggested:
1. Design for assembly (DFA) usually results in simpler fixture design. Simple vertical stacking of parts, for example, generally allows minimal fixtures.
2. Do not overspecify fixtures; do not demand fixture tolerances not required by product tolerances. Fixture cost is proportional to machining accuracy.
3. Fixture weight is limited by conveyor and drive capabilities. The transfer conveyor must be able to carry the fixtures. And because fixture bases get rough treatment, include replaceable wear strips where they contact the conveyor. Replacing wear strips is cheaper than replacing bases. In addition, plastic bumpers cut wear, noise, and shock.
4. Use standard toolroom components when possible. Drill bushings, pins, and clamps are available from many sources. Use shoulder pins in through-holes instead of dowel pins positioned by bottoming in the hole. Through-holes are easier to drill and do not collect dirt.
5. Make sure any fragile or vulnerable parts are easily replaceable.
9.4.
Sensors and Vision Systems
Sensors provide information about the automated system environment. They are the link between the technical process and its control system and can be seen as the sense organs of the technical system. They are used for a number of tasks, such as material flow control, process monitoring and regulation, control of industrial robot movements, quality inspection and industrial metrology, and for security protection and as safeguards against collisions. The variety of sensors can be divided by technology and application as depicted in Figure 31.
Image processing on a computer allows statements to be made about the type, number, and position of objects in a workpiece scene. These objects are first taught to the image-processing system in a specific learning process (see Figure 32). The reflection of an illuminated workpiece passes through a lens onto the CCD chip in the camera, creating a 2D discrete position image. The camera electronics turn this image into a video signal. The image-processing computer uses a frame grabber to turn the video signal into digital images and stores these in memory. Image-processing procedures can then access these image data. Despite the large amount of hardware required, the main component of image-processing systems is the image-processing software. Image-processing systems are extremely sensitive to changes in lighting conditions. Lighting therefore plays a decisive part in the evaluation of the images. Different specific lighting procedures for gathering spatial information have been developed, such as:
- the transmitted-light procedure
- the silhouette-projection procedure
- the light-section procedure (triangulation)
- the structured-light procedure
- the coded-lighting procedure
- stereo image processing
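The chain from digitized image to object data can be illustrated with two classic steps: binarization against a threshold, then labeling of connected foreground regions, whose pixel counts and centroids give the number and positions of objects. The sketch below runs on a toy grayscale grid; a real system would work on frame-grabber images and use far more robust segmentation:

```python
def find_objects(image, threshold):
    """Binarize a grayscale image and label 4-connected foreground regions.

    Returns a list of (pixel_count, centroid_row, centroid_col) per object.
    """
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    objects = []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:                      # flood-fill one region
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                n = len(pixels)
                objects.append((n,
                                sum(p[0] for p in pixels) / n,
                                sum(p[1] for p in pixels) / n))
    return objects

# Toy 6x8 image: two bright workpieces on a dark background
img = [[0, 0, 0, 0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0, 0, 0, 0],
       [0, 9, 9, 0, 0, 8, 8, 8],
       [0, 0, 0, 0, 0, 8, 8, 8],
       [0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0]]
print(find_objects(img, threshold=5))
```

The sensitivity to lighting mentioned in the text shows up here directly: a poorly chosen threshold merges or loses regions, which is why the lighting procedures listed above matter so much in practice.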
out in every workstation. The results of the time analyses and the appropriation of the parts can also be checked and optimized in the 3D model of the workstations (Figure 33). The independent stations are finally united in a complete layout of the system, and the interlinking system and the buffers are defined. In a final simulation of the material flow, the buffer sizes can be optimized and bottlenecks eliminated. The planning results are available as data and also as a 3D model of the assembly line. This provides all participants with enough data upon which to base their decisions (see Figure 34).
Cost analyses and efficiency calculations can also be executed with software support. The use of software tools for the planning of assembly lines can reduce planning time and increase the quality of the results.
10.2.
Simulation of Material Flow
The objective of material flow simulation is to predict the effect of actions before they are executed, in order to be able to react if the results do not meet expectations. For making decisions, especially in complex problems, operations research procedures are also available. Such a procedure is supposed to find the optimal solution on its own, which simulation cannot accomplish. However, the immense number of model restrictions for complex problems, with regard to both model shaping and the great amount of calculation required, makes this approach practically unusable.
In contrast, material flow simulation requires only a reasonable amount of modeling and calculation effort. There are basically two application fields for material flow simulation: simulation for new plans and simulation aimed at optimizing existing systems. For new plans, the simulation aims to increase planning certainty by determining initially whether the system works at all, whether the cycles of the individual stations are correct, where possible bottlenecks are in the system, and how malfunctions in individual stations affect the whole system. During the subsequent planning optimization process, the effects of malfunctions mainly determine the buffer size, which can deviate considerably from statically calculated values. Furthermore, steps can be taken to review which measures will be needed to increase capacity by 10%, 20%, and more without the necessity of further shifts. This allows a change in the level of demand during the product's life cycle to be taken into consideration. During operation, after a product modification, new work tasks, or the assembly of a further variant on the existing system have been introduced, a manufacturing system will often no longer achieve the planned production levels, and a clear deviation between target and actual production figures will become evident over a longer period of time. Due to the complexity of the manufacturing process, it is often not clear which actions would have the greatest effect. Simulation can determine, for example, what level of success an increase in the number of workpiece carriers or the introduction of an additional work cell would have. This helps in the selection of the optimal measures needed to increase productivity and the required levels of investment. Simulation of an existing system also has the major advantage that existing data pertaining to system behavior, such as actual availability, the average length of malfunctions, and the dynamic behavior of the system, can be used. These data can be determined and transferred to the model relatively easily, which makes the model very realistic. For a simulation study to be executed, a simulation model must first be made that represents a simplified copy of the real situation. The following data are required: a scaled layout of the production, the cycle times of each of the processing steps, the logistic concept including the transportation facilities and the transportation speeds, the process descriptions, and data on the malfunction profiles, such as technical and organizational interruption times and their average length. Once these data have been entered into the model, several simulations can be executed to review and, if necessary, optimize the behavior of the model. The following example shows the possibilities of material flow simulation.
In the model of an assembly system designed to produce 140 pieces/hr, the number of workpiece carriers is increased. The result is a higher production level at some of the examined stations, but at some locations the higher number of workpiece carriers causes a blockage of stations whose successor stations have a higher cycle time. Thus, the increase in workpiece carriers has to be done in small steps, with a separate simulation after each step, to find the optimum. In this example, a 10% increase in the number of workpiece carriers and the installation of a second, parallel screwing station lead to a production increase from 120 to 150 pieces/hr. The model reaches its optimum when each of the parameters is specifically modified and a combination of different measures is achieved. Aside from the creation of the model itself, this is the real challenge.
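The bottleneck effect in the example can be reproduced with even a crude capacity model: the output of a serial line is governed by its slowest station, so doubling the screwing station merely shifts the bottleneck to the next-slowest station. The station names and cycle times below are invented for illustration; a real material flow simulation would also model workpiece carriers, buffers, and malfunctions:

```python
def line_output(stations):
    """Hourly output of a serial line limited by its slowest station.

    stations: dict mapping name -> (cycle_time_sec, parallel_count)
    Returns (bottleneck_name, pieces_per_hour).
    """
    effective = {n: t / k for n, (t, k) in stations.items()}
    bottleneck = max(effective, key=effective.get)
    return bottleneck, 3600.0 / effective[bottleneck]

# Hypothetical line: the screwing station is the bottleneck
layout = {"pressing": (22.0, 1), "screwing": (30.0, 1), "testing": (24.0, 1)}
print(line_output(layout))            # screwing limits output to 120/hr

# Add a second, parallel screwing station: testing becomes the bottleneck
layout["screwing"] = (30.0, 2)
print(line_output(layout))            # now 150/hr, limited by testing
```

With these assumed cycle times the static model reproduces the 120 to 150 pieces/hr jump of the example, but it says nothing about blockage from excess workpiece carriers; that dynamic effect is exactly what the stepwise simulation in the text is needed for.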
11.
ASSEMBLY IN INDUSTRY: APPLICATIONS AND CASE STUDIES
Assembly is fundamental in almost all industrial activities. Assembly applicatio
ns in several important industries are illustrated in the following case studies
.
11.1.
Automotive Assembly
More than half of the industrial robots throughout the world are used in the aut
omotive industry. These robots are utilized in a number of different manufacturi
ng sectors. The main areas of application are in car-body construction, mainly i
n spot welding and painting. Assembly lines are also being more and more robotiz
ed in the automotive industry. This is essential in the case of complex joining
processes requiring a high standard of quality, which can be guaranteed only by
robots. But in order to promote automation in automotive assembly effectively, two conditions must first be met:
Figure 36 Assembly of Driver Control Instruments. (Source: KUKA Roboter GmbH)
Figure 37 Assembly of Car Door Rubber Profiles. (Source: IPA)
The robot sensor is guided over the workpieces in the crate to determine the position of each of the workpieces. The recognized workpiece surface is used to determine the position and orientation of the workpiece in all three dimensions. The approach track with which the robot can grab and pick up the cast part is computed from the workpiece position. The whole approach track is calculated in advance and reviewed for possible collisions. If the danger of a collision exists, alternative tracks are calculated and, if necessary, the next cast part is grabbed instead. In this way the system maintains a high level of operational security and sound protection from damage.
11.4. Electronic Assembly
11.4.1. Assembly of an Overload Protector
This pallet-based assembly system produces a safety overload protection device and was designed with multiple zones (see Figure 40). The transport system is a flex-link conveyor with a dual-tooled pallet for cycle time considerations. The cycle time for the system is 1.2 sec. Tray handling and gantry robots were utilized for this application due to the high value of the component. The line features unique coil-winding and wire-stripping technologies and sophisticated DC welding for over 20 different final customer parts for the electronic assembly market.
11.4.2.
Assembly of Measuring Instruments
The production of measuring instruments is often characterized by small batches, short delivery times, and short development times. Automated assembly of the measuring instrument cases enables products to be manufactured at lower cost. The cases are made of several frame parts. Each frame part is separated by a metal cord that provides screening against high-frequency radiation. The different kinds of cases have various dimensions (six heights, three widths, and three depths). Altogether there are about 1,000 product variants with customer-specific fastening positions for the measuring instrument inserts. Therefore, the assembly line is divided into order-independent preassembly and order-specific final assembly (see Figure 41). In the preassembly cell are automatic stations for pressing in different fastening nuts and screwing in several threaded bolts. An industrial robot handles the frame parts. After removing the frames from the supply pallets, the robot brings the frame into a mechanical centering device for fine
Figure 40 Assembly Line for Safety Overload Protection Device. (Source: ATS Inc.)
Figure 42 Final-Assembly Cell with Flexible Clamping System.
required positions is essential. The clamping device is made up of standard components that can clamp more than 25 cases with different dimensions. The robot arm has an automatic tool-changing system for changing tools quickly; each tool can be picked up within a few seconds. Experience with the assembly cells outlined above has shown that even assembly tasks with extensive assembly steps and production volumes of less than 10,000 units per year can be automated in an efficient way.
11.4.3.
Assembly of Luminaire Wiring
For the economical assembly of luminaire wiring, simultaneous development of new products and process techniques was necessary. A new method developed for the direct assembly of single leads has proved to be a great improvement over the traditional preassembly of wires and wiring sets. Single leads are no longer preassembled and placed in the final assembly. Instead, an endless lead is taken directly from the supply with the help of a newly developed, fully automated system and built into the luminaire case. This greatly simplifies the logistics and material flow inside the company and reduces costs enormously. The IDC (insulation displacement connection) technique has proved to be the optimal connection system for the wiring of luminaires. Time-consuming processes like wire stripping and plugging can now be replaced by a process combining pressing and cutting in the IDC connector. The introduction of this new connection technique for luminaire wiring requires modification of all component designs used in the luminaires. Fully automatic wiring of luminaires is made possible by the integration of the previously presented components into the whole system. Luminaire cases are supplied by a feeding system. The luminaire case type is selected by the vision system, which identifies the bar code on the luminaire case. Next, luminaire components are assembled in the luminaire case, which is then directed to the wiring station (Figure 43). Before the robot starts to lay and contact the leads, a camera integrated in the wiring tool is positioned above the required position of the component designated for wiring. Each component is identified, and its precise position is checked by means of vision marks in the shape of three cylindrical cavities in the die casting of the IDC connector. Any deviation of the actual position from the target position is calculated and transmitted as a compensation value to the robot's evaluation program. Quality assurance for the processes, especially for pressing and contacting of the conductor in the IDC connector, is carried out directly during processing. When the wire is inserted into the connector, a significant and specific force gradient is exhibited within narrow tolerances. Immediately after pressing, this measured gradient can easily be compared and evaluated with the help of a reference
Figure 44 Construction of Connector and Cable. (ST connector with 125-µm ferrule hole; 125-µm glass fiber with primary and secondary coating, tight buffer, sheathing, glue, and strain relief)
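The in-process quality check described for the IDC pressing amounts to comparing a measured force curve, sample by sample, against a reference curve within a tolerance band. The following is a schematic sketch; the reference values and the tolerance are invented for illustration:

```python
def idc_press_ok(measured, reference, tolerance):
    """Accept an IDC pressing if every sample of the measured force
    gradient stays within +/- tolerance of the reference curve."""
    if len(measured) != len(reference):
        return False          # incomplete measurement: reject
    return all(abs(m - r) <= tolerance for m, r in zip(measured, reference))

# Hypothetical reference force curve (N) over the insertion stroke
reference = [0.0, 4.0, 9.0, 15.0, 12.0, 6.0]
good_part = [0.1, 4.2, 8.8, 14.7, 12.3, 5.9]
bad_part  = [0.1, 4.2, 8.8, 11.0, 12.3, 5.9]   # force dip at the cutting peak

print(idc_press_ok(good_part, reference, tolerance=0.5))   # True
print(idc_press_ok(bad_part,  reference, tolerance=0.5))   # False
```

The point of the per-sample comparison is that a fault such as an uncut or missing wire shows up as a local deviation in the gradient even when the final force looks normal, so the check can reject the part immediately after pressing.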
The assembly of precision mechanics and technical microsystem components often requires joining work to be conducted in the micrometer range. Intelligent robot gripping systems have been developed in order to achieve the required joining accuracy. Motion-assisted systems are particularly suited for flexible offsetting of tolerances in short cycles. Figure 46 shows an example of an oscillation-assisted joining robot tool for precision assembly and microassembly of small planetary gears. Assisted by the adjustable oscillation in the gripper and the regulated tolerance offset, each of the gearwheels and the gearwheel housing can be assembled very reliably by means of the integrated fine positioning. The special features of the joining tool are the achievable joining accuracy of up to 2 µm, the process-integrated offsetting of orientation and position deviations (up to 1 mm), and the high level of adjustability as well as the flexibility essential for performing the different joining tasks. The robot tool is used especially for chamferless bolt-in-hole joining tasks with accuracy requirements in the micrometer range and for the assembly of complex toothed wheel pairs.
11.6.
Food Industry
The number of convenience products sold annually has increased markedly in recent years. Particularly in everyday life and in cafeterias, the trend is toward meals that can be prepared more quickly and conveniently. According to the experts at Food Consulting, the share of convenience products in
Figure 45 Prototype Assembly Cell for Manufacturing Fiber-optic Connectors.
Figure 48 Automated Arrayer Based on a Gantry Robot. (Source: IMNT)
Figure 47 shows a robot in use in the food-packaging sector. At the end of a production line, the unsorted sausages that arrive on the conveyor belt are recognized by an image-processing system and automatically sorted into the final packaging unit with the help of the robot.
11.7.
Pharmaceutical and Biotechnological Industry
In the pharmaceutical industry, large sums of money are spent on preclinical and clinical research. The development of a drug costs millions of dollars, and hundreds of tests have to be conducted. One of the innovative tools in the field of drug discovery is microchip technology. Microchips require less reagent volume, make analytical processes run faster because of their smaller size, and allow more sensitive detection methods to be implemented. In this way, they reduce costs, save time, and improve quality. Microchips come in two main categories: chips based on microfluidics, with components such as pumps, mixers, and microinjectors; and microarray chips, which carry numerous sample locations, such as DNA samples, on their surface. Most of the interest now seems to be focused on the second category. Figure 48 shows an arrayer for low-volume spots. With a capillary-based tip printing method, volumes in the lower picoliter range can be produced. These very small drop volumes make it possible to use more sensitive methods of detection and to produce high-density arrays. The automated arrayer is based on a gantry robot. Its three axes are driven by stepper drives that have a positional accuracy of 1 µm and a speed of up to 300 mm/sec. The work area is 300 × 300 mm and the total dimension is 500 × 500 mm. In order to have a controlled environment, the system is operated in a clean room. So far, commercially available arraying systems are useful only for R&D applications and research labs and have not achieved wide commercial adoption.
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
401
1.
CURRENT DEVELOPMENTS
Today's competitive environment is characterized by intensified competition resulting from market saturation and increasing demands for customer-oriented production. Technological innovations also have an influence on the competitive environment. These facts have dramatically altered the character of manufacturing. Meeting customers' demands requires a high degree of flexibility, low-cost/low-volume manufacturing skills, and short delivery times. Production, and thereby manufacturing performance, has thus gained increasing significance and is conceived as a strategic weapon for achieving and maintaining competitiveness (Verter and Dincer 1992). Especially in high-tech markets, where product technology is rapidly evolving, manufacturing process innovation is becoming an increasingly critical capability for product innovation. To meet the requirements of today's markets, new paths must be trodden both in organizational methods and in manufacturing and automation technology (Feldmann and Rottbauer 1999).
1.1.
General Developments in Assembly
The great significance of assembly for a company's success is due to its function- and quality-determining influence on the product at the end of the direct production chain (Figure 1). Rationalization of assembly is still technologically impeded by high product variety and the various influences resulting from the manufacturing tolerances of the parts to be joined (Tönshoff et al. 1992). As a result, considerable disturbance rates are leading to reduced availability of assembly systems and delays in assembly operations. This complicates efficient automation and leads to consideration of displacing assembly plants into lower-cost regions. Assembly is also influenced by innovative developments in the manufacturing of parts, such as surface technology or connecting technology, which can have an important influence on assembly structures. The complex reflector assembly of a car headlight, for example, can be replaced by surface coating technology. Due to the rapidly changing market and production conditions, the role and the design of the individual functions within the entire value-adding chain are also changing. Other factors helping to bring this about include the influence of microelectronics on product design and manufacturing structure and global communication possibilities. Meeting customer demands for after-sales and service functions, for example, is becoming increasingly important to a company's success. More and more customers would like to purchase a service contract along with the product.
Figure 1 Significance of Assembly in the Value-Adding Chain. (From Feldmann et al. 1996)
ASSEMBLY PROCESS
The growing complexity of the manufacturing process in industry has its origin in the globalization of markets, customer demands for systems instead of single products, and the introduction of new materials and technologies. Designing a product for ease of assembly using design for manufacture and assembly (DFMA) methodology leads to a reduced number of variants and parts, higher quality, shorter time-to-market, lower inventory, and fewer suppliers, and makes a significant contribution to the reduction of complexity in assembly (Boothroyd 1994). The influence of the different branches and product structures is more evident in the range of assembly technology than in prefabrication. This also has a lasting impact on site selection because of the required workforce potential. Therefore, the global orientation of assembly plants should be clarified on the basis of four major product areas (Figure 2). Car assembly can be characterized as a serial assembly with trained workers. In contrast, the machine tool industry is characterized by a high degree of specialization and small lot sizes, requiring highly skilled workers for assembly tasks. This makes the global distribution of assembly plants difficult, especially because the close interaction of development, manufacturing, and start-up still plays an important role. In contrast, the assembly of electronic components, inspired by the technological transition to surface mount technology (SMT), has been rapidly automated in recent years. In view of the small remaining share of personnel work, labor costs no longer have a major influence on site selection in this product area. In contrast to car assembly, the electronics industry is characterized by a more global distribution of production sites and comparatively smaller production units. This simplifies the regional or global distribution of assembly plants. Product size as well as logistical costs for the global distribution of products from central assembly sites are generally lower than in the automotive industry. In the white goods industry, a relatively small ratio of product value to product size is decisive. Serving global markets at minimized logistical costs requires a corresponding global positioning of distributed assembly plants. In general, for all industries, four fundamental solutions in assembly design can be distinguished (Figure 3). Manual assembly in small batch sizes is at one end and automated serial assembly at the other. Thus, the introduction of flexible assembly systems is reinforced. These flexible automated assembly systems offer two alternatives: the integration of NC axes increases the flexibility of conventional machines, whereas the introduction of robot solutions is aimed at opening up further assembly tasks for efficient automation. There have been many technological responses to the global demand for large product variety coupled with short delivery times. In this context, the concept of human-integrated production systems is gaining ground. The intention here is to allow the human operator to be a vital participant in future computer-integrated manufacturing systems. This also has an impact on the design of assembly systems. Changes in product structure and the influence of electronics are drivers of assembly rationalization. A common approach, from car assembly up to assembly in the electronics industry, is the
Figure 2 Major Product Areas with Different Conditions for Global Assembly.
Figure 3 The Basic Technological Alternatives in Assembly. (From Feldmann et al. 1996)
stronger concentration on higher functional density in subsystems (Figure 4). In car assembly, this means preassembling complex units like doors or cockpits; in electronics, this means circuit design with few but highly integrated circuits. The new paradigm in manufacturing is a shift from Taylorism and standardization to small-lot, flexible production with emphasis on short lead times and responsiveness to the market. Organizational decentralization into autonomous business units, called fractals (Warnecke 1993), has also encouraged the distribution of assembly plants. The reduction of production complexity as a prerequisite for a faster, more flexible, and self-organizing adaptation of business units to changing market conditions is based on a redistribution of decision-making responsibilities to the local or distributed unit. Managing complexity in manufacturing by outsourcing is rather common in the prefabrication field, whereas the reduction of production depth and the resulting cost advantages contribute to keeping assembly sites in the countries of origin. In turn, the formation of decentralized business units results in more favorable conditions for the relocation of specific assembly tasks to other regions.
1.2.
Impact of Electronics on Assembly
Within the framework of assembly rationalization, electronics has almost a double effect (Figure 5). In the first step, efficient assembly solutions can be built up by electronically controlled systems with programmable controllers and sensors. In the second step, the assembly task can be completely replaced by an electronically provided function. Examples are the replacement of electromechanical fluorescent lamp starters by an electronic solution and, on a long-term basis, the replacement of technically complex letter-sorting installations by purely electronic communication via global computer networks. Not only does the replacement of electromechanical solutions with electronic functional carriers reduce assembly expenditure, but electronics production can also be automated more efficiently. In many cases, the functionality and thus the customer benefit can be increased by the transition to entirely electronic solutions. The further development of semiconductor technology plays an important role in electronics production. In addition to the direct consequences for the assembly of electronic components, further miniaturization, increasing performance, and the expected decline in prices have serious effects on the assembly of electronic devices and the further development of classical engineering solutions. The degree of automation in mechanical assembly can be increased only up to a certain level. In contrast, in electronics assembly, there is greater potential for increasing the degree of automation. This also has consequences for product design. Figure 6 shows a comparison of the degree of automation in mechanical and electronics assembly. Today's paradigms for manufacturing require a holistic view of the value-adding chain. The disadvantages of breaking up the value-adding chain and distributing the single functions globally can be compensated for by models and tools supporting integrated process optimization. Examples of this are multimedia applications based on new developments in information technology and the
Figure 4 Trends toward New Assembly Structures. (From Feldmann et al. 1996)
concept of virtual manufacturing, which is based on simulation technology. The diffusion of systems such as electronic data interchange (EDI) and integrated services digital network (ISDN) allows more efficient communication and information exchange (Figure 7). Distributed and decentralized manufacturing involves the problem of locally optimized and isolated applications as well as incompatibilities of processes and systems. To ensure synergy potentials and stabilize the productivity of the distributed assembly plants, intensive communication and data
Figure 5 Impact of Electronics on Assembly Tasks. (From Feldmann et al. 1996)
Figure 6 Comparison of the Degree of Automation in Mechanical and Electronics Assembly.
exchange within the network of business units is vital (Feldmann and Rottbauer 1999). New information technologies provide the correct information and incentives required for the coordination of efficient global production networks. For instance, the diffusion of systems such as EDI and ISDN allows more efficient communication and information exchange among the production plants linked in the same network. Furthermore, high-efficiency systems such as satellite information systems have helped perform operations more efficiently with regard to the critical success factors. For instance, tracking and expediting international shipments by means of preclearance operations at customs leads to a reduction in delivery time. The coordination of a network of transplants dispersed throughout the world provides operating flexibility that adds value to the firm. From that
Figure 7 Global Engineering Network for Assembly Plants.
point of view, a decisive improvement in the conditions for global production networks has come about through the influence of microelectronics and the new possibilities in telecommunications. As data definitions become more sophisticated under emerging standards such as STEP, corporate server networks can distribute a growing wealth of product and process information among different parts of the organization and its associates. Processing programs developed worldwide can be used in all production sites by means of direct computer guidance (Figure 7). The diagnosis of assembly systems is coordinated from a central control center. In this way, it becomes possible to pass on the management of a project in the global production network around the clock.
2.
ASSEMBLY TECHNOLOGIES AND SYSTEMS
2.1.
Basic Structures of Assembly Systems
Assembly is the sum of all processes needed to join together geometrically determined bodies. A formless substance (e.g., lubricants, adhesives) can be applied in addition (VDI 1982). Besides joining in the manufacturing process, handling of components is the primary function of assembly. Assembly also comprises secondary functions such as adjusting and inspecting as well as various special functions (see Figure 8). Joining is defined by DIN 8593 as a part of the manufacturing processes. Here, the production of a compound consisting of several parts can be achieved by merging, pressing, pressing in, metal forming, primary shaping, filling, or combining substances. Handling is defined in VDI Guideline 2860/1 as the creation, defined varying, or temporary maintaining of a prescribed 3D arrangement of geometrically defined solids in a reference coordinate system. For this, procedures such as ordering, magazining, carrying on, positioning, and clamping are important. It is simple for a human being to bring parts into the correct position or move them from one place to another. However, considerably larger expenditure is necessary to automate this task. An extensive sensory mechanism must often be used. Manufacturing of components is subject to a great number of influences. As a result, deviations cannot be avoided during or after the assembly of products. These influences must be compensated for, and thus adjusting is a process that guarantees the required operating ability of products (Spur and Stöferle 1986). Testing and measuring are contained in the inspection functions. Testing operations are necessary in all individual steps of assembly. Testing means checking the fulfillment of a given limiting condition; the result of a test operation is binary (true or false, good or bad). In measuring, on the other hand, specifications are determined and compared with given reference quantities. Secondary functions are activities, such as marking or cleaning operations, that can be assigned to none of the above functions but are nevertheless necessary for the assembly process. Assembly systems are complex technical structures consisting of a great number of individual units and integrating different technologies.
Figure 8 Functions of Assembly: handling (magazining, carrying on, ordering, allocation, positioning, clamping, etc.), joining (merging, pressing and pressing-in, metal forming, primary shaping, filling, combining substances), and adjusting (setting up, adaptation, etc.). (Source: Sony)
There are different possibilities for the spatial lineup of assembly systems. One possibility is a line structure, which is characterized by:

- Clear flow of materials
- Simple accessibility of the subsystems (e.g., for maintenance and retrofitting)
- Simple lineup of main and secondary lines
- Use mainly for mass production (the same work routine for a long time)

Alternatively, an assembly system can be arranged in a rectangular structure, which is characterized by:

- Very compact design
- High flexibility; the combination of opposing subsystems is easy to realize
- Poor accessibility to the subsystems during maintenance and retrofitting
- Use mainly for small and middle lot sizes
In addition to different spatial lineups, other basic modifications of the cell structure are possible for achieving the required efficiency (see Figure 10). The number of work cycles that can be carried out on an assembly system depends on the size of the assembly system (number of cells) and the required productivity. The availability drops as the number of stations increases. Therefore, the distribution of work cycles onto several machines is necessary. In this case, productivity increases with decreasing flexibility. The entire assembly system can be subdivided into different cells, which are connected by the flow of materials and information. The basic structure of a cell consists of a tabletop, a basic energy supply, the mounted handling devices, the internal conveyor system, and a safety device (protective covering, doors with electrical interlock). The individual cells should be built up modularly and equipped with standardized interfaces (energy, pneumatics, and information technology). A strict module width for the spatial measurements is also useful. As a result, fast realization of the individual assembly cells, a high degree of reuse, and flexible use are guaranteed. The assembly cell itself consists of different components and units (Figure 9). The main components of an assembly cell are its devices (integrated robots, modular systems built of numerically controlled axes, pick-and-place devices, etc.), its mechanical construction, the design of the grippers, the actuating system, the control system, the method of programming, and the sensor technology used. Furthermore, the part supply is very important (ordered or disordered part supply; see also Section 2.3). Last but not least, the joining process (see also Section 2.2) must be adapted and optimized to meet all the necessary requirements. The different alternatives for the design of assembly systems are represented in Figure 10. The flexible cell is able to carry out all necessary assembly functions for a certain variant spectrum (three variants,
Figure 9 Structure and Components of Assembly Cells (system, assembly cell, components, process, devices, part supply). (Source: afag)
Figure 10 Alternative Cell Structures for Assembly Systems (flexibility vs. productivity: flexible cell, duplex cell, serial cells, parallel cells; variants A-C, assembly steps 1-4).
A, B, and C, are represented here). For this purpose, it has all the necessary main and secondary functions for the assembly process. High variant flexibility and sequential processing of the individual assembly tasks, as well as the resulting high nonproductive times, lead to the use of flexible cells mostly for small and middle lot sizes. If the duplex cell is used, the assembly tasks will be distributed over two handling units. These always have full variant flexibility. With regard to function, the cells need not be completely flexible; they must be able to carry out only a part of the assembly tasks. The functional range can be increased by multiple grippers (in the case of an unchangeable assignment of the functions) or by parallel assembly with two independent handling devices. Even shorter cycle times can thus be achieved than with the flexible cell. Serial cells are flexible only with respect to the assembly of variants, and the functional range of an individual cell is highly limited. The spatial lineup of the cells means that the cell with the longest cycle time determines the total cycle time of the assembly process. Considering the spatial extension of serial structures, it is obvious that the integration level of the system is smaller than that of flexible cells or duplex cells. Parallel cells will be used if only one assembly cell is responsible for the complete assembly of a variant. Each individual cell has a high degree of functional flexibility, which unfortunately can be used for only one variant. Therefore, the potential of the resources cannot be exploited because identical assembly conditions must be provided for the assembly of different variants. If the individual variants share only a small number of similar parts, which on top of that often have different gripping conditions, splitting up the variants will be advantageous; as a result, less variant flexibility will be necessary. The serial structure reacts extremely sensitively to fluctuations in the number of variants. At worst, restructuring or reconstruction of all cells will be required. In contrast, the rate of utilization of the system will remain almost stable if fully variant-flexible cells are used. Because the handling devices of parallel cells have full functional flexibility, it is not necessary to adapt the number of flexible systems. With changes in lot size, however, parallel cells designed for specific variants will have too much or too little capacity, which cannot be remedied without adequate measures.
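The trade-offs discussed in this section (availability falling as stations are added, the slowest serial cell pacing the whole line, parallel cells adding capacity per variant) can be illustrated with a short calculation. This is a minimal sketch, not from the chapter; the availability figure and cycle times are assumed example values.

```python
# Illustrative model of the cell-structure trade-offs described in Section 2.1.
# All numeric values are assumed examples.

def serial_availability(a: float, n: int) -> float:
    """A serial line runs only while all n stations are up,
    so system availability drops as stations are added."""
    return a ** n

def serial_cycle_time(cell_times) -> float:
    """In a serial structure, the cell with the longest cycle
    time determines the total cycle time."""
    return max(cell_times)

def parallel_rate(cell_times) -> float:
    """Parallel cells each assemble complete products,
    so their production rates add up."""
    return sum(1.0 / t for t in cell_times)

print(round(serial_availability(0.98, 1), 3))       # single flexible cell: 0.98
print(round(serial_availability(0.98, 6), 3))       # six serial cells: 0.886
print(serial_cycle_time([8.0, 11.0, 9.0]))          # line paced by the 11 s cell
print(round(parallel_rate([30.0, 30.0, 30.0]), 3))  # 0.1 products per second
```

The numbers make the text's point concrete: distributing work over more serial stations shortens the cycle but multiplies the failure exposure, while parallel cells trade resource utilization for robustness against variant fluctuations.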
2.2.
Joining Technologies
2.2.1.
Classification and Comparison of Joining Technologies
Industrially manufactured products predominantly consist of several parts that are usually manufactured at different times in different places. Assembly functions thus result from the demand for joining subsystems together into a product of higher complexity with given functions. According to German standard DIN 8580, joining is defined as bringing together two or more workpieces of geometrically defined form, or such workpieces with amorphous material. The selection of a suitable joining technique is a complex task for the technical designer. In DIN 8593, the manufacturing methods for joining (see Figure 11) are standardized. In addition to the demands on the properties of the product, economic criteria have to be considered (e.g., mechanical strength, appearance, repair possibilities). The ability to automate processes and the design of products with the manufacturing process in view are important points to focus on. Therefore, examples of automated joining techniques are given here. The joining processes can be divided into various classes on the basis of several criteria:

- Constructional criteria: strength, form, and material closure connections, or combinations of these possibilities
- Disassembly criteria: removable and nonremovable connections
- Use of an auxiliary joining part (e.g., screws)
- Influence of temperature on the parts

Some important joining techniques and applications are presented below. The pros and cons of the joining techniques will be discussed with regard to the features of function, application, and assembly.
2.2.2.
Bolting
Screwing is one of the most frequently used joining processes in assembly. Screwdriving belongs to the category of removable connections (ICS 1993). Joints are removable if the assembly and separation process can be repeated several times without impairing the performance of the connection or modifying the components. The specific advantages of detachable connections are easy maintenance and repair, recyclability, and the ability to combine a broad spectrum of materials. The most important component of the bolt connection is the screw, as the carrier of substantial connecting functions and the link between the components. The screw's function in the product determines its geometrical shape and material. Important characteristics of a screw are the form of thread, head, and shaft end, tensile strength, surface coating, tolerances, and quality of the screw lots. Screwing can be manual, partly automated, or fully automated. The bolting tools used can be classified according to structural shape, control principle, or drive. Substantial differences occur in the drive used, which can be electric, pneumatic, or hydraulic (only for applying huge torques). Figure 12 shows the basic structure of a pneumatic screwdriving tool. A basic approach is to increase the economy of automated screwing systems and reduce downtimes. Process stabilization can be achieved by using fast control loops in process control and diagnostic support of the operators for error detection and recovery (see Figure 13) (Steber 1997). Further, position errors of the parts can be compensated for automatically by adaptation of the coordinate
Figure 11 Survey of Joining Technologies (DIN 8593, manufacturing process joining: composing, filling, forcing in, primary shaping, forming, welding, soldering, applying glue; examples include snapping, riveting, screwdriving, clamping, bracing, pressing, nailing, inserting a wedge, and spanning; connections are classified as detachable, difficult to detach, or detachable only by destroying the joint).
Figure 12 Structure of a Pneumatic Screwdriving System (air inlet, valve ball, motor, valve stick, driving shaft, adapter, clutch ring, spindle insert).
system in flexible handling systems, such as robots with image-processing systems. The automated screwing technique is becoming more and more important in mass and serial production. Apart from reduced unit costs, longer production times, and higher output, it offers the advantage of continuously high quality that can be documented.
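The torque/angle window monitoring behind this kind of process control (cf. Figure 13) can be sketched as a simple end-of-run check. This is an illustrative sketch, not the chapter's implementation; the function name, window limits, and fault labels are assumed examples.

```python
# Torque/angle window check for a screw run, after the scheme of Figure 13:
# M+/M- bound the acceptable final torque, W+/W- the acceptable final angle.
# Limits and fault labels are assumed example values.

def check_screw_joint(final_torque, final_angle,
                      torque_window=(8.0, 10.0),     # M-, M+ in Nm
                      angle_window=(700.0, 800.0)):  # W-, W+ in degrees
    """Return 'OK' or a fault label that a diagnosis module could act on."""
    t_lo, t_hi = torque_window
    a_lo, a_hi = angle_window
    if final_torque < t_lo:
        return "NOK: torque below M- (e.g., stripped thread)"
    if final_torque > t_hi:
        return "NOK: torque above M+ (e.g., screw seized)"
    if not a_lo <= final_angle <= a_hi:
        return "NOK: angle outside W-/W+ (e.g., wrong or missing part)"
    return "OK"

print(check_screw_joint(9.2, 750.0))  # a run ending inside both windows: OK
print(check_screw_joint(9.2, 950.0))  # too many degrees turned: angle fault
```

In a real system, such a classification would feed the diagnosis and fault-recovery stages that the text describes, so the operator sees a cause hypothesis rather than a raw measurement.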
2.2.3.
Riveting / Clinching
Riveting is one of the classical joining processes. It is particularly widely used in the aircraft industry. What all rivet methods have in common is that an auxiliary joining part, the rivet, must be provided. Blind rivets are often used in device assembly because accessibility from one side is sufficient.
Figure 13 Optimized Process Control of a Screwdriving Machine (data acquisition, diagnosis, and fault recovery around the process control loop; the torque in Nm is monitored over the rotation angle, with M+/M- the upper/lower acceptable torque and W+/W- the upper/lower acceptable angle).
Figure 14 Characteristics of Punching Rivets (optical: position of rivet head, absence of cracks, filling of the rivet shaft, remaining thickness of material, form closure; mechanical: static and dynamic strength).
Recently, the punching rivet method has become increasingly important in the automobile industry. It is a joining process with less heat influence on the material than spot welding (Lappe 1997). The advantage of this technique is that no additional process step for punching a hole into the parts is necessary: the punching rivet itself punches the upper sheet metal, cuts through, and spreads in the lowest metal sheet. Figure 14 shows the profile of a punching rivet with typical characteristics for quality control. The technique is characterized by the fact that different materials, such as steel and aluminum, can be combined. The use of new lightweight designs in automotive manufacturing means that materials such as aluminum, magnesium, plastics, and composites are becoming increasingly important. Audi, with the introduction of the aluminum space frame car body concept, is a pioneer in the application of punching rivets. In clinching, the connection is realized through a specially designed press-driven set of dies, which deforms the parts at the joint to provide a friction-locked connection. No auxiliary joining part is necessary, which helps to save costs in the supply and refill of joining parts.
2.2.4.
Sticking
Figure 15 shows a comparison of various joining technologies that are detachable only by destruction. The progress in plastics engineering has had positive effects on the engineering of new adhesives. Sticking is often
Figure 15 Comparison of Joining Technologies That Are Detachable Only by Destruction (welding: both parts liquid, maximum temperature in the seam 1400-1600°C, e.g., installation; soldering: parts solid and solder liquid, hard soldering above 450°C, soft soldering below 450°C, e.g., electronics; sticking: parts solid with low- to high-viscosity glue at 25-150°C, hardening by pressure, heating, or activation by humidity).
used in combination with other joining processes, such as spot welding and punching rivets. The connection is made by adhesion and cohesion (Spur and Stöferle 1986). Adhesives can be divided into two groups. Physically hardening adhesives achieve adherence by two different mechanisms: the first is the cooling of the melted adhesive, and the second is the evaporation of solvent or water (as the carrier) out of the adhesive. Because the adhesive does not cross-link, it is less resistant to influences such as heating, endurance stress, or the interaction of solvents. Chemically hardening adhesives solidify by a chemical reaction into a partially cross-linked macromolecular substance characterized by high firmness and chemical stability. Adhesives can also be differentiated into aerobic and anaerobic adhesives. The quality and firmness of an adhesive joint depend on the conditions at the part surface. The wettability and surface roughness of the parts, as well as contamination (e.g., by oil), play a substantial role. To ensure the quality of sticking, a special surface treatment of the joining parts is therefore often necessary. This represents an additional process step, which can be automated too. Typically, car windows are automatically assembled into the car body in this way.
2.2.5.
Welding
Welding methods can be subdivided into melt welding and pressure welding methods. In melt welding, such as arc welding, metal gas-shielded welding (e.g., MIG, MAG), or gas fusion welding, the connection is made by locally limited heating to just above the liquidus temperature of the materials. The parts to be connected and the additional welding materials usually used flow together and solidify. In pressure welding, such as spot welding, the connection is realized by locally limited heating followed by pressing or hammering. Welding methods differ according to their capacity for automation. In the building of car bodies, fully automated production lines with robots are already in common use. Fusion welding, such as gas-shielded arc welding, makes higher demands on automation. Robots are suitable here, but they should also be equipped with special sensors for seam tracking; both tactile and contactless sensors (e.g., optical, capacitive, inductive) are used. With the increasing power density of diode lasers, laser beam welding with robots is attracting increasing attention.
2.3.
Peripheral Functions
2.3.1.
Handling Devices
For complex operations, industrial robots are normally used. If the handling task consists of only a simple pick-and-place operation, specially built devices with one or more linear axes are probably the better choice. These handling devices are classified by their degrees of freedom (DOF), which indicate the number of possible translational and rotational movements of a part. Six DOF (three translational and three rotational) are required to arrange an object in a defined way in 3D space. Opening and closing of grippers is not counted as a degree of freedom, as this movement is used only for clamping and does not, strictly speaking, move the part. Mechanically controlled inserting devices offer one to two DOF. They are suitable for simple handling tasks, such as inserting. Due to their strictly mechanical setup, they are compact, robust, and very economical. Their kinematics allows them to reach different points with a predefined, mechanically determined motion sequence. This motion can be adapted to different handling tasks by the use of different radial cams (control curves). Due to the sensorless mechanical setup, an inherent disadvantage of the system is that the precision of the movement is not very high; in an open control loop, only the end positions are detected by sensors. If the handling task demands higher accuracy or flexibility, numerically controlled (NC) axes or industrial robots are recommended. Using two or three linear axes allows more complex and precise handling tasks to be performed than with mechanical handling devices. Industrial robots are built in different setups. The most common are the SCARA (selective compliance assembly robot arm) robot and the six-DOF robot. The SCARA robot usually has four DOF, three translational and one rotational, whereby the z-stroke represents the actual working direction and the other axes are used for positioning. SCARA robots are often used for assembling small parts automatically with very short cycle times. Six-DOF robots are used for more complex tasks because they are more flexible in their movements, so they can grip parts in any orientation.
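The six pose parameters discussed above can be made concrete with a small sketch: three translations (x, y, z) and three rotations combine into a single homogeneous transform that fixes an object's position and orientation in 3D space. This is an illustrative example under assumed conventions (Z-Y-X rotation order), not something taken from the chapter.

```python
import math

def pose_matrix(x, y, z, yaw, pitch, roll):
    """4x4 homogeneous transform from six DOF (angles in radians).
    Rotation order assumed here: Rz(yaw) * Ry(pitch) * Rx(roll)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

# A SCARA-like device uses only four of the six DOF:
# x, y, z, and a single rotation about the vertical axis.
scara_pose = pose_matrix(0.2, 0.1, -0.05, math.radians(90), 0.0, 0.0)
```

Setting pitch and roll to zero, as in the last line, shows why four DOF suffice for flat assembly tasks: the part stays parallel to the worktable, and only its planar position, height, and orientation about z are controlled.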
2.3.2.
Grippers
Grippers have to perform different tasks. Not only do they have to hold the part to be handled (static force), but they also have to resist the dynamic forces resulting from the movement of the handling device. They must also be designed so that the parts cannot move within them. Additional characteristics such as low weight, fast reaction, and fail-safe behavior are required. There are different techniques for gripping parts. Grippers are typically classified into four groups by operating principle (Figure 17): mechanical, pneumatic, (electro-)magnetic, and other. With a market share of 66%,
Figure 16 Degrees of Freedom of Alternative Handling Devices (range of function and flexibility over degrees of freedom 1 to 6: (a) mechanically driven devices, (b) pneumatically driven devices, (c) NC axes, (d) industrial robot).
mechanical grippers are the most commonly used. Mechanical grippers can be subdivided by their kind of closing motion and their number of fingers. Three-finger centric grippers are typically used for gripping round and spherical parts. Two-finger parallel grippers perform the gripping movement with a parallel motion of their fingers, guaranteeing secure gripping because only forces parallel to the gripping motion occur. Because these gripper models can harm the component surface, vacuum grippers are used for handling damageable parts. Two-dimensional parts such as sheet metal parts are also handled by vacuum grippers. Using the principle of the Venturi nozzle, an air jet builds up a vacuum in the suction cup that holds the parts. When the air jet is turned off, the parts are automatically released. Heavy parts such as shafts are lifted not with mechanical grippers but with electromagnetic grippers. These grippers provide secure handling, however, rather than exact positioning. To handle small and light parts, grippers using alternative physical principles, such as electrostatic and adherent grippers, are used because they do not exert any pressure that could damage the part. Fields of application include microassembly and electronics production.
Figure 17 Gripping Principles: mechanical grippers; pneumatic grippers (vacuum gripper, air jet gripper); magnetic grippers (electromagnet, permanent magnet); and alternative methods.

Figure 18 Establish Order-Keep Order. (Source: Bowl feeder, Grimm Zufuhrtechnik GmbH & Co.) To establish order, a bowl feeder handles singulation, orientation, and presentation of parts within a single device and is suited to small, cheap parts with large batch sizes. To keep order, deep-draw worktrays with workpiece-specific cavities are used for larger and more expensive parts in small batch sizes.
The safety of the gripped parts and of the workers is an important consideration. Toggle-lever grippers, for example, ensure that the part cannot be lost even if a power failure occurs. In contrast, parts handled by vacuum grippers fall off the gripper if the air supply fails.
2.3.3.
Feeding Principles
To be gripped automatically, parts have to be presented to the handling device in a defined position and orientation, irrespective of the above-mentioned gripping principles. For small parts assembled in high volumes, the vibratory bowl feeder is commonly used. This feeder integrates three different tasks: singulation, orientation, and presentation. The parts are stored as bulk material in the bunker of the bowl feeder, which is connected via a slanted feeding track to the pick-up point. Driven by vibrations that lamellar springs and an electric motor generate, the parts move serially toward the pick-up point; the movement results from the superposition of micro-throws and a backwards slide. A disadvantage of the bowl feeder is that the parts may be damaged by the micro-throws and by the relative movement and contact between the parts and between a single part and the surface of the bowl feeder. Additionally, this feeding principle generates annoying noise. Another disadvantage is that the bowl feeder is extremely inflexible with regard to different parts because it is strictly constructed for one specific part. Also, it is not yet possible to automate the design of a bowl feeder; whether a bowl feeder will meet the requirements depends on the experience and skill of the person constructing it. Nevertheless, it is and will continue to be one of the most important feeding technologies for small parts. Larger and more expensive parts, or parts whose surfaces must not be damaged, can be presented to the handling device palletized in deep-draw work trays. Generally speaking, a degree of orientation, once reached, should never be lost again, or higher costs will result.
2.3.4.
Linkage
Normally a product is not assembled and produced by only one machine. Therefore, multiple handling devices, machines, and processes have to be arranged in a special way. The different ways of doing this are shown in Figure 19. The easiest, which has been widely used since the industrial revolution, is the loose linkage. It is characterized by discrete ingoing and outgoing buffers for each machine or process, which makes the cycle times of the different machines independent of one another. Its disadvantages are that very large inventories build up and the flow time is very long. An advantage is that production keeps running for some time even if one machine fails, because the other machines still have stock. The stiff linkage works the opposite way: the workpieces are transported from station to station without buffers in between. The corresponding cycle time is equal to the process time of the slowest machine. As a consequence, if one machine is out of order, the whole production line has to be
Figure 19 Different Possibilities for Linking Assembly Cells: loose linkage, elastic linkage, and stiff linkage.
stopped. A well-known application of this principle can be found in automobile production, where, because there are no bypasses, the whole assembly line has to be stopped if problems occur at any working cell. The elastic linkage can be seen as a combination of the other two: the stations are connected by different transport elements (belts, live roller conveyors, etc.). Because the individual transport elements are not connected with one another, they can serve as additional buffers, so the failure of one machine does not necessarily stop the whole production line.
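The trade-off between the linkage types can be illustrated with a small throughput simulation. The sketch below is not from the source; the station count, failure probability, and repair time are assumed values, and the loose linkage is idealized as having buffers so large that only the bottleneck station limits output:

```python
import random

def line_throughput(n_stations=3, steps=100_000, fail_prob=0.01,
                    repair_time=20, stiff=True, seed=42):
    """Fraction of cycles in which the line completes a part.
    stiff=True: the whole line stops when any station is down.
    stiff=False: buffers are assumed large enough that only the
    worst single station limits output (idealized loose linkage)."""
    rng = random.Random(seed)
    down = [0] * n_stations        # remaining repair time per station
    all_up_steps = 0
    up_steps = [0] * n_stations
    for _ in range(steps):
        for i in range(n_stations):
            if down[i] == 0 and rng.random() < fail_prob:
                down[i] = repair_time      # station i breaks down
        if not any(down):
            all_up_steps += 1              # stiff line indexes only now
        for i in range(n_stations):
            if down[i]:
                down[i] -= 1
            else:
                up_steps[i] += 1
    if stiff:
        return all_up_steps / steps
    return min(up_steps) / steps

print(f"stiff linkage throughput: {line_throughput(stiff=True):.2f}")
print(f"loose linkage throughput: {line_throughput(stiff=False):.2f}")
```

With these assumptions the stiff line produces only when every station is up, so its throughput is roughly the product of the individual station availabilities, while the buffered line approaches the availability of its worst station.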
2.4.
Manual Assembly Systems
2.4.1.
Description of Manual Assembly Systems and Their Components
Manual assembly systems are often used in the areas of fine mechanics and electrical engineering. They are suitable for the assembly of products with a large number of versions or products of high complexity. Human workers are the focal point of manual assembly systems. They execute assembly operations using their manual dexterity, their senses, and their intelligence, supported by many tools and devices (Lotter 1992). The choice of tools depends on the assembly problem and the specific form of assembly organization. The most frequently used forms of organization are, on the one hand, assembly at a single separate workstation and, on the other, flow assembly with chained assembly stations. The choice of the form of organization depends on the size of the product, the complexity of the product, the difficulty of assembly, and the number of units. Single workstations are used for small products or modules with limited complexity and a small number of units. High version and quantity flexibility are their most important advantages; in addition, disturbances affect other workstations only to a small extent. Containers for the basic parts and the assembly parts are the substantial constituents of manual assembly systems. The assembly parts are often supplied in grab containers. The distances to be covered by the workers' arms should be short and in the same direction, the intention being to shorten the cycle time and reduce the physical strain on the workers. This can be realized by arranging the grab containers in paternosters or on rotary plates. Further important criteria are glare-free lighting and adapted devices such as footrests and work chairs (Lotter and Schilling 1994). When assembly at a single workstation is impossible for technological or economic reasons, the assembly can be carried out at several chained manual assembly stations (Lotter 1992). Manual assembly systems consist of a multiplicity of components, as shown in Figure 20. The stations are chained by double-belt conveyors or transport rollers. The modules rest on carriers with adapted devices for fixing the modules. The carriers form a defined interface between the module and the
Figure 20 Components of Manual Assembly Systems: lighting, grab containers, operating control, identification system, carrier, separation, conveyor, and footrest. (Source: Teamtechnik)
superordinate flow of material. For identifying the different versions, the carriers can be coded. Identification systems distinguish the different versions and route them to the correct assembly stations.
2.4.2.
Criteria for the Design of Manual Assembly Systems
There are many guidelines, tables, and computer-aided tools for the ergonomic design of workstations. The methodology of planning and realizing a manual workstation is shown in Figure 21. The following criteria have to be considered in designing manual assembly systems:
1. Design of the workstation (height of the workbench and work chair, dimensions of the footrest, etc.)
2. Arrangement of the assembly parts, tools, and devices
3. Physical strain on workers (forces, torque, noise)

Figure 21 Ergonomic Design of Manual Workstations. (Source: Bosch) The methodology proceeds from ergonomic guidelines (dimensions of the workstation, arrangement of devices, partitioning of the grab area) through simulation (optimized arrangement of components, ergonomic examinations, optimization and compression of movements) to the real workstation (choice of components, creation of the workstation, integration into the manufacturing environment).
Ergonomic design requires adjusting the work height, chair height, grab distance, and so on to human dimensions. It should be possible for workers to do the work sitting or standing (Konold and Reger 1997). To ensure that most potential workers work under ergonomic conditions, human dimensions between the 5th and 95th percentiles are considered in designing manual workstations. One of the most important criteria is the organization of the assembly sequences. The assembly operation should be easy and require little force from the worker. Devices such as screwdrivers can reduce the physical strain on the worker, and an optimized arrangement of the assembly parts and components can rationalize the movements. For this purpose, the grab room is partitioned into four areas (Lotter 1992):
1. The grab center
2. The extended grab center
3. The one-hand area
4. The extended one-hand area
Assembly should take place in the grab center, where both hands can be seen; part supply should be in the one-hand area. The forces and torques also have to be considered: overstressing the worker with too great or too prolonged physical strain is not permitted. The maximum limit values for static and dynamic load depend on the frequency of the operation, the holding time, and the body posture; further correction factors are the sex, age, and constitution of the worker. Today, many computer-aided tools have been developed for the optimization of devices, minimization of forces, time registration, and simplification of movements. They also enable shorter planning times, lower planning costs, and extended possibilities for the optimization of movements (Konold and Reger 1997).
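As an illustration of the 5th-to-95th-percentile rule, the following sketch checks whether an adjustable workbench covers the preferred working heights of the extreme users. The elbow-height values and the working-height offset are hypothetical placeholders, not data from the source or from an anthropometric standard:

```python
# Hypothetical standing elbow heights in mm (illustrative only):
ELBOW_HEIGHT_STANDING = {"p5_female": 930, "p95_male": 1180}

def covers_population(bench_min_mm, bench_max_mm,
                      offset_mm=-50, pop=ELBOW_HEIGHT_STANDING):
    """A bench for light assembly is often set slightly below elbow
    height (offset_mm).  The adjustable range must reach the preferred
    height of both the 5th- and the 95th-percentile worker."""
    lo = pop["p5_female"] + offset_mm
    hi = pop["p95_male"] + offset_mm
    return bench_min_mm <= lo and hi <= bench_max_mm

print(covers_population(900, 1100))   # too narrow an adjustment range
print(covers_population(850, 1200))   # covers the whole population
```

The same check generalizes to chair height, footrest dimensions, and grab distance once the corresponding percentile values are known.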
2.5.
Automated Assembly Systems
Automated assembly systems are used mainly for the production of series and mass-produced articles. In the field of indexing machines, a distinction is made between rotary indexing turntables and rectilinear transfer machines. The essential difference between the two systems is the spatial arrangement of the individual workstations. Rotary indexing turntables are characterized by short transport distances; therefore, high clock speeds are possible. Their disadvantage is the restricted number of assembly stations due to the limited space. Rectilinear transfer machines can be equipped with as many assembly stations as needed, but the attainable cycle time deteriorates because of the longer transport distances between the individual stations. Indexing machines are characterized by a rigid chain of stations. The construction design depends mostly on the complexity of the product to be mounted. The main movements (drives for transfer systems) can be effected by an electric motor via an adapted ratchet mechanism or cam-and-lever gears, or can be implemented pneumatically and/or hydraulically. Secondary movements (clamping
Figure 22 Rectilinear Transfer Machine with High Productivity. (Source: IMA Automation)
Figure 23 Rotary Indexing Turntable for an Efficient Assembly of Mass-Produced Articles. (Source: IMA Automation)
of parts, etc.) can be carried out mechanically (e.g., via cam-and-lever gears), electromechanically, or pneumatically. The handling and assembly stations are often driven synchronously over cam disks. If small products are assembled under optimal conditions, an output of up to 90 pieces/min is possible. However, this presupposes a product design suitable for automation, a vertical joining direction, easy-to-handle components, processes under control, and a small number of product variants. The total availability of the assembly system is influenced by the availability of the individual feeding devices. The number of stations needed depends on the extent of the single working cycles that have to be carried out (e.g., feeding, joining, processing, testing, adjusting). Therefore, the decision between rotary indexing and rectilinear transfer machines depends not only on the necessary number of workstations and the required space but also on the entire assembly task. On the one hand, short cycle times and high accuracy during the assembly of small products can be achieved with typical indexing machines. On the other hand, only the assembly of mass-produced articles with few product variants is possible, and the reusability of the individual components is normally very low. Consequently, these product-specific special-purpose machines amortize only with difficulty. To balance this disadvantage, modern rotary indexing turntables with higher flexibility, greater reusability, easier re-equipping, and higher modularity are available on the market. The product-independent components (basic unit with drive, operating panel, protective device, control box, control system, transport unit) should be strictly separated from the product-dependent components (seat, handling and assembly modules). Defined and standardized interfaces are very useful for integrating the product-specific components into the basic system quickly and with little effort. With modular construction, it is possible to combine the high capability of indexing machines with economical use of resources.
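The remark that total availability is governed by the individual feeding devices can be made concrete with the standard series-system model, in which the availabilities of rigidly chained stations multiply (assuming independent failures; the station counts and availability values below are illustrative, not from the source):

```python
from math import prod

def system_availability(station_availabilities):
    """In a rigidly chained indexing machine, every station (and its
    feeder) must be up for the system to run, so the availabilities
    of serially linked stations multiply (simplified model assuming
    independent failures)."""
    return prod(station_availabilities)

def output_per_hour(cycle_rate_per_min, availabilities):
    """Effective output of, e.g., a rotary indexing turntable rated
    at 90 pieces/min under the availability model above."""
    return cycle_rate_per_min * 60 * system_availability(availabilities)

# Ten stations at 99% each already pull system availability near 90%:
avail = system_availability([0.99] * 10)
print(f"system availability: {avail:.3f}")
print(f"effective output: {output_per_hour(90, [0.99] * 10):.0f} pieces/hour")
```

The model makes the design pressure visible: every additional feeder in the rigid chain multiplies another availability factor into the total.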
2.6.
Flexible Assembly Systems
Different objectives can be pursued in order to increase the flexibility of assembly processes. On the one hand, it may be necessary to handle different workpieces; on the other hand, different versions of one product may have to be produced in changing amounts. Another contribution to flexibility is to use the advantages of off-line programming in order to optimize cycle time when the manufactured product is changed.
2.6.1.
Assembly of Different Versions of a Product in Changing Amounts
For producing changing amounts of different versions of a product, an arrangement as shown in Figure 24, with a main assembly line to which individual modules are supplied from self-sufficient manufacturing cells, is very suitable. In order to decouple module manufacturing from the main assembly line with respect to the availability of the different modules, a storage unit is placed between module assembly and main assembly line to keep the necessary numbers of currently needed module versions permanently ready. Thus, even lot size 1 can be manufactured, and at the same time the system remains ready for use even when one module line fails. The individual manufacturing cells are connected by a uniform workpiece-carrier transport system, whereby the workpiece carriers, which are equipped with memory capacity, function simultaneously as means of transport, as assembly fixtures, and for product identification. Because the workpiece carriers are equipped with a mobile memory, all necessary manufacturing and test data accompany the product through the whole process. Thus, the product data are preserved even if the system controller breaks down.

Figure 24 Flexible Manufacturing of Different Versions of a Product in an Assembly Line. (Source: Siemens)
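The role of the workpiece carrier's mobile memory can be sketched as a simple data structure. The class, field, and record names below are illustrative, not an actual carrier or controller interface:

```python
from dataclasses import dataclass, field

@dataclass
class WorkpieceCarrier:
    """Sketch of a workpiece carrier with mobile memory: it identifies
    the product version and accumulates manufacturing and test data as
    the carrier moves through the cells."""
    carrier_id: str
    product_version: str
    history: list = field(default_factory=list)

    def record(self, cell, operation, result):
        # The data travel with the carrier, so they survive a
        # breakdown of the central system controller.
        self.history.append(
            {"cell": cell, "operation": operation, "result": result})

carrier = WorkpieceCarrier("WT-0815", "version-B")
carrier.record("module-cell-2", "press-fit bearing", "ok")
carrier.record("main-line-5", "leak test", "ok")
print(len(carrier.history), "records on carrier", carrier.carrier_id)
```

Because each carrier holds its own history, the identification system at each station can read the version and routing information locally instead of querying a central database.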
2.6.2.
Flexible Handling Equipment
Figure 25 shows an industrial robot with six DOF as an example of handling equipment with great flexibility of movement. Especially in the automobile industry, flexible robots are commonly used, and a further increase in investment in automation and process control is predicted. Robots are introduced in automobile assembly to relieve human workers of strenuous tasks, to handle liquids, and to position complicated shapes precisely. Typical applications include windshield installation, battery mounting, gasoline filling, rear-seat setting, and tire mounting. Another important application where the flexibility of a robot is necessary is the manufacture of automobile bodies by spot welding. In order to reach each point in the workspace with any desired orientation of the welding tongs at the tool center point, at least three main axes and three secondary axes are necessary. Because each point of the workspace can then be reached with any orientation, all kinds of workpieces can be handled and assembled.
2.6.3.
CAD-CAM Process Chain
In traditional production processes such as turning and milling, numerical control is widespread, and process chains for CAD-based programming of the systems are commonly used. In contrast, winding systems are even now often programmed using the traditional teach-in method.

Figure 25 Most Kinds of Workpieces Can Be Assembled by an Industrial Robot Because of Its Great Flexibility of Movement.

Figure 26 shows a winding system for which a CAD-CAM process chain has been developed by the Institute for Manufacturing Automation and Production Systems in order to increase flexibility with respect to product changes. The disadvantages of on-line programming are obvious: because complex movements of the wire-guiding nozzle are necessary while the wire is fixed at the solder pins and guided from the pin to the winding space, considerable production downtimes result during programming. Furthermore, collisions are always a danger when programming the machine with the traditional teach-in method. The trend toward smaller production batches and shorter production cycles, combined with high investment, necessitates a flexible off-line programming system for assembly cells such as winding systems. Therefore, a general CAD-CAM process chain has been developed. The process chain allows the CAD model of the bobbin to be created conveniently by adding or subtracting geometric primitives where necessary. Afterwards, the path of the wire-guiding nozzle is planned at the CAD workstation, where the user is supported by algorithms that help avoid collisions with the bobbin or the winding system. Simulating the wire-nozzle movements at the CAD system allows visual and automated collision detection to be carried out. If a collision occurs, an automatic algorithm tries to find an alternative route. If no appropriate path can be calculated automatically, the wire-nozzle movements can be edited by the user. To this point, the result of this CAD-based planning system is a data file that is independent of the winding machine used. A postprocessor translates this file into a machine-specific data file, which is necessary for the numerical control of the winding system. This fault-free NC data file can be sent from the CAD workstation to the machine control unit via network, and the manufacturing of the new product can start (Feldmann and Wolf 1996, 1997; Wolf 1997).
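The automated collision detection mentioned above can be illustrated in miniature with a 2D distance test between a straight nozzle move and a circular bobbin cross-section. This is a deliberate simplification of the 3D check described in the text, and the geometry values are invented:

```python
import math

def segment_hits_circle(p, q, center, radius):
    """Does the straight nozzle move from p to q pass closer than
    `radius` to the bobbin axis at `center`?  (2D stand-in for the
    CAD system's collision test.)"""
    (px, py), (qx, qy), (cx, cy) = p, q, center
    dx, dy = qx - px, qy - py
    if dx == 0 and dy == 0:
        dist = math.hypot(px - cx, py - cy)
    else:
        # Parameter of the closest point on the segment, clamped to [0, 1].
        t = ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy)
        t = max(0.0, min(1.0, t))
        dist = math.hypot(px + t * dx - cx, py + t * dy - cy)
    return dist < radius

# A nozzle path passing 5 mm above a bobbin of radius 8 mm collides:
print(segment_hits_circle((0, 5), (20, 5), (10, 0), 8))
# A detour at a height of 12 mm clears the bobbin:
print(segment_hits_circle((0, 12), (20, 12), (10, 0), 8))
```

A path planner of the kind described would run such a test for every planned move and, on a hit, search for an alternative route or hand the segment back to the user for editing.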
2.7.
Control and Diagnosis
Despite major improvements in equipment, advanced expertise on assembly processes, and computer-aided planning methods, assembly remains susceptible to failure because of the large variety of parts and shapes being assembled and their corresponding tolerances. Computer-aided diagnosis makes a substantial contribution to increasing the availability of assembly systems. Assuring fail-safe processes minimizes throughput times and enables quality targets to be achieved.
Figure 26 Flexible Automation in Winding Technology Using Off-Line Programming.
Diagnosis includes the entire process chain of failure detection, determination of the cause of failure, and proposal and execution of measures for fault recovery. Realizing and operating a high-performance diagnosis system requires comprehensive acquisition of the assembly data. For this purpose, three different sources of data entry in assembly systems can be used. First, machine data from controls can be recorded automatically. Second, for extended information, additional sensors based on various principles can be integrated into the assembly system. Third, the data can be input into the system manually by the operator. Diagnosis systems can be divided into signal-based, model-based, and knowledge-based systems. A crucial advantage of knowledge-based systems is the simple extendability of the database, for example to cover new failures; therefore, they are frequently used with assembly systems. Depending on the efficiency of the diagnosis system, different hierarchically graded control strategies are possible. In the simplest case, the diagnosis system only supplies information about the occurrence and number of a disturbance. The user must determine the cause of failure and execute
Figure 27 Computer-Aided Diagnosis with Hierarchically Graded Strategies for Fault Recovery. (Manual and automatic data acquisition feed the diagnosis; the graded strategies range from information only, through diagnosis with instructions, to automatically initiated actions.)
the fault recovery on his own. Efficient diagnosis systems offer the user information about the number and location of the failure. Furthermore, appropriate diagnostic strategies are available that allow computer-aided detection of the cause of failure; such systems also suggest appropriate measures for fault recovery. In addition, diagnosis systems can independently initiate reaction strategies after automatically determining the cause of failure. These strategies directly affect the control of the assembly system or the assembly process. Because of the particularly high complexity of this procedure, it is applicable only to simple and frequently occurring errors.
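A knowledge-based diagnosis system of the kind described, with its easily extendable database, can be sketched as a symptom lookup table. All symptoms, causes, and recovery measures below are invented for illustration:

```python
# Minimal knowledge base mapping failure symptoms to causes and
# recovery measures (entries are illustrative, not from the source).
KNOWLEDGE_BASE = {
    "feeder empty": {
        "cause": "bowl feeder bunker ran out of parts",
        "measure": "refill bunker and acknowledge the fault",
        "auto_recovery": False,
    },
    "vacuum lost": {
        "cause": "part dropped from vacuum gripper",
        "measure": "purge nozzle and repeat the pick-up cycle",
        "auto_recovery": True,
    },
}

def diagnose(symptom):
    """Look up a symptom; unknown symptoms fall back to 'inform only',
    mirroring the simplest of the hierarchically graded strategies."""
    entry = KNOWLEDGE_BASE.get(symptom)
    if entry is None:
        return {"cause": "unknown",
                "measure": "operator diagnosis required",
                "auto_recovery": False}
    return entry

# New failures extend the database without changing the program logic:
KNOWLEDGE_BASE["torque too low"] = {
    "cause": "screwdriver bit worn",
    "measure": "replace bit",
    "auto_recovery": False,
}

print(diagnose("vacuum lost")["measure"])
print(diagnose("axis blocked")["cause"])
```

The `auto_recovery` flag marks the simple, frequently occurring errors for which the system may initiate the reaction strategy itself; everything else is escalated to the operator.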
3.
ELECTRONIC PRODUCTION
3.1.
Process Chain in Electronic Production
Production of electronic devices and systems has developed into a key technology that affects almost all product areas. In the automotive sector, for example, the number of electronic components used has increased by about 200,000% during the last 20 years and today accounts for more than 30% of the total value added. The process chain of electronics production can be divided into three process steps: solder paste application, component placement, and soldering (Figure 28). Solder paste application can be realized by different technologies. The industry mainly uses stencil printing to apply the solder paste: solder paste is pressed onto the printed circuit board (PCB) by a squeegee through openings in a metal stencil, allowing short cycle times. Several other technologies are on the market, such as single-point dispensing, which adapts very flexibly to PCB layout changes but leads to long cycle times because of its sequential character. In the next step, surface-mounted devices (SMDs) and/or through-hole devices (THDs) are placed on the PCB. Finally, the PCB passes through a soldering process. Traditionally, soldering processes are classified into two groups: reflow and wave soldering. The purpose of reflow processing is to heat the assembly to specified temperatures for a definite period of time so that the solder paste melts and creates a mechanical and electrical connection between the components and the PCB. Today, three reflow methods are used in soldering: infrared (IR), convection, and condensation. A different method of molten soldering is wave or flow soldering, which brings the solder to the board: liquid solder is forced through a chimney into a nozzle arrangement and returned to the reservoir. For wave soldering, the SMDs must be glued to the board in a preceding step. The quality of the electronic devices is checked at the end of the process chain by optical and electrical inspection systems (IPC 1996). The special challenges in electronics production result from the comparatively rapid innovation of microelectronics: the continuing trend toward further integration at the component level leads to permanently decreasing structure sizes at the board level.
3.2.
Electronic Components and Substrate Materials
Two main focuses can be distinguished in the development of electronic components. With the continuing trend toward function enhancement in electronic assembly, both miniaturization and integration are leading to new and innovative component and package forms. In the field of passive components, the variety of packages extends down to chip size 0201, meaning that resistor or capacitor components with dimensions of 0.6 mm × 0.3 mm have to be processed. The advantages of
Figure 29 Component Technology: Development through the Decades.
such components are obvious: smaller components lead to smaller and more densely packed assemblies. On the other hand, processing such miniaturized parts is often a problem for small and medium-sized companies because special and cost-intensive equipment is needed (high-precision placement units with component-specific feeders, nozzles, or vision systems). Regarding highly integrated components, two basic technologies should be mentioned. First, leaded components such as the quad flat pack (QFP) are used with a wide range of lead pitches, from 1.27 mm down to 300 µm. But these packages with a high pin count and very small pitch are difficult to process because of their easily damaged pins and the very small amounts of solder involved. Therefore, area array packages have been developed. Instead of leaded pins on all four sides of the package, area array packages use solder balls under the whole component body. This easily allows more connections within the same area, or a smaller package with the same or even a larger pitch. Ball grid arrays (BGAs) and their miniaturized version, chip scale packages (CSPs), are among the other packages used in electronics production (Lau 1993). Further miniaturization is enabled by the direct assembly of bare dies onto circuit carriers. This kind of component is electrically connected by wire bonding; other methods for direct chip attachment are flip chip and tape-automated bonding. All three methods require special equipment for processing and inspection. In addition to planar substrates, many new circuit carrier materials are being investigated. Besides MID (see Section 4), flexible circuit technology has proved to be a market driver in the field of PCB production. Today, flexible circuits can be found in nearly every type of electronic product, from simple entertainment electronics right up to the highly sophisticated electronic equipment found in space hardware. With growth expected to continue at 10–15% per year, this is one of the fastest-growing interconnection sectors and is now close to $2 billion in sales worldwide. However, up to now, most flexible circuit boards have been based on either polyester or polyimide. While polyester (PET) is cheaper, it offers lower thermal resistance (in most cases, reflow soldering with standard alloys is not possible); polyimide (PI) is favored where assemblies have to be wave or reflow soldered with standard alloys. On the other hand, the relative cost of polyimide is about 10 times that of polyester. Therefore, a wide gap between these two dominant materials has existed for a long time, prohibiting the broad use of flexible circuits for extremely cost-sensitive, high-reliability applications like automotive electronics. Current developments in the field of flexible base materials, as well as the development of alternative solder alloys, seem to offer a potential solution to this dilemma.
3.3.
Application of Interconnection Materials
The mounting of SMD components onto PCBs or flexible substrates demands a certain amount of interconnection material (solder or conductive adhesive) to form a correct joint. In contrast to wave soldering, where heat and material (molten solder) are provided simultaneously during the soldering process, reflow soldering of SMDs necessitates the application of solder in the form of paste as the first process step in the assembly line. For interconnections established by conductive adhesive, too, the application of interconnection material is necessary. This process is a decisive step in electronic device assembly because failures caused here can create difficulties in the following process steps, such as component placement and reflow soldering. In modern high-volume assembly lines, stencil printing is the predominant method for applying solder paste to the substrates. Other methods used industrially to a minor extent are automatic dispensing and screen printing. Each method has its specific advantages and drawbacks, depending on, for example, batch sizes or technological boundary conditions like component pitch. The main advantage of stencil printing over screen printing occurs in applications where very small areas of paste have to be deposited: for components with pitches of 0.65 mm or smaller, stencil printing is the only viable way of printing solder paste. Therefore, stencil printing has replaced screen printing in most cases. Dispensing has the advantage over screen and stencil printing of being highly flexible; for small batches, dispensing may be an economical alternative to printing. On the other hand, dispensing is a sequential process and therefore quite slow, and paste dispensing for fine-pitch components is limited. The principle of a stencil printer is shown in Figure 30(a). The major components of an automatic stencil printer are the squeegee blades (stainless steel or polyurethane), the stencil itself, the work nest (which holds the substrate during printing), an automatic vision system for PCB alignment, and sometimes a print inspection system. During the printing process, solder paste is pressed by a squeegee through the apertures of a metal foil (the stencil, mounted in a frame) onto the substrate. Dispensing is a method of solder paste application in which separate small dots of paste are deposited onto the substrate sequentially. Dispensing is well suited for applying solder paste when small batches have to be produced. The main advantages are short changeover times for new boards (only a new dispensing program has to be loaded) and low cost (no stencils or screens are needed). Several dispensing methods are realized in industrial dispensing systems; the two dominant principles are the time-pressure method and the rotary pump method (positive displacement). The principle of time-pressure dispensing is shown in Figure 30(b): a certain amount of solder paste is dispensed by pressing down the piston of a filled syringe with compressed air for a certain time. The paste flows from the syringe through the needle onto the PCB pad. The amount of solder can be varied by changing the dispensing time and the air pressure.
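The statement that the dispensed amount varies with dispensing time and air pressure can be turned into a rough trend model. The sketch below treats the paste as a Newtonian fluid in the needle and applies the Hagen-Poiseuille law; real solder pastes are shear-thinning, so this captures trends only, and every parameter value is an assumption:

```python
import math

def dispensed_volume_mm3(pressure_kpa, time_s, needle_radius_mm=0.25,
                         needle_length_mm=10.0, viscosity_pa_s=200.0):
    """Idealized time-pressure dispensing: laminar Newtonian flow
    through the needle (Hagen-Poiseuille), so volume grows linearly
    with both pressure and time."""
    r = needle_radius_mm * 1e-3          # m
    length = needle_length_mm * 1e-3     # m
    dp = pressure_kpa * 1e3              # Pa
    flow = math.pi * r**4 * dp / (8 * viscosity_pa_s * length)  # m^3/s
    return flow * time_s * 1e9           # mm^3

# In this model, doubling either pressure or time doubles the dot volume:
v1 = dispensed_volume_mm3(300, 0.05)
v2 = dispensed_volume_mm3(600, 0.05)
print(f"{v1:.4f} mm^3 -> {v2:.4f} mm^3")
```

The strong fourth-power dependence on needle radius in the model also hints at why paste dispensing for fine-pitch components, which requires small needles, is limited in practice.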
3.4.
Component Placement
Traditional THD technology has been nearly replaced by SMD technology in recent years. Only SMD technology permits cost-efficient, high-volume, high-precision mounting of miniaturized and complex components. The components are picked up with a vacuum nozzle, checked, and placed in the correct position on the PCB.
3.4.1.
Kinematic Concepts
Within the area of SMD technology, three different concepts have found a place in electronics production (Figure 31) (Siemens AG 1999). The starting point was the pick-and-place principle: the machine takes one component from a fixed feeder table at the pick-up position, identifies and checks the component, transports it to the placement position on the fixed PCB, and sets it down. New concepts have been developed because pick-and-place machines have not been able to keep pace with the increasing demands on placement rates that accompany higher unit quantities. The original pick-and-place process is now used only when high accuracy is required. The first variant is the collect-and-place principle, based on a revolver head (horizontal rotational axis) on a two-axis gantry system. Several components are collected within one placement cycle, and the positioning time per component is reduced. Additionally, operations such as component centering are carried out while the nozzle at the bottom (pick-up) position is collecting the next component, so centering does not influence the placement time. The highest placement rate per module can be obtained with the carousel principle. The most widely used version of this principle uses a movable changeover feeder table and a moving PCB. It is analogous to the revolver principle in that the testing and checking stations are arranged around the carousel head and the cycle time depends primarily on the collecting and setting time of the components. High-performance placement systems are available for the highest placement rates, especially for telecommunications products. These systems take advantage of several individual parallel pick-and-place systems. A placement rate of up to 140,000 components per hour can be attained.
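Why sharing one gantry trip over several nozzles raises the rate can be seen from a simple timing model. All timing values below are illustrative assumptions, not vendor data:

```python
def rate_cph(cycle_s):
    """Components per hour for a given per-component cycle time."""
    return 3600.0 / cycle_s

def pick_and_place_cycle(t_pick, t_travel, t_place):
    # each component needs its own feeder-to-board round trip of the head
    return t_pick + 2.0 * t_travel + t_place

def collect_and_place_cycle(t_pick, t_travel, t_place, n_nozzles):
    # one gantry round trip is shared by the n components collected
    # on the revolver head, so travel time is amortized
    return t_pick + 2.0 * t_travel / n_nozzles + t_place

# assumed times: 0.1 s pick, 0.5 s gantry travel, 0.1 s place, 12-nozzle revolver
pnp = rate_cph(pick_and_place_cycle(0.1, 0.5, 0.1))         # 3,000 comp/hr
cnp = rate_cph(collect_and_place_cycle(0.1, 0.5, 0.1, 12))  # ~12,700 comp/hr
```

With these assumed numbers, amortizing the gantry travel alone raises the rate roughly fourfold, which is why the pick-and-place principle survives only where its per-component accuracy is needed.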
3.4.2.
Classification of Placement Systems
The classification and benchmarking of placement machines depend on different influencing factors. In addition to accuracy and placement rate, the placement rate per investment, the placement rate per required floor area, operation, maintenance, and flexibility have to be considered.

Figure 30 Alternative Principles for Application of Solder Paste. (a) Stencil printing: solder paste is pressed by a squeegee through the openings of a stencil. (b) Dispensing: pressure on a piston forces solder paste or adhesive through a nozzle onto the printed circuit board.

In the future, flexible
machines with changeable placement heads will be configurable to the currently mounted component mix within a short time. The whole line can thus be optimized to the currently produced board in order to achieve higher throughput and reduce placement costs.
3.4.3.
Component and PCB Feeding
An important factor for the throughput and availability of a placement machine is component feeding. Depending on the package form, the user can choose between different types of packaging (Figure 32). For packages of low or medium complexity, taped components are favored. With the development of improved feeders, the use of components in bulk cases is becoming more important. Compared to tapes, bulk cases have a smaller packaging volume (lower transport and stock costs) and a higher component capacity (less replenishing), and they can be changed during the placement process. The components can be presented with higher accuracy, and the cassette can be reused and recycled. The disadvantages of bulk cases are that they are unsuitable for directional components and that changeover to other components takes more time and requires greater care. For feeding complex components, often in low volumes, waffle-pack trays are used. Automatic exchangers, which can hold up to 30 waffle packs, reduce the space requirement.

Figure 31 Alternative Concepts for Component Placement: manual, pick-and-place, collect-and-place, and carousel.
Figure 32 Component and PCB Feeding. Component feed: tape, tray, or bulk case (e.g., R100, tolerance 5%). PCB transport: single transport or dual transport. Line configuration: serial or parallel.
The trend toward more highly integrated components is reducing the placement load per PCB. As a direct result, the nonproductive idle time (PCB transport) has a greater influence on the processing time and leads to increased placement costs. With the use of a dual transport (Figure 32) in asynchronous mode, the transport time can be eliminated: during component placement onto the first PCB, the next PCB is transported into the placing area. For the optimal placement rate to be achieved, high-performance placement machines have to assemble more than 300 components per board. To reach this optimal operating point of the machine with an acceptable cycle time, the machines are connected in parallel instead of in a serial structure, with parallel machines placing the same components. Depending on the product, the placement rate can be raised by up to 30%.
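The benefit of asynchronous dual transport can be sketched with a two-line throughput model (placement and transport times assumed for illustration):

```python
def serial_board_time(t_place_s, t_transport_s):
    # single transport: placement and PCB exchange happen one after the other
    return t_place_s + t_transport_s

def dual_board_time(t_place_s, t_transport_s):
    # asynchronous dual transport: the next PCB is shuttled in while the
    # current one is being populated, so transport is hidden unless it
    # takes longer than the placement work itself
    return max(t_place_s, t_transport_s)

# assumed: 12 s of placement per board, 3 s PCB transport
gain = serial_board_time(12, 3) / dual_board_time(12, 3) - 1.0  # 25% more boards/hr
```

The model also shows why the trend to fewer components per board hurts: as the placement time shrinks toward the transport time, the hidden transport fraction grows and, without dual transport, dominates the cycle.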
3.4.4.
Measures to Enhance Placement Accuracy
The trend in component packaging toward miniaturization of standard components and the development of new packages for components with high pin counts (fine pitch, BGA, flip chip) are increasing the requirements for placement accuracy. For standard components, an accuracy of 100 μm is sufficient. Complex components must be placed with an accuracy better than 50 μm. For special applications, high-precision machines are used with an accuracy of 10 μm, related to a normal distribution at a standard deviation of 4σ. This means, for example, that only 60 of 1 million components will be outside a range of ±10 μm. Because only a small amount of solder paste can be used with fine-pitch components (down to 200 μm pitch), even minimal vertical bending of the component leads (about 70 μm) causes faulty solder joints and requires cost-intensive manual repairs. Each lead is therefore optically scanned by a laser beam during the coplanarity check after pick-up of the component (Figure 33), and the measured data are compared with a default component-specific interval. Components outside the tolerance are automatically rejected. Given the increasing requirements for placement accuracy, mechanical centering of PCBs in the placement process will not suffice. The needed accuracy is possible only if gray-scale CCD camera systems are used to locate the PCB marks (fiducials) in order to determine the position of the PCB relative to the placement head and the twist of the PCB. Additionally, local fiducials are necessary for mounting fine-pitch components. Furthermore, vision systems are used to recognize the position of each component before placement (visual component centering) and to correct the positions. Instead of CCD cameras, some placement systems are fitted with a CCD line: the component is rotated in a laser beam and the CCD line detects the resulting shadow.
Figure 33 Measures to Enhance Placement Accuracy. Coplanarity check: each lead is scanned by a laser (transmitter and receiver optics) against a reference level. Component/PCB position: a CCD camera locates the fiducials on the PCB and a CCD detector inspects the balls of BGA packages. Positioning force: the force F rises to a maximum Fmax at the placement time Tplace.
Component-specific illumination parameters are necessary for ball/bump inspection and centering of area-array packages. Direct optoelectronic scanning units are used for the positioning control of the axes to minimize positioning offsets. The glass scales have a resolution of up to 1 increment per μm. The influence of temperature drifts of, for example, the drives can thus be partly compensated for. Despite these preventive measures and the high quality of the individual systems in placement machines, reproducible errors caused by manufacturing, transport, or, for example, crashes of the placement head during the process must be compensated for. Several manufacturers offer calibration tools that make inline mapping of the machines possible. Highly accurate glass components are positioned on a calibration board (also glass) with special position marks. The fiducial camera scans the position of each component relative to the board (resolution about 2–3 μm). Extrapolating a correction value allows the placing accuracy of the machine to be improved. Fine-pitch components must be placed within a close force interval to guarantee sufficient contact with the solder paste while avoiding deformation of the leads. Adapted driving profiles are necessary to reach the optimal positioning speed with accelerations of up to 4 g, but the last millimeters of the placing process must be done under sensor control to reduce the positioning force to a few newtons.
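The split between a fast driving profile and a slow force-controlled approach can be illustrated with a trapezoidal motion model. All numbers below are assumed for illustration, not machine data:

```python
import math

def move_time(d_m, a_max, v_max, approach_m, v_approach):
    """Placement-head travel time: a fast trapezoidal (or triangular)
    velocity profile over most of the stroke, then a slow sensor-controlled
    final approach over the last stretch."""
    d_fast = d_m - approach_m
    d_full = v_max**2 / a_max            # distance needed to accelerate and brake
    if d_fast <= d_full:                 # v_max never reached: triangular profile
        t_fast = 2.0 * math.sqrt(d_fast / a_max)
    else:
        t_fast = 2.0 * v_max / a_max + (d_fast - d_full) / v_max
    return t_fast + approach_m / v_approach

# 300 mm move at up to 4 g and 2 m/s, last 2 mm crept at 20 mm/s
t = move_time(0.300, 4 * 9.81, 2.0, 0.002, 0.020)  # roughly 0.3 s
```

With these assumed values, the 2 mm sensor-controlled approach consumes about a third of the whole move, which is why it is kept as short as the lead tolerances allow.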
3.5.
Interconnection Technology
In electronics production, two main principles of interconnection are used: soldering using metal-based alloys, and adhesive bonding with electrically conductive adhesives (Rahn 1993). Soldering is a process in which two metals, each having a relatively high melting point, are joined together by means of an alloy having a lower melting point. The molten solder undergoes a chemical reaction with both base materials during the soldering process. To accomplish a proper solder connection, a certain temperature has to be reached. Most solder connections in electronics are made with conventional mass soldering systems, in which many components are soldered simultaneously onto printed circuit boards. Two different mass soldering methods are used in today's electronics production. Wave or flow soldering is based on the principle of supplying solder and soldering heat simultaneously in one operation: the components on a PCB are moved through a wave of melted solder. In contrast, during the reflow soldering process, solder preforms, solid solder deposits, or solder paste applied in a first operation are melted in a second step by transferred energy.
Figure 34 SMT Placement Line (Siemens Siplace).
Figure 35 Reflow Soldering Methods (e.g., vapor phase soldering).
Used for more than 80% of the electronic components processed worldwide, reflow soldering is the predominant joining technology in electronics production. Reflow soldering works satisfactorily with surface-mount technology and can keep up with the constantly growing demands for productivity and quality. Current developments, such as new packages and the issue of lead-free soldering, however, pose great challenges for reflow soldering technology. The reflow soldering methods of electronics production can be divided, according to the type of energy transfer, into infrared (IR), forced convection (FC), and condensation soldering. In contrast to the other forms of energy transfer, reflow soldering using radiant heating is not bound to a medium (gas, vapor) but relies on electromagnetic waves. For IR reflow soldering, short- to long-wave infrared emitters are used as energy sources. Although the heat transfer established with this method is very efficient, it is strongly influenced by the basic physical conditions. Depending on the absorptivity, reflectivity, and transmissivity of the individual surfaces, large temperature differences are induced across the electronic modules. The problem of uniform heat distribution is increased by intricate geometries, low thermal conductivity, and the variable specific heat and mass properties of the individual components. Therefore, this soldering method leads to very inhomogeneous temperature distributions on complex boards.

Figure 36 Soldering Profile.

The transfer of energy by means of forced air convection induces a more uniform heat distribution than in IR soldering. The heating of the workpiece is determined not only by the gas temperature but also by the mass of gas transferred per unit time, the material properties of the gas medium, and the geometry and thermal capacity of the respective PCB and its components. Reflow soldering in a saturated vapor (vapor phase or condensation soldering) utilizes the latent heat of a condensing, saturated vapor, whose temperature corresponds to the process temperature, to heat the workpiece. Due to the phase change from vapor to liquid during condensation, the heat transfer is very rapid, resulting in very uniform heating that is relatively independent of differing specific heat and mass properties or geometric influences. Overheating of even complex surfaces is impossible due to the nature of vapor phase heating. Despite the optimization of these mass soldering methods, an increasing number of solder joints are incompatible with conventional soldering methods, so microsoldering (selective soldering) methods have to be used.
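The dependence of convective heating on gas temperature and on the component's own mass and specific heat can be sketched with a lumped-capacitance model. The heat-transfer coefficient and component data below are rough assumptions for illustration only:

```python
import math

def component_temp(t_s, t_gas, t_start, h, area_m2, mass_kg, cp):
    """Lumped-capacitance heating of one component in a convection oven:
    T(t) = T_gas + (T_start - T_gas) * exp(-h*A*t / (m*cp)).

    Shows why the gas temperature, the gas-side heat-transfer coefficient h
    (set by the transferred gas mass flow), and the component's mass and
    specific heat all enter the heating rate."""
    tau = mass_kg * cp / (h * area_m2)   # thermal time constant in seconds
    return t_gas + (t_start - t_gas) * math.exp(-t_s / tau)

# a small chip vs. a heavy connector after 60 s in a 240 degC convection
# zone with an assumed h = 50 W/(m^2 K)
chip = component_temp(60, 240, 25, 50, 5e-5, 1e-4, 800)   # approaches zone temp
conn = component_temp(60, 240, 25, 50, 3e-4, 5e-3, 1200)  # lags far behind
```

The large spread between the two results is exactly the nonuniformity the text attributes to differing thermal capacities, and it disappears in vapor phase soldering because condensation pins every surface near the vapor temperature.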
In the last few years, there has been a steady flow of requests for selective soldering, and several systems have been designed and introduced to the market. Depending on the kind of heat transfer, different systems using radiation, convection, or conduction can be distinguished. The most popular selective soldering method in modern assembly lines seems to be fountain soldering, a selective process using molten solder. The process is similar to conventional wave soldering, with the difference that only a few joints on a PCB are soldered simultaneously. In contrast to soldering, conductive adhesives are used for special electronic applications. Conductive adhesives simultaneously establish mechanical and electrical joints between PCBs and components by means of a particle-filled resin matrix. Whereas the polymer matrix is responsible for the mechanical interconnection, the filler particles (silver, palladium, or gold) provide the electrical contact between PCB and component. Therefore, in contrast to solder joints, conductive adhesive joints have a heterogeneous structure.
3.6.
Quality Assurance in Electronics Production
Quality assurance has become a major task in electronics production. Due to the complex process chains and the huge variety of materials, components, and process steps used, the quality of the final product has to be assured not only by tests of the finished subassemblies but also by integrated inspection steps during processing. It is common knowledge that the solder application process causes about two thirds of process-related failures but accounts for only about 20% of process and inspection costs (Lau 1997). This means that the first step of processing generates a large share of the potential failures yet is not, or cannot be, inspected sufficiently. Therefore, a combined strategy is useful for the assembly of complex products, such as automotive electronics or communication devices (Figure 38): both capable processes and intelligent inspection tools are needed.

Figure 37 Comparison: Conductive Adhesive vs. Solder Interconnection (silver-filled epoxy adhesive and tin-based solder joint between component termination and copper; principles of electron transfer).

Figure 38 Basic Concept for Accompanying Quality Assurance.

Several optical inspection systems are available that measure, for example, the shape and height of the applied solder paste or the placement positions of components. These systems, based on image processing with CCD cameras, capture 2D images for analysis. Advanced systems work with 3D images for volume analysis or with x-ray imaging for visualization of hidden solder joints (typical for area arrays). The main problems with complex inspection strategies are inline capability and the long duration of the inspection itself; as a result, 3D inspection is used almost entirely for spot testing or in scientific labs. Inline strategies, on the other hand, are often combined with and linked to external systems for quality assurance. These systems are suitable for collecting, handling, and analyzing all relevant data during processing. They allow both short-term control loops within a single step and long-term coordination strategies for optimizing assembly yields. The vital advantage is the direct link between a real product and its database information. Further tasks of quality assurance systems arise in diagnosis and quality cost analysis. Machine diagnosis is necessary to ensure and enhance machine and process capability; in particular, systematic errors such as constant placement offsets can only be detected and corrected with integrated diagnosis sensors. Furthermore, telediagnosis and defect databases are possible; both support the quick and direct removal of problems with the assistance of the machine supplier, who remains at a distance. Another goal is to calculate process- and defect-related costs in electronics production. With an integrated quality cost-evaluation system, it should be possible to optimize not only quality but also costs within one evaluation step. This will finally lead to a process-accompanying system that focuses on technically and economically attractive electronics production.
4.
INTEGRATION OF MECHANICAL AND ELECTRONIC FUNCTIONS
The use of thermoplastics and their selective metal plating opens a new dimension in circuit carrier design to the electronics industry: 3D molded interconnect devices (3D MIDs). MIDs are injection-molded thermoplastic parts with integrated circuit traces. They provide enormous technical and economic potential and offer remarkably improved ecological behavior in comparison to conventional printed circuit boards, which they will, however, complement, not replace.
4.1.
Structure of Molded Interconnect Devices
The advantages of MID technology are based on the broader freedom of shape design and environmental compatibility as well as on a high potential for rationalizing the manufacture of the final product. The enhanced design freedom and the integration of electrical and mechanical functions in a single injection-molded part allow substantial miniaturization of modules. The rationalization potential comes from reduced part and component counts and shortened process chains. In several applications, it was possible to replace more than 10 parts of a conventional design solution with a single MID, thus reducing assembly steps and material use. Further potential lies in increased reliability. Additional advantages are obtainable with regard to environmental compatibility and electronics waste-disposal regulations. MID technology allows the material mixture of a conventional combination of PCB and mechanical parts, which usually consists of a great number of materials, to be replaced by a single metallized plastic part (MID). MIDs are made of thermoplastics, which can be recycled and are noncritical in disposal (Franke 1995). The novel design and functional possibilities offered by MIDs and the rationalization potential of the respective production methods have inevitably led to a quantum leap in electronics production. The most important functions that can be integrated into an MID are depicted in Figure 39. So far, cost savings of up to 40% have been achieved by integration, depending, of course, on the specific product, lot sizes, and other boundary conditions.
Figure 39 Advantages of MID Technology: integration of electrical and mechanical functions such as cooling fins, a battery compartment, surface-mounted devices and THDs, shielding walls, stiffeners, sockets, screw fastening elements, casing, snap fits, and switches.
Key markets for MID technology are automotive electronics and telecommunications. MIDs are also suitable for use in computers, household appliances, and medical technology. The market currently shows an annual growth rate of about 25%. Their typically high geometrical and functional flexibility makes MIDs suitable for applications in peripheral integrated electronics, such as small control units and portable devices; past and present MID products bear this out. Standard electronics applications with high component density and multiple layers, such as computer motherboards, will continue to be produced with conventional technologies. MID product development shows a trend toward the implementation of MIDs in new fields of application. Examples include components in the telecommunications and computer industries, safety-relevant parts in the automotive industry, and customized packages in component manufacturing (Figure 40) (Pöhlau 1999).
4.2.
Materials and Structuring
The MID process chain can be divided into four main steps: circuit carrier production, metallization and structuring, component mounting, and finally joining of the electronic components. A number of alternative technologies are available for each step. Some of the production technologies are specific MID technologies, such as two-shot molding and hot embossing; others are established processes of electronics production that have been altered and adapted to the requirements of MID. During the planning phase of an MID product, it is necessary to choose the combination of single process steps most suitable for the situation, given their respective benefits and limitations. MIDs can be manufactured in a variety of ways, as shown in Figure 41. Hot embossing is characterized by low investment for the embossing die or stamp and high efficiency. The layout is applied after injection molding. Because no wet chemistry is needed, hot-embossed MIDs are well suited for decorative surfaces. The photoimaging process is perfectly suited for generating EMC screens (at the same time as the conductors) and is highly flexible as to the layer structure; the design freedom is good and few limitations exist. The use of 3D photomasks enables well-defined structures to be generated precisely where the mask contacts the MID directly (Franke 1995). Laser imaging is likewise highly flexible as to layer structure. It basically involves copper-plating the entire workpiece surface, first wet-chemically and then galvanically, to the desired final coating thickness. Flexibility with regard to layout modification is very high and cost-effective: all that is necessary is to rewrite the laser-routing program. Of all MID production processes, two-shot injection molding offers the greatest design freedom. Conductor geometries that raise major problems in the other processes, such as conductors on irregularly shaped surfaces and in recesses as well as through-platings, are easily realized by means of the two-shot process. A platable and a nonplatable constituent material are injected on top of each other in two operations. In the second shot, the workpiece is supported inside the tool by the surfaces that were contoured in the first shot.
Figure 40 Application Fields for 3D MID Technology: telecommunication (ISDN socket: Ackermann, Bolta), packaging (motion sensor: Matsushita), computer peripherals (sensor mount for bank printers: Fuba Printed Circuits), and automotive technology (ABS coil carrier: AHC, Inotech, Siemens, Wabco).
The capture decal (film overmolding) technique features a short process chain. It is suitable for decorative surfaces unless the surfaces are finished later on. The conductor pattern can be generated by means of screen printing or conventional PWB technology; in the latter case, through-platings are also possible. The structuring takes place separately, before the injection-molding step.

Figure 41 Manufacturing Methods for MIDs: substrate manufacturing (1-shot molding, 2-shot molding, insert molding), metallization (hot embossing, galvanic, SKW, PCK, structured foil), and structuring (engraved die, 3D mask, laser imaging).

MID suppliers use various materials to which they have adapted their manufacturing processes. Basically, high-temperature and engineering thermoplastics are plated with various surface materials. The key material properties to be considered are processing and usage temperatures, the required flammability rating, mechanical and electrical properties, moldability and platability, and cost. As the price of thermoplastics generally increases with their thermal resistance, low-temperature materials, such as ABS, PC, and PP, seem advantageous. These materials, however, exclude the use of conventional reflow soldering, thus necessitating an alternative joining method. These problems can be avoided by the use of high-temperature-resistant materials, such as PES, PEI, and LCP, which are also best suited for chemical plating. For the completion of MID assemblies after substrate manufacturing and plating, several further steps are normally necessary. In component placement and soldering, several restrictions must be considered, as shown in Figure 42. Conventional printing processes are of limited use for applying conductive adhesives or solder paste; dispensing of single dots onto the complex geometry is therefore necessary. Three-dimensional circuit carriers also restrict the freedom of component placement, which leads to restrictions for placement systems. Conventional soldering procedures require the use of high-temperature thermoplastics; other plastics can be processed using low-melting-point solders, conductive adhesives, or selective soldering methods. Available assembly and interconnection technologies, as well as the processes for producing the circuit carriers, must be carefully considered, selected, and possibly modified for the specific MID materials.
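The substrate/joining compatibility rule described above can be sketched as a simple screening step. The temperature limits and the reflow margin below are rough assumed values, not material datasheet figures:

```python
# Illustrative material screening for MID substrates, following the rule of
# thumb in the text: low-temperature thermoplastics exclude conventional
# reflow soldering and force an alternative joining method.
MATERIALS = {
    # name: assumed short-term temperature limit in degC (illustrative only)
    "ABS": 90, "PC": 125, "PP": 100,     # inexpensive engineering plastics
    "PES": 200, "PEI": 200, "LCP": 240,  # high-temperature thermoplastics
}

def joining_options(material, reflow_peak_c=230):
    """Pick a joining technology compatible with the substrate's heat limit."""
    limit = MATERIALS[material]
    if limit >= reflow_peak_c - 40:      # crude margin for short reflow peaks
        return "conventional reflow soldering"
    return "low-melting-point solder, conductive adhesive, or selective soldering"
```

Such a screening is only the first filter; platability, flammability rating, and cost would narrow the choice further, as the text notes.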
4.3.
Placement Systems for 3D PCBs
MIDs create new demands on assembly because of their complex geometric shape. The new requirements that different MID types place on assembly processes must be considered in order to develop qualified production systems for MID assembly (Figure 43). The new task of mounting SMDs onto 3D circuit boards calls for new capabilities in dispensing and mounting systems. Both processes work sequentially and are realized with Cartesian handling systems in PCB production. One way to assemble SMDs onto MIDs is to use six-axis robots, which are designed as geometrically flexible handling systems that can reach each point in the workspace with every possible orientation of the tool. The available placement systems for SMD assembly onto PCBs are optimized for placement accuracy and speed; in order to use these systems for MIDs, it is important to be able to move the workpiece during the process (Feldmann and Krimi 1998). The first step was the systematic development of an MID placement system starting from an available four-axis system. This aim is realized by extending the machine by two degrees of freedom. First, an additional movement in the direction of the z-axis allows high obstacles to be surmounted. Second, the MID can be inclined so that the process plane is oriented horizontally in the workspace. This concept is not suitable for MIDs with high geometrical complexity.
4.3.1.
Six-Axis Robot System for SMD Assembly onto MID
Figure 42 Definition of Design Rules for MID: restrictions in dispensing, mounting (mounting head with centering pliers), and soldering (IR emitter: distance to heat source, convection, obstacle height).

Figure 43 New Requirements on the Assembly Process Caused by 3D Circuit Boards: PCB system, optimized MID system, and industrial robot for different types of three-dimensional components.

Robot placement systems have until now been used primarily to place exotic THT components, such as coils and plugs, whose assembly is not possible with pick-and-place machines. The application of MIDs has led to a new task for robots in electronics assembly. Robots can be equipped with pipettes built small enough to assemble SMDs at a small distance from obstacles. If the length of the free assembly perpendicular of a MID is limited, bent pipettes can be used without inclined feeders or a centering station. Available tool-changing systems make it possible to integrate additional processes, such as the application of solder paste and soldering, into one assembly cell. A critical point in surface-mount technology is the accuracy required in centering the SMD on the pipette and in placing the components. This is achieved by the use of two cameras: one mounted on joint five of the manipulator, pointing down toward a reference mark, and the other mounted under the stage, looking up for component registration. The robot cell layout was designed to minimize the total distance to be covered by the robot in assembling onto the MID. In addition, the feeder system is movable, so the components can be picked up at an optimized position. With this moving-feeder system, the placement rate can reach 1800 components/hr (with a stationary feeder, the placement rate is up to 1200 components/hr). This geometric advantage and the flexibility of the robot system come at the expense of lower productivity and accuracy compared with Cartesian systems. Therefore, Cartesian placement systems should be the basis for a specialized MID assembly system.
4.3.2.
Optimized MID Placement System
The next step is to explore the possibilities for extending the detected limits of conventional SMD assembly systems. The insufficiency of Cartesian systems lies in the possible height and angle of inclination of the MID and in the fact that an unlimited length of free assembly perpendicular is required. A concept has been developed to extend these limits: a module for handling the MID in the workspace of an SMD assembly system, as well as the use of a pipette with an additional axis, make 3D assembly of components onto MIDs possible. For both of these concepts to be realized, it is necessary to extend the hardware and control software of the basic PCB assembly system. These systems were realized on a widespread standard system at the FAPS Institute. Figure 45 shows the developed MID placement system with the different modules for the handling of SMD components and MIDs. The way to enlarge the system kinematics of standard placement systems is to move the workpiece during the assembly process. Solutions to this problem are as follows. First, an additional movement in the direction of the z-axis enlarges the possible height of obstacles that can be surmounted (six-axis pipette). Second, the MID can be inclined so that the process plane is oriented horizontally in the workspace (MID handling system). This allows the high accuracy and speed of a Cartesian system to be exploited. The tolerance chain is divided into two parts: the SMD handling system and the MID moving system. Thus, it is possible to compensate for the possibly lower
Figure 44 Flexible Robot Placement System for Assembly onto MID.
Figure 45 Prototypical Realization of the MID Placement System for 3D Assembly. (From Feldmann et al. 1998)
position accuracy of the MID moving system by correction through the handling system. The use of a vision system is necessary here in order to detect the position of the inclined MID. Inclination of the MID means that the process plane, which should be oriented horizontally, may lie on a different level than usual; its height must be compensated for by an additional z-axis. This means that both possibilities for enlarging Cartesian assembly systems could be integrated into one lift-and-incline module. With this module integrated into the Cartesian SMD assembly system, it is possible to attach SMD components onto MID circuit planes with an angle of inclination of up to 70°. For this, the MID is fixed on a carrier system that can be transported on the double-belt conveyor. At the working position, the carrier is locked automatically to the module, the connection to the conveyor is opened, and the MID can be inclined and lifted into the right position. It is thus possible to enlarge the capability of the Cartesian assembly system almost without reducing productivity.
4.4.
Soldering Technology for 3D PCBs
Processing 3D molded interconnect devices in reflow soldering places stringent requirements on reflow technology due to the base materials used and the possible geometrical complexity of the circuit carriers. Most MID solutions that are assembled with electronic components consist of expensive high-temperature thermoplastics, such as PEI, PES, and LCP, which can resist the high thermal stress during reflow soldering. However, in order to use the full potential of the MID technique, inexpensive engineering thermoplastics such as polyamide have been discussed as possible base materials, although the process window would clearly be limited with regard to the tolerable maximum temperatures. As a consequence, the process parameters of the selected reflow soldering method have to be modified to reduce the thermal load to a sufficient minimum in order to avoid:

Thermal damage to the base material
Degradation of metallization adhesion
Warping of the thermoplastic device
In addition, the highest solder joint quality has to be ensured for the best quality and reliability to be attained. In extensive investigations of the 3D MID soldering topic, the influence of the reflow soldering process on the points mentioned above has been examined. It can be stated that the different methods of heat transfer clearly influence the temperature distribution on the workpiece and therefore the process temperature level needed when processing complex MIDs. The example of a circuit carrier of MID type 3 (the highest level of geometrical complexity) demonstrates these facts. While during condensation or convection soldering very uniform temperature distributions, with temperature differences on the molded device of only up to a few kelvin, can be observed, for IR soldering temperature differences of about 20–30 K on the surface of the 3D device can be shown by thermal measurement with an infrared scanner or attached thermocouples. Thus, particularly in the more strongly heated upper areas of a 3D device, the danger of local thermal damage to the base material exists due to the reduced distance to the infrared emitters. If thermoplastic base materials are heated above the usual admissible operating temperature, material damage can occur. For some hygroscopic thermoplastics (e.g., PA) the humidity stored in the material heats up, expands strongly with the transition to the vapor state, and causes blistering on the surface of the already softened material (Gerhard 1998). Besides thermal damage to the base material caused by local overheating, the influences of thermal stress during soldering and of the soldering parameters (especially reflow peak temperature) on the geometrical shape of the molded part and the peeling strength of the metallization are important to the process quality of thermoplastic interconnect devices.
Figure 46 Interconnection Technologies for 3D MIDs (conductive adhesives, reflow soldering, selective soldering).
5.
DISASSEMBLY
5.1.
Challenge for Disassembly
The development of disassembly strategies and technologies has traditionally been based on the inversion of assembly processes. The characteristics of components and connections act as a systematic interface between both processes. In addition, assembly and disassembly can have similar requirements regarding kinematics and tools (Feldmann et al. 1999). Nevertheless, the most obvious differences between assembly and disassembly are their goals and their position in the product's life cycle. The objective of assembly is the joining of all components in order to assure the functionality of a product (Tritsch 1996). The goals of disassembly may be multifaceted and may have a major influence on the choice of dismantling technology. The economically and ecologically highest level of product treatment at its end of life is the reuse of whole products or components (Figure 47), because not only the material value is conserved but also the original geometry of components and parts, including their functionality. Thus, disassembly generally has to be done nondestructively in order to allow maintenance and repair of the components. A further material loop can be closed by the disassembly and reuse of components in the production phase of a new product. In order to decrease disassembly time and costs, semidestructive dismantling technologies are applicable, such as those used to open a housing.
Figure 47 Role of Disassembly in the Product's Life Cycle (life-cycle phases: raw material, product design, production, use, disposal; recovery loops via disassembly: material recycling, treatment, reuse of components, repair/reuse).
In most cases, disassembly is done to remove hazardous or precious materials in order to allow ecologically sound disposal or high-quality material recycling. All types of disassembly technology are used in order to decrease costs and increase efficiency (Feldmann et al. 1994). In the determination of disassembly technologies, several frame conditions and motivations have to be taken into account (Figure 48). Because of rapidly increasing waste streams, the ecological problems of disposal processes, and the shortage of landfill capacities, the product's end of life has become a special focus of legislative regulation (Gungor and Gupta 1999; Moyer and Gupta 1997) that requires disassembly and recycling efforts. Furthermore, the ecological consciousness of society (represented by the market and its respective requirements) and the shortage of resources, leading to increasing material prices and disposal costs, put pressure on the efficient dismantling of discarded products. The increasing costs and prices also provide economic reasons for dismantling because the benefits of material recycling increase as well. Finally, on the one hand, the technological boundary conditions indicate the necessity of disassembly because of increasing production and shorter life cycles. On the other hand, the availability of innovative recycling technologies is leading to new opportunities and challenges for disassembly technology. Due to influences in the use phase, unpredictable effects such as corrosion often hamper dismantling. Also, the incalculable variety of products arriving at the dismantling facility has a negative influence on the efficiency of disassembly. Together with the lack of information, these aspects lead to high dismantling costs because of the manpower needed (Meedt 1998). On the other hand, some opportunities in comparison to assembly can be exploited. Thus, in disassembly only specific fractions have to be generated. This means that not all connections have to be released, and (semi)destructive technology is applicable for increasing efficiency.
5.2.
Disassembly Processes and Tools
The advantages and disadvantages of manual and automated disassembly are compared in Figure 49. Because manual disassembly allows greater flexibility in the variety of products and types as well as the use of adequate disassembly tools, it is generally used in practice. Due to the risk of injury, dirt, the knowledge needed for identification of materials, and the wages to be paid, manual disassembly is not considered an optimal solution. On the other hand, problems of identification, the influence of the use phase (as mentioned above), damage to connections, and particularly the broad range of products prevent the effective use of automated disassembly processes (Feldmann et al. 1994). Thus, for most disassembly problems partially automated systems are the best solution.
Figure 48 Frame Conditions and Specific Aspects of Disassembly (frame conditions: legislation and take-back regulations; social aspects: ecological consciousness of society, sustainability and shortage of resources; economic aspects: gain for recycled material; technological aspects: increasing production, shorter life cycles; problems: influence of the use phase (e.g., corrosion), disassembly costs, lack of information; opportunities: not all connections have to be released, (semi)destructive disassembly technology applicable, generation of specific fractions).
Figure 49 Features of Manual and Automated Disassembly Systems (manual disassembly, automated disassembly with its problems of identification, and partially automated systems).
Along with the combination of manual and automated disassembly, other aspects, such as logistics, flow of information, and techniques for fractioning, storing, and processing residuals, have to be optimized for efficient dismantling. In addition to the planning of the optimal disassembly strategy, the major area for improvement in the disassembly of existing products is in increasing the efficiency of disassembly processes. Highly flexible and efficient tools especially designed for disassembly are needed. The function of such tools is based on different types of disassembly: (1) destructive processes intended to destroy components or joints; (2) destructive processes that generate working points for further destructive or nondestructive processes; (3) nondestructive processes such as the inversion of assembly processes (Feldmann et al. 1999). Figure 50 shows an example of a flexible unscrewing tool developed at the Technical University of Berlin that generates working points in a destructive way. This tool consists of an impact mass that is first accelerated by tangential pneumatic nozzles until a certain angular velocity is reached (Seliger and Wagner 1996). In a second step, the rotating impact mass is accelerated towards a conical clutch and transmits the linear and rotational impulse to the end effector. Using the W-shaped wedges of the end effector, a new acting surface is generated on the head of the screw, avoiding any reaction force for the worker. Furthermore, the rotational impulse overcomes the breakaway torque, and the linear impulse reduces pre-tension in order to ease loosening. The loosened screw is then unscrewed by a pneumatic drive and the impact mass is pushed back to the starting position (Seliger and Wagner 1996). In case a screw cannot be released due to influences in the use phase (e.g., corrosion or dirt), the end effector can be used as a hollow drill with wide edges in order to remove the screw head. A major problem in dismantling is opening housings quickly and efficiently in order to gain optimal access to hazardous or valuable materials. Especially with regard to small products, the effort of dismantling with conventional tools is very often too high compared to the achieved benefit. To solve this problem, a flexible tool, the splitting tool (Figure 51), has been developed (Feldmann et al. 1999). The splitting tool has specific levers and joints that divide the entry force or impulse (e.g., from a hammer or pneumatic hammer) into two orthogonal forces or impulses. Through the first force component, which has the same direction as the original impact, the splitting elements are brought into engagement with the separating line of the housing in order to generate acting surfaces. The force component normal to the entry impact is used simultaneously as the separation force. This way, screws are pulled out and snap fits are broken. Thus, in general the housing parts are partially destroyed and torn apart without damaging the components inside. Using specially designed tools for disassembly avoids the unintentional destruction of other components, which is often an effect of dismantling with conventional tools such as hammers and crowbars.
Figure 50 Example of a Flexible Unscrewing Tool (pneumatic drive, linear cylinder, rotation nozzle, impact mass, driving shaft, impact clutch, acting-surface generator, screw part; phases: rotational acceleration, linear acceleration, impact with acting-surface generation and loosening of the screw, unscrewing). (From Seliger and Wagner 1996)
Figure 51 Example of a Flexible Splitting Tool (vertical impulse, hinges, guide, lever system, effectors, separation motion, components to be separated; Fh: separation force, Fv: intrusion force to use/generate the acting surface, Fe: effector force).
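The force decomposition performed by the lever system of the splitting tool can be sketched with a simple rigid-lever model; the 30° lever angle and the input force below are hypothetical assumptions, not values from the text:

```python
import math

def split_impulse(entry_force: float, lever_angle_deg: float) -> tuple[float, float]:
    """Resolve the vertical entry force of a splitting tool into an
    intrusion component Fv (along the impact, driving the splitting
    elements into the housing seam) and a separation component Fh
    (normal to the impact, tearing the housing parts apart)."""
    theta = math.radians(lever_angle_deg)
    fv = entry_force * math.cos(theta)  # intrusion force
    fh = entry_force * math.sin(theta)  # separation force
    return fv, fh

# Example: a 1000 N hammer blow through a hypothetical 30-degree lever
fv, fh = split_impulse(1000.0, 30.0)
```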
The examples given show the basic requirements for disassembly tools: flexibility concerning products and variants, flexibility concerning different processes, and failure-tolerant systems.
5.3.
Applications
Conventional tools such as mechanized screwdrivers, forceps, and hammers are still the dismantling tools generally used. The use of tools, the choice of disassembly technology, and the determination of the disassembly strategy depend to a large degree on the knowledge and experience of the dismantler because detailed disassembly plans or instructions are usually not available for a product. This often leads to a more or less accidental disassembly result, in which the optimum between disassembly effort and disposal costs/recycling gains is not met. Many approaches have been proposed for increasing the efficiency of disassembly by determining an optimal dismantling strategy (Feldmann et al. 1999; Hesselbach and Herrmann 1999; Gungor and Gupta 1999). Depending on the actual gains for material recycling and reusable components, the costs of disposal, and legislative regulations, products are divided into several fractions (Figure 52). First, reusable components (e.g., motors) and hazardous materials (e.g., batteries, capacitors) are removed. Out of the remaining product, in general the fractions of ferrous and nonferrous metals and large plastic parts are dismantled so that they can be recycled directly after eventual cleaning and milling processes. Removed metal-plastic mixes can be recycled after separation in mass-flow procedures (e.g., shredding). Special components such as ray tubes are also removed in order to send them to further treatment. The remainder must be treated and/or disposed of. As mentioned in Section 5.2, the economic margin of dismantling is very tight, and the logistic boundary conditions (for example, regarding the sporadic number of similar units and the broad variety of product types) are rather unfavorable for automation. Depending on the flexibility of automated disassembly installations, three strategies can be used. Some standardized products (e.g., videotapes, ray tubes of computer monitors) can be collected in large numbers so that automated disassembly cells especially designed for these products can work economically. Greater flexibility from using PC-based control devices and specially designed disassembly tools, as well as the integration of the flexibility of human beings, allows the automated dismantling of a certain range of discarded products in any sequence. Figure 53 shows an example of a hybrid disassembly cell that was realized on a laboratory scale. One of the integrated robots, used for unscrewing actions, is equipped with a flexible unscrewing tool that can release various types of screws. The second robot, using a special multifunctional disassembly and gripping device, is able to split components, cut cables, and remove parts without changing the device. Another strategy for automated disassembly is the enhancement of flexibility using sensors that automatically detect connection locations and types. By an evaluation of the sensor data, the respective tools are determined and controlled (Weigl 1997). This strategy in general leads to high investments that have to be compensated for by disassembly efficiency.

Figure 52 Manual Disassembly of Monitors (tools commonly used for manual disassembly: forceps, knife, hammer, electrical/pneumatic screwdriver; commonly generated fractions: plastics, ferrous and nonferrous metals, metal-plastic mix, printed circuit boards, special components such as ray tubes).

Figure 53 Layout of a Hybrid Disassembly Cell (flexible disassembly device, unscrewing device, manual workstation, transport system, product entrance).
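The fraction-oriented strategy described above amounts to a net-value decision per fraction: dismantle a fraction only while the recycling gain plus the avoided disposal cost outweighs the disassembly effort. A minimal sketch with hypothetical per-unit figures (not data from the text):

```python
def fraction_net_value(recycling_gain: float,
                       avoided_disposal_cost: float,
                       disassembly_cost: float) -> float:
    """Net value of removing one fraction from a discarded product."""
    return recycling_gain + avoided_disposal_cost - disassembly_cost

# Hypothetical fractions of a discarded monitor (EUR per unit)
fractions = {
    "ferrous metals":         fraction_net_value(1.20, 0.30, 0.80),
    "printed circuit boards": fraction_net_value(2.50, 0.40, 1.60),
    "metal-plastic mix":      fraction_net_value(0.10, 0.20, 0.90),
}

# Only fractions with a positive net value justify further dismantling;
# the rest go to mass-flow treatment or disposal
worthwhile = [name for name, value in fractions.items() if value > 0]
```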
5.4.
Entire Framework for Assembly and Disassembly
Mostly with regard to maintenance and reuse of components, but also with regard to optimal disassembly planning, an integrated assembly and disassembly approach should be considered right from the design of a product (DFG 1999). Five stages of integration can be distinguished (Figure 54).
Figure 54 Stages of the Integration of Assembly and Disassembly (design, assembly, and disassembly linked by the exchange of information, material, components, staff, and equipment).
1. Information is exchanged between assembly and disassembly. Data on, for example, connection locations and types substantially ease the disassembly of discarded products.
2. Recycled material, such as auxiliary materials, can be delivered from disassembly plants to the production plants.
3. The reuse and reassembly of components, which first requires disassembly, is already practiced with many products. Nevertheless, the integrated consideration of both assembly and disassembly is important for efficiency.
4. The exchange of staff requires the proximity of assembly and disassembly facilities. At this stage, the knowledge and experience of the workers can be used for both processes and thus provide major benefits in efficiency.
5. The highest stage of integration is the use of a device for both assembly and disassembly. Examples are repairing devices for printed circuit boards.
The possible level of integration is determined mainly by the design of a product. Thus, an overall approach that looks at all phases of the product's life cycle is required.
REFERENCES
Boothroyd, G. (1994), "Product Design for Manufacture and Assembly," Computer-Aided Design, Vol. 26, pp. 505–520.
Brand, A. (1997), Prozesse und Systeme zur Bestückung räumlicher elektronischer Baugruppen, Dissertation, Meisenbach, Bamberg.
DFG (1999), Integration der Montage- und Demontageprozessgestaltung in einem produktneutralen Ansatz, Project Report, DFG, Erlangen.
Feldmann, K., and Krimi, S. (1998), "Alternative Placement Systems for Three-Dimensional Circuit Boards," Annals of the CIRP, Vol. 47, No. 1, pp. 23–26.
Feldmann, K., and Rottbauer, H. (1999), "Electronically Networked Assembly Systems for Global Manufacturing," in Proceedings of the 15th International Conference on Computer-Aided Production Engineering (Durham), pp. 551–556.
Feldmann, K., and Wolf, K. U. (1996), "Computer Based Planning of Coil Winding Processes for Improvements in Efficiency and Quality," in Proceedings of the Electrical Manufacturing and Coil Winding Conference (Chicago), pp. 299–305, EMCWA.
Feldmann, K., and Wolf, K. U. (1997), "Improved Winding Quality and Production Efficiency with the Help of Computer Based Planning and Programming Systems," in Proceedings of the Coil Winding, Insulation and Electrical Manufacturing Conference (Berlin).
Feldmann, K., Meedt, O., and Scheller, H. (1994), "Life Cycle Engineering: Challenge in the Scope of Technology, Economy and General Regulations," in Proceedings of the 2nd International Seminar on Life Cycle Engineering (Erlangen, October 10–11), pp. 1–17.
Feldmann, K., Rottbauer, H., and Roth, N. (1996), "Relevance of Assembly in Global Manufacturing," Annals of the CIRP, Vol. 45, No. 2, pp. 545–552.
Feldmann, K., Luchs, R., and Pöhlau, F. (1998), "Computer Aided Planning and Process Control with Regard to New Challenges Arising through Three-Dimensional Electronics," in Proceedings of the 31st CIRP International Seminar on Manufacturing Systems (Berkeley, CA, May 26–28), pp. 246–251.
Feldmann, K., Trautner, S., and Meedt, O. (1999), "Innovative Disassembly Strategies Based on Flexible Partial Destructive Tools," Annual Reviews in Control, Vol. 23, pp. 159–164.
Franke, J. (1995), Integrierte Entwicklung neuer Produkt- und Produktionstechnologien für räumliche spritzgegossene Schaltungsträger (3-D MID), Dissertation, Carl Hanser, Munich.
Gerhard, M. (1998), Qualitätssteigerung in der Elektronikproduktion durch Optimierung der Prozessführung beim Löten komplexer Baugruppen, Dissertation, Meisenbach, Bamberg.
Gungor, A., and Gupta, S. M. (1999), "Issues in Environmentally Conscious Manufacturing and Product Recovery: A Survey," Computers and Industrial Engineering, Vol. 36, No. 4, pp. 811–853.
Hesselbach, J., and Herrmann, C. (1999), "Recycling Oriented Design: Weak-Point Identification and Product Improvement," in Proceedings of the International Seminar on Sustainable Development (Shanghai, November 16–17).
Informationscentrum des Deutschen Schraubenverbandes (ICS) (1993), ICS-Handbuch: Automatische Schraubmontage, Mönnig, Iserlohn.
Institute for Interconnecting and Packaging Electronic Circuits (IPC) (1996), Acceptability of Electronic Assemblies, ANSI/IPC-A-610, IPC, Chicago.
Klein Wassink, R. J., and Verguld, M. M. F. (1995), Manufacturing Techniques for Surface Mounted Assemblies, Electrochemical Publications, Port Erin, Isle of Man.
Konold, P., and Reger, H. (1997), Angewandte Montagetechnik, F. Vieweg & Sohn, Brunswick.
Lappe, W. (1997), Aufbau eines Systems zur Prozessüberwachung beim Stanznieten mit Halbhohlniet, Shaker, Aachen.
Lau, J. H. (1993), Ball Grid Array Technology, McGraw-Hill, New York.
Lau, J. H. (1997), Solder Joint Reliability of BGA, CSP, Flip Chip, and Fine Pitch SMT Assemblies, McGraw-Hill, New York.
Lotter, B. (1992), Wirtschaftliche Montage, VDI, Düsseldorf.
Lotter, B., and Schilling, W. (1994), Manuelle Montage, VDI, Düsseldorf.
Meedt, O. (1998), Effizienzsteigerung in Demontage und Recycling durch optimierte Produktgestaltung und flexible Demontagewerkzeuge, Dissertation, Meisenbach, Bamberg.
Moyer, L. K., and Gupta, S. M. (1997), "Environmental Concerns and Recycling/Disassembly Efforts in the Electronics Industry," Journal of Electronics Manufacturing, Vol. 7, No. 1, pp. 1–22.
Pöhlau, F. (1999), Entscheidungsgrundlagen zur Einführung räumlicher spritzgegossener Schaltungsträger (3-D MID), Meisenbach, Bamberg.
Rahn, A. (1993), The Basics of Soldering, John Wiley & Sons, New York.
Seliger, G., and Wagner, M. (1996), "Modeling of Geometry-Independent End-Effectors for Flexible Disassembly Tools," in Proceedings of the CIRP 3rd International Seminar on Life Cycle Engineering: ECO-Performance '96 (March 18–29), ETH Zürich.
Siemens AG (1999), The World of Surface Mount Technology, Automation Technology Division, Siemens AG, Munich.
Spur, G., and Stöferle, T. (1986), Fügen, Handhaben, Montieren, Vol. 5 of Handbuch der Fertigungstechnik, Carl Hanser, Munich.
Steber, M. (1997), Prozeßoptimierter Betrieb flexibler Schraubstationen in der automatisierten Montage, Meisenbach, Bamberg.
Tönshoff, H. K., Metzel, E., and Park, H. S. (1992), "A Knowledge-Based System for Automated Assembly Planning," Annals of the CIRP, Vol. 41, No. 1, pp. 19–24.
Tritsch, C. (1996), Flexible Demontage technischer Gebrauchsgüter, Forschungsberichte wbk, University of Karlsruhe.
Verter, V., and Dincer, M. (1992), "An Integrated Evaluation of Facility Location, Capacity Acquisition and Technology Selection for Designing Global Manufacturing Strategies," European Journal of Operational Research, Vol. 60, pp. 1–18.
Verein Deutscher Ingenieure (VDI) (1982), Richtlinie 2860, Bl. 1, Entwurf, Montage- und Handhabungstechnik: Handhabungsfunktionen, Handhabungseinrichtungen, Begriffe, Definitionen, Symbole, VDI, Düsseldorf.
Warnecke, H. J. (1993), Revolution der Unternehmenskultur, Springer, Berlin.
Weigl, A. (1997), Exemplarische Untersuchungen zur flexiblen automatisierten Demontage elektronischer Geräte mit Industrierobotern, Berichte aus der Automatisierungstechnik, Technical University of Darmstadt, Shaker, Aachen.
Wolf, K. U. (1997), Verbesserte Prozessführung und Prozessplanung zur Leistungs- und Qualitätssteigerung beim Spulenwickeln, Dissertation, Meisenbach, Bamberg.
1.
INTRODUCTION
Manufacturing process planning is an important step in the product-realization process. It can be defined as "the function within a manufacturing facility that establishes which processes and parameters are to be used (as well as those machines capable of performing these processes) to convert a part from its initial form to a final form predetermined (usually by a design engineer) in an engineering drawing" (Chang et al. 1998, p. 515). Alternatively, it can be defined as the act of preparing detailed work instructions to produce a part. The result of process planning is a process plan. A process plan is a document used by schedulers to schedule the production and by machinists/NC part programmers to control/program the machine tools. Figure 1 shows a process plan for a part. The process plan is sometimes called an operation sheet or a route sheet. Depending on where they are used, some process plans are more detailed than others. As a rule, the more automated a manufacturing shop is, the more detailed the process plan has to be. In contrast to assembly planning for an assembled product, process planning focuses on the production of a single part. In this chapter, when a product is mentioned, it refers to a discrete part as the final product. One important step in process planning is process selection, which is the selection of appropriate manufacturing processes for producing a part. When none of the existing processes can produce the part, a process may have to be designed for this purpose. Process design can also be interpreted as determining the parameters of a process for the manufacture of a part. In this case, process design is the detailing of the selected processes. Thus, process planning and process design are used for the same purpose: determining how to produce a part. In this chapter, process planning and design are discussed. Techniques employed for process planning and process design are also introduced. Due to the vast number of manufacturing processes, it would be impossible to cover them all in this chapter. Only machining processes are focused upon here. However, early in the chapter, casting, forming, and welding examples are used to illustrate alternative production methods.
1.1.
The Product-Realization Process
Manufacturing is an activity for producing a part from raw material. In discrete product manufacturing, the objective is to change the material's geometry and properties. A sequence of manufacturing processes is used to create the desired shape. The product-realization process begins with product design. From the requirements, an engineering design specification is prepared. Through the design process (the details of which are omitted here), a design model is prepared. Traditionally, the design model is an engineering drawing (drafting), prepared either manually or on a CAD system. Since the
PROCESS PLAN                                                      ACE Inc.
Part No.: S0125-F        Part Name: Housing        Material: steel 4340Si
Original: S.D. Smart   Date: 1/1/89     Checked: C.S. Good   Date: 2/1/89
Approved: T.C. Chang   Date: 2/14/89    Changes:             Date:

No.  Operation Description  Workstation  Setup                          Time (Min)            Tool
10   Mill bottom surface    MILL01       see attach#1 for illustration  3 setup, 5 machining  face mill, 6 teeth, 4" dia
20   Mill top surface       MILL01       see attach#1                   2 setup, 6 machining  face mill, 6 teeth, 4" dia
30   Drill 4 holes          DRL02        set on surface1                2 setup, 3 machining  twist drill, 1/2" dia, 2" long

Figure 1 Process Plan.
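A route sheet such as the one in Figure 1 maps naturally onto a small data structure that schedulers can query for operation times. The sketch below uses the operations from the figure; the class and field names are our own, not from any standard:

```python
from dataclasses import dataclass

@dataclass
class Operation:
    number: int
    description: str
    workstation: str
    setup_min: float
    machining_min: float

# Operations taken from the sample process plan in Figure 1
plan = [
    Operation(10, "Mill bottom surface", "MILL01", 3, 5),
    Operation(20, "Mill top surface",    "MILL01", 2, 6),
    Operation(30, "Drill 4 holes",       "DRL02",  2, 3),
]

# Total time a scheduler must reserve for this part: 21 minutes
total_min = sum(op.setup_min + op.machining_min for op in plan)
```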
Figure 2 Product-Realization Process (including verification, scheduling with scheduling knowledge, and execution).
Table 1 Standard Material Stock Sizes (carbon-steel plate and sheet: hot rolled, cold rolled; carbon-steel bars: hot rolled round, cold finished round, cold finished square; stainless steel sheet: 304, 316, 410; stainless steel bars: 304 round, 303 square).
t an axle, and a T-slot is used to hold a bolt. The designer must create the appr
opriate geometry for the part in order to satisfy the functional requirements. T
he ultimate objective of the designer is to create a functionally sound geometry
. However, if this becomes a single-minded mission, the designed part may not be
economically competitive in the marketplace. Cost must also be considered. Manu
facturing processes are employed to shape the material into the designed geometr
y. A large number of unrelated geometries will require many different processes
and / or tools to create. The use of standard geometry can save money by limitin
g the number of machines and tools needed. For example, a standard-size hole mea
ns fewer drill bits are needed. Design for manufacturing also means imposing man
ufacturing constraints in designing the part geometry and dimension. The designe
d geometry is modeled on a CAD system, either a drawing or a solid model (see Fi
gure 3). More and more designs are modeled using 3D solid modelers, which not on
ly provide excellent visualization of the part and assembly but also support the
downstream applications, such as functional analysis, manufacturing planning, a
nd part programming. The key is to capture the entire design geometry and design
intents in the same model.
1.2.3.
Function Analyses
Because the designed part must satisfy certain functional requirements, it is necessary to verify the suitability of the design before it is finalized. Engineering analyses such as kinematic analysis and heat transfer analysis are carried out on the design. Finite element methods can be used, often directly from a design model. The more critical a product or part is, the more detailed the analysis that needs to be conducted.
1.2.4.
Design Evaluation
The task of design evaluation is to evaluate several design alternatives for the final selection of the design. Cost analysis, functionality comparison, and reliability analysis are all considerations. Based on the predefined criteria, an alternative is selected. At this point the design is ready for production.
1.2.5.
Process Planning
Production begins with an assembly/part design, production quantity, and due date. However, before a production order can be executed, one must decide which machines, tools, and fixtures to use as well as how much time each production step will take. Production planning and scheduling are based on this information. As noted earlier, process planning is used to come up with this information. How to produce a part depends on many factors. Which process to use depends on the geometry and the material of the part. Production quantity and urgency (due date) also play important roles. A very different production method will be appropriate for producing a handful of parts than for producing a million of the same part. In the first case, machining may be used as much as possible. However, in the second case, some kind of casting or forming process will be preferable. When the due date is very close, existing processes and machines must be used. The processes may not be optimal for the part, but the part can be produced in time to meet the due date. The cost
and designed. After a line is installed, the operation of the line is simple. Once the workpiece is launched at one end of the line, the product is built sequentially along the line. However, in a shop environment, production scheduling is much more complex. For each planning horizon (day or week), what each machine should process and in which sequence must be decided. Predefined objectives such as short processing time, maximum throughput, and so on are achieved through proper scheduling. Scheduling uses information provided by the process plan. A poorly prepared process plan guarantees poor results in production.
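As a minimal illustration of how scheduling consumes process-plan data, the classic shortest-processing-time (SPT) rule sequences the jobs waiting at one machine so as to minimize mean flow time. The job names and times below are hypothetical:

```python
# Processing times (minutes) read from each job's process plan
jobs = {"J1": 12, "J2": 4, "J3": 9, "J4": 6}

# SPT rule: run the quickest jobs first
sequence = sorted(jobs, key=jobs.get)

# Mean flow time (average completion time) under this sequence
completion, elapsed = [], 0
for job in sequence:
    elapsed += jobs[job]
    completion.append(elapsed)
mean_flow_time = sum(completion) / len(completion)  # 16.0 minutes
```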
1.2.7.
Consideration of Production Quantity in Process Planning
As noted above, production quantity affects the manufacturing processes selected for a part. If the production quantity is not considered in process planning, the result may be an expensive part or a prolonged production delay. Large quantities make more specialized tools and machines feasible. Small-quantity production must use general-purpose machines. Although the total quantity may be high, in order not to build up inventory and incur high inventory costs, parts are usually manufactured in small batches of 50 parts or less. Therefore, production batch size is also a factor to be considered. There is a decision-making loop: the process plan is the input to production planning; production planning determines the most economical batch size; batch size, in turn, changes the process plan. A decision can be made iteratively.
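The interaction between quantity and process choice can be made concrete with a break-even calculation between a general-purpose process (low tooling cost, high unit cost) and a specialized one (high tooling cost, low unit cost). All cost figures here are hypothetical:

```python
def total_cost(fixed_tooling: float, unit_cost: float, quantity: float) -> float:
    """Total production cost: one-time tooling/setup plus per-part cost."""
    return fixed_tooling + unit_cost * quantity

def break_even_quantity(fixed_a: float, unit_a: float,
                        fixed_b: float, unit_b: float) -> float:
    """Quantity at which process B's higher tooling investment pays off
    through its lower per-part cost (assumes unit_a > unit_b)."""
    return (fixed_b - fixed_a) / (unit_a - unit_b)

# Hypothetical: machining (500 tooling, 8.00/part) vs.
# die casting (50,000 tooling, 0.50/part)
q_star = break_even_quantity(500.0, 8.0, 50000.0, 0.5)  # 6600.0 parts
# Below about 6,600 parts machining is cheaper; above it, casting wins
```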
2.
PROCESS PLANNING
As noted above, process planning is a function that prepares the detailed work instructions for a part. The input to process planning is the design model, and the outputs include processes, machines, tools, fixtures, and process sequence. In this section, the process-planning steps are discussed. Please note that these steps are not strictly sequential. In general, they follow the order in which they are introduced. However, planning is an iterative process. For example, geometry analysis is the first step. Without knowing the geometry of the part, one cannot begin the planning process. However, when one selects processes or tools, geometric reasoning is needed to refine the understanding of the geometry. Another example is the iterative nature of setup planning and process selection. The result of one affects the other, and vice versa.
2.1.
Geometry Analysis
The first step in process planning is geometry analysis. Because the selection of manufacturing processes is geometry related, the machining geometries on the part, called manufacturing features, need to be extracted. An experienced process planner can quickly and correctly identify all the pertinent manufacturing features on the part and relate them to manufacturing processes. For example, the manufacturing features of holes, slots, steps, grooves, chamfers, and pockets are related to drilling, boring, reaming, and milling processes. The process planner also needs to note the access directions for each feature. Also called approach directions, these are the unobscured directions in which the feature can be approached by a tool. When features are related to other features, such as by containment, intersection, or membership in a pattern, these relationships must be captured. Feature relations are critical in deciding operation (process) sequence. Figure 4 shows a few features and their approach directions. The pocket at the center and the steps around the center protrusion are approachable from the top. The hole may be approached from the top or from the bottom. In computer-aided process planning, geometry analysis (or geometric reasoning) is done by computer algorithms. The design model, in the form of a solid model, is analyzed. Based on the local geometry and topology, regions are extracted as features. To be significant to manufacturing, these features must be manufacturing features (Figure 5). Again, manufacturing features are geometric entities that can be created by a single manufacturing process or tool. Like manual geometry analysis, geometric reasoning must find feature access directions (approach directions) and feature relations. Due to the vague definitions of features, the large number of features, and the complexity of feature matching (matching a feature template with the geometric entities on the solid model), geometric reasoning is a very difficult problem to solve. This is one of the reasons why a practical, fully automatic process planner is still not available. However, a trained human planner can do geometry analysis relatively easily. More details on geometric reasoning can be found in Chang (1990).
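The outcome of geometry analysis can be recorded in a small data structure. A sketch, with field names and direction labels invented for illustration (the pocket, steps, and hole follow the Figure 4 description):

```python
# A minimal record of geometry-analysis results: each manufacturing
# feature with its approach directions and relations to other features.
# Field names and direction labels are illustrative, not from any CAPP system.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str            # e.g. "hole", "pocket", "step"
    approach_dirs: list  # unobscured tool approach directions
    related_to: list = field(default_factory=list)  # containment, intersection, pattern

pocket = Feature("center pocket", ["+z"])
steps  = Feature("steps around protrusion", ["+z"])
hole   = Feature("through hole", ["+z", "-z"],
                 related_to=[("intersects", "center pocket")])

# Features reachable only from the top must share a setup:
top_only = [f.name for f in (pocket, steps, hole) if f.approach_dirs == ["+z"]]
print(top_only)
```

Grouping features by shared approach direction in this way is one input to the setup-planning step discussed later.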
2.2.
Stock Selection
Manufacturing is a shape-transformation process. Beginning with a stock material, a sequence of manufacturing processes is applied to transform the stock into the final shape. When the transformation is done by machining, minimizing the volume of material removed is desirable: less material removal means less time spent in machining and less tool wear. The stock shape should therefore be as close to the finished part geometry as possible. However, in addition to the minimum-material-removal rule, one also has to consider the difficulty of work holding. The stock material must be
Figure 5 Some Manufacturing Features (blind hole, through hole, pocket, step, blind slot, through slot, chamfer, counterbore, countersink).
In the initial evaluation, technical feasibility and relative costs are taken into consideration. Process capabilities and the relative cost of each casting and machining process are used in the evaluation. Based on the geometry to be created, the material used, and the production volume, one can propose several alternative processes. Process capability includes shape capabilities and accuracy. Casting processes such as die casting and investment casting can create intricate geometry with relatively high precision (0.01 in.); sand casting is much worse. For tighter dimensional control, secondary machining is a must. Certain parts can be built using welding processes as well. This is especially true for structural parts. Large structures such as ship hulls are always built by welding. Medium-sized structural parts, such as airplane structural parts, may be joined together from machined parts. Sectional structures for jet fighters are always machined from a solid piece of metal in order to obtain the maximum strength. For small structures, all manufacturing methods are possible. When there are several alternatives, an initial selection needs to be made.
2.3.2.
Product Strength, Cost, etc.
Parts made using different stocks and processes exhibit different strengths. As mentioned above, sectional structures for jet fighters are always machined from a solid piece of alloy steel. The raw material is homogeneous and rolled into shape in the steel mill; it has high strength and consistent properties. When casting is used to form a part, a certain number of defects can be expected. The cooling, and thus the solidification, of the material is not uniform. The surface always solidifies first and at a much higher rate than the interior, so a hard shell forms around the part. Depending on the
Figure 7 Workpiece and Features on a Part (approach direction and alternative approach direction shown on the workpiece and part design).
2.5.
Process Selection
A process is defined as a specific type of manufacturing operation. Manufacturing processes can be classified as casting, forming, material-removal, and joining processes. Under casting are sand casting, investment casting, die casting, vacuum casting, centrifugal casting, injection molding, and so on. Forming includes rolling, forging, extrusion, drawing, powder metallurgy, thermoforming, spinning, and so on. Material removal includes milling, turning, drilling, broaching, sawing, filing, grinding, electrochemical machining (ECM), electrical-discharge machining (EDM), laser-beam machining, waterjet machining, ultrasonic machining, and so on. Joining processes include arc welding, electron-beam welding, ultrasonic welding, soldering, brazing, and so on. In this chapter, only material-removal process examples are used.
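The four-way classification above can be written as a lookup table. The grouping follows the text; the structure itself is just a convenience, not a standard representation:

```python
# Process taxonomy from the text, as a simple lookup table.
PROCESS_TAXONOMY = {
    "casting": ["sand casting", "investment casting", "die casting",
                "vacuum casting", "centrifugal casting", "injection molding"],
    "forming": ["rolling", "forging", "extrusion", "drawing",
                "powder metallurgy", "thermoforming", "spinning"],
    "material removal": ["milling", "turning", "drilling", "broaching",
                         "sawing", "filing", "grinding", "ECM", "EDM",
                         "laser-beam machining", "waterjet machining",
                         "ultrasonic machining"],
    "joining": ["arc welding", "electron-beam welding",
                "ultrasonic welding", "soldering", "brazing"],
}

def classify(process):
    """Return the top-level class of a manufacturing process."""
    for category, members in PROCESS_TAXONOMY.items():
        if process in members:
            return category
    return None

print(classify("drilling"))
```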
Figure 9 Sample Part with Holes and Step.
2.6.2.
Process Parameters Determination
Process parameters include speed, feed, and depth of cut. They are critical to the cutting process. Higher parameter values generate a higher material-removal rate, but they also reduce tool life. High feed also means a rougher surface finish. In drilling and turning, feed is measured as how much the tool advances for each rotation of the tool; in milling, it is the advancement of an individual tooth for each tool rotation. In turning, for example, a smaller feed means closely spaced tool paths on the part surface, so the finish will be better. A higher feed separates the tool paths and in the worst case creates uncut spacing between two passes. The type of process, the tool and workpiece materials, and the hardness of the workpiece material all affect process parameters. Parameter values are determined through cutting experiments. They can be found in tool vendors' data sheets and in the Machining Data Handbook (Metcut 1980). These data are usually based on a constant tool-life value, often 60 minutes of cutting time. When the required surface finish is high, one must modify the recommended parameters from the handbook. For new materials not included in any of the cutting-parameter handbooks, one must conduct one's own cutting experiments.
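Handbook recommendations are usually stated as a surface speed (fpm) and a feed (ipr) and must be converted into machine settings. A sketch using the standard rpm = 12V/(pi D) conversion; the drill diameter, speed, and feed below are illustrative values, not handbook data:

```python
# Converting handbook recommendations (surface speed in fpm, feed in
# ipr) into machine settings (rpm and feed rate in ipm). The factor
# 12/(pi*D) converts surface feet per minute to revolutions per minute
# for a tool of diameter D inches. Numbers below are illustrative.
import math

def spindle_rpm(surface_speed_fpm, diameter_in):
    return 12.0 * surface_speed_fpm / (math.pi * diameter_in)

def feed_rate_ipm(feed_ipr, rpm):
    return feed_ipr * rpm

rpm = spindle_rpm(80.0, 0.5)      # assumed: 80 fpm with a 0.5-in. drill
ipm = feed_rate_ipm(0.008, rpm)   # assumed: 0.008 ipr feed
print(round(rpm), round(ipm, 2))
```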
2.6.3.
Process Optimization
It is always desirable to optimize production. While global optimization is almost impossible to achieve, optimization at the process level is worth trying. The total production time is the sum of the cutting time, the material-handling time, and the tool-change time. Shorter cutting time means faster speed and feed and thus shorter tool life. With a shorter tool life, the tool needs to be changed more frequently, and the total time is thus increased. Because there is a trade-off between cutting time and tool life, one may find the optimal cutting parameters for minimum production time or cost. Techniques of process optimization are based on an objective function (time, cost, or another criterion) and a set of constraints (power, cutting force, surface finish, etc.). Process-optimization models will be discussed later in the chapter. Finding optimal cutting parameters is fairly complex and requires precise data for both the tool-life model and the machining model. Except in mass production, process optimization is generally not considered.
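The trade-off can be illustrated with a one-variable search over cutting speed, assuming Taylor's tool-life equation V T^n = C; every constant below is invented for illustration, not taken from the text:

```python
# A sketch of single-pass process optimization: choose the cutting
# speed that minimizes time per part, trading cutting time against
# tool changes. Taylor's tool life equation V * T^n = C is assumed,
# and every constant below is an illustrative value.
def unit_time(v, length=10.0, feed_ipr=0.01, n=0.25, c_taylor=400.0,
              handling=0.5, tool_change=1.0):
    rpm = 3.82 * v            # approx. 12/(pi*D) for an assumed 1-in. diameter
    t_cut = length / (feed_ipr * rpm)          # cutting time per part, min
    tool_life = (c_taylor / v) ** (1.0 / n)    # Taylor tool life, min
    # handling + cutting + tool-change time prorated per part:
    return handling + t_cut + tool_change * t_cut / tool_life

speeds = range(50, 401, 5)                     # candidate speeds, fpm
best_v = min(speeds, key=unit_time)
print(best_v)
```

Slower speeds inflate cutting time; faster speeds inflate tool-change time; the minimum lies between, exactly the trade-off described above.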
2.7.
Plan Analysis and Evaluation
In an old study conducted in industry, when several process planners were given the same part design, they all came up with quite different process plans. When adopted for the product, each
Tm = L / V,   V = f n

where

Tm = machining time, min
L = length of cut, in.
V = feed rate, ipm
f = feed, ipr (in. per revolution)
n = tool rpm
D = tool diameter, in.

For complex features, it is harder to estimate the length of the tool path, and a rough estimate may be used: the material-removal rate (MRR) of the tool is calculated, and the machining time is then the feature volume divided by the MRR. The MRR for turning and drilling is:

MRR = V A

where A = the cross-sectional area of the cut. For hole drilling, A = pi D^2 / 4; for turning, A = 2 pi r ap, where r = the cutting radius and ap = the depth of cut. Machining cost can be calculated as the machining time times a machine-and-operator overhead rate.
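The machining-time and MRR formulas above can be written directly as functions; the tool diameter, feed, and rpm in the example are illustrative values:

```python
# Tm = L / (f * n): cut length over feed rate.
# MRR = feed rate * cross-sectional area of the cut.
import math

def machining_time_min(length_in, feed_ipr, rpm):
    return length_in / (feed_ipr * rpm)

def mrr_drilling(diameter_in, feed_ipr, rpm):
    area = math.pi * diameter_in ** 2 / 4.0   # A = pi*D^2/4
    return feed_ipr * rpm * area              # in.^3/min

def mrr_turning(radius_in, depth_of_cut_in, feed_ipr, rpm):
    area = 2.0 * math.pi * radius_in * depth_of_cut_in  # A = 2*pi*r*ap
    return feed_ipr * rpm * area

# Illustrative: a 1-in. drill at 600 rpm and 0.008 ipr, cutting 3 in. deep
tm = machining_time_min(3.0, 0.008, 600)
print(round(tm, 3), round(mrr_drilling(1.0, 0.008, 600), 2))
```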
2.7.2.
Estimated Product Quality
The commonly considered technological capabilities of a process include tolerances and surface finish. Surface finish is determined by the process and the process parameters. It is affected not by the order in which processes are applied but only by the last process operated upon the feature. Tolerances, however, are the results of a sequence of processes, so operation sequence will affect the final tolerance. Using a simple 2D part, Figure 10 shows the result of different process sequences. The arrowed lines are dimension and tolerance lines. The drawing shows that the designer has specified dimensions and tolerances between features AB, BC, and CD. Notice that in this case the features are vertical surfaces. If one uses feature A as the setup reference for machining B, B for C, and C for D, the produced part tolerance will be the same as the process tolerance. However, it would be tedious to do it this way. One may instead use A as the reference for cutting B, C, and D. In this case, the tolerance on AB is the result of the process that cut B (from A to B). However, the tolerance on BC is the result of the processes that cut feature B (from A to B) and feature C (from A to C). The finished tolerance on BC is twice the process tolerance and twice that of AB. The same can be said for CD. If we choose D as the reference, of course, the tolerance on CD is smaller than that for AB and BC. So we may conclude that process sequence does affect the quality of the part produced. The question is whether the current process sequence satisfies the designed tolerance requirements. Often this question is answered with the tolerance-charting method, which will be introduced in the next section.
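The stack-up rule just described can be expressed in a few lines, assuming an illustrative process tolerance of 0.001 in. for every cut:

```python
# Worst-case stack-up for the 2D example above: the tolerance between
# two surfaces is the sum of the process tolerances of every cut the
# dimension depends on. The 0.001-in. process tolerance is an
# illustrative assumption.
def stack_tolerance(dim, cuts):
    """dim lists the cuts a dimension depends on; cuts maps cut -> tolerance."""
    return sum(cuts[c] for c in dim)

# All cuts referenced from surface A, each with 0.001 in. process tolerance:
cuts = {"A->B": 0.001, "A->C": 0.001, "A->D": 0.001}
print(stack_tolerance(["A->B"], cuts))          # AB: one cut
print(stack_tolerance(["A->B", "A->C"], cuts))  # BC: two cuts, twice AB
print(stack_tolerance(["A->C", "A->D"], cuts))  # CD: two cuts
```

This reproduces the text's conclusion: referencing every cut from A doubles the resultant tolerance on BC and CD relative to AB.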
3.
TOOLS FOR PROCESS PLANNING
Process-planning steps were introduced in the previous section. This section dis
cusses the tools used to carry out these steps. Manual process planning, which r
elies on human experience, will not be discussed. The tools discussed in this se
ction are those used in computer-aided process-planning systems. They are used t
o assist the human planner in developing process plans. Most of these tools have
been used in practical process-planning systems. Methodologies or algorithms us
ed only in advanced research will not be introduced here.
them based on the part family, one will find that the process plans for the family members are similar. Summarizing the process plans for the family allows a standard process plan to be defined. All parts in the family may share this standard process plan. When a new part is to be planned, one can find the part family of this part based on its geometric characteristics. The standard process plan for the family is then modified for the new part. Using group technology for manufacturing is like using a library to find references for writing a paper: without the library database, locating appropriate references will be much more difficult, and thus so will the writing.
3.1.2.
Coding and Classification
Group technology is based on the concept of similarity among parts. Classification, or taxonomy, is used for this purpose. The dictionary definition of taxonomy is "orderly classification of plants and animals according to their presumed natural relationships." Here, taxonomy is used to classify parts in manufacturing. There are many methods of part classification, to name just a few: visual observation, manual sorting of the parts, sorting photographs of the parts, and sorting engineering drawings. Because keeping the physical parts or drawings in the sorted order is tedious or sometimes impossible, it is necessary to create a convenient representation, called a coding system. A coding system uses a few digits or alphanumeric codes to represent a group (family) of similar parts. The classification system is embedded in the coding system. For example, one can easily see that parts can be classified as rotational and nonrotational. A crude coding system can have 0
representing any rotational part and 1 representing any nonrotational part. Rotational parts can further be classified as rods, cylinders, and disks, based on the length-to-diameter ratio. The code can be refined to have 0 represent rods, 1 cylinders, 2 disks, and 3 nonrotational parts. Further, the external and internal shapes of the part can be classified as smooth, stepped, screw-threaded, and so on. Additional digits may be used to represent the external and internal shapes. Each digit refines the classification or adds additional characteristics. There are many public-domain and proprietary coding systems. Opitz (Opitz 1970), DCLASS (Allen 1994), MICLASS (OIR 1983), and KK-3 (Chang et al. 1998) are but a few popular ones. The Opitz code (Figure 12), developed by Professor Opitz of Aachen University in the 1960s, uses five digits to represent the geometry of the part and four supplemental digits to represent part dimension, material, raw-material shape, and accuracy. If only the geometry is of concern, the supplemental digits need not be coded. Extensive code tables and illustrations are given to guide the user in coding parts. Given a five-digit code, one can have a rough idea of the shape of the part. The code can also be used to search the part database for similar parts. Other coding systems may be more detailed or cover a different part domain, but they all serve the same purpose.
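A toy version of such a crude first code digit can make the idea concrete. The length-to-diameter thresholds below are illustrative, not values from any published coding system:

```python
# A toy first digit in the spirit of the crude code above:
# rotational parts split into rod / cylinder / disk by L/D ratio,
# nonrotational parts lumped together. Thresholds are illustrative.
def code_part(rotational, length, diameter=None):
    if not rotational:
        return "3"                     # nonrotational
    ratio = length / diameter
    if ratio > 3:
        return "0"                     # rod: long and thin
    if ratio >= 0.5:
        return "1"                     # cylinder
    return "2"                         # disk: short and wide

print(code_part(True, 10.0, 1.0))      # rod
print(code_part(True, 1.0, 4.0))       # disk
print(code_part(False, 5.0))           # nonrotational
```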
3.1.3.
Family Formation
To take advantage of the similarity among parts, one must group parts into families. If geometry is used in defining the part family, the coding and classification system discussed in the previous subsection can be used. Such families are called design families because they are based on design geometry. However, often one would like to form part families based on the production methods used; in this case, the part family is called a production family. Because manufacturing methods are geometry related, members of a production family share many similar feature geometries as well. Families may be formed using the visual observation method, as mentioned above, or using the sorting approach. In forming design families, the coding system can be used as well. Parts with the same codes always belong to the same family. To enlarge the family, several codes may be included in the same family; the determination of which codes should be included is based purely on the applications considered. To form a production family, one has to consider the manufacturing method. The first well-known production family formation method was called production flow analysis (Burbidge 1975). First, production flows, or process sequences, for all parts are collected. An incidence matrix with columns representing each part and rows representing each process, machine, or operation code (a certain process performed on a machine) is prepared based on the production flows. By sorting the rows and columns of the matrix, one may move the entries along the diagonal of the matrix (see Figure 13). From the matrix, one may conclude that parts 8, 5, 7, 2 belong to one family and 4, 1, 3, 6, 9, 10 belong to another family. Family one needs processes P1, P5, P6, and P3. Family two needs processes P3, P2, and P4. This approach can be tedious and requires human judgment in separating families. As can be seen, P3 is needed for both families; it is not uncommon to have overlapping entries. Over the years, many mathematics-based sorting methods, called clustering analysis, have been developed. For more details see Kusiak (1990) and Chang et al. (1998).
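One simple clustering method of this kind is rank-order clustering, which repeatedly sorts rows and columns of the incidence matrix by their binary weights until the matrix stabilizes, pulling entries toward the diagonal. A sketch on an invented incidence matrix (not the one in Figure 13):

```python
# Rank-order clustering on a part/process incidence matrix: sort rows,
# then columns, by binary weight until nothing changes. The matrix
# below is illustrative.
def rank_order_cluster(matrix, parts, procs):
    def sort_rows(m, labels):
        # a row's weight is its bit pattern read as a binary number
        order = sorted(range(len(m)), key=lambda i: tuple(m[i]), reverse=True)
        return [m[i] for i in order], [labels[i] for i in order]

    def transpose(m):
        return [list(col) for col in zip(*m)]

    while True:
        m2, procs = sort_rows(matrix, procs)
        cols, parts = sort_rows(transpose(m2), parts)
        m2 = transpose(cols)
        if m2 == matrix:
            return m2, parts, procs
        matrix = m2

incidence = [          # rows: processes P1..P4, cols: parts 1..5
    [1, 0, 1, 0, 1],   # P1
    [0, 1, 0, 1, 0],   # P2
    [1, 0, 1, 0, 1],   # P3
    [0, 1, 0, 1, 0],   # P4
]
m, parts, procs = rank_order_cluster(incidence, [1, 2, 3, 4, 5],
                                     ["P1", "P2", "P3", "P4"])
for row in m:
    print(row)
print(parts, procs)
```

The sorted matrix shows two clean blocks: parts 1, 3, 5 on processes P1 and P3, and parts 2, 4 on P2 and P4. Real matrices usually contain overlapping entries (like P3 in the text's example), which is where human judgment is still needed.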
3.1.4.
Composite Component Concept
Given a family of parts, one can identify several features. When one takes all the features and merges them into an imaginary part, this imaginary part, called a composite component, is a superset containing the features of the family members. The composite component concept was developed before World War II in Russia, where it was used to design flexible fixtures for a part family. By adjustment of the fixture, it can accommodate all the parts belonging to a family. This concept can also be used in design. For example, parametric design uses a parameterized design model for a narrowly defined
Figure 12 Tolerances (features A, B, C, D).
3.2.1.
Process for Features Mapping
The geometric capabilities of the processes are summarized in Table 3. As can be seen, milling can create many different features (Volume Capabilities column). Drilling, reaming, and boring primarily create holes. Turning can create various axially symmetric parts. Other processes are also listed in the table.
Figure 14 Drill Bit (cutting edge).

Table 3 pairs each process with the volumes it can create: milling and end milling (T-slot, internal groove, pocket, slot, flat and sculptured surfaces, flat-bottom volumes); drilling (round holes: deep, large, shallow, multiple-diameter, countersink, counterbore); reaming and boring (thin walls of round holes); turning (disks, round holes, thin walls of round holes); broaching (flat-bottom volumes, slots, steps, polyhedral through holes, formed through volumes); sawing, shaping, and planing (hacksaw, bandsaw, circular saw, form tools, and inserted tools); grinding (cylindrical, centerless, internal, external, and surface grinding with grinding wheels and points); and honing, lapping, and tapping (most surfaces; threaded walls of holes).
3.4.
Cost Model
Another extremely important factor is process economics. We are always interested in finding the most economical solution; often it means the survival of the company. Process economics means the cost efficiency of the processes. For mass production, a very detailed economic analysis is necessary before a specific processing method can be selected. However, for the usual small-to-medium batch production, it is not practical to conduct a very detailed study; the savings cannot justify the amount of effort spent. Some rough estimation, or just common sense, should be used to select the better process. Whenever there are two or more candidate processes, each technologically suitable for the task, it is time to compare their relative costs. A process cost model can be stated as:

C = labor cost + machine overhead + tool-change cost + tool cost

C = Cm (Tm + Th) + (Ct + Cm Tt) (Tm / Tl)
Figure 15 Process Constraints (examples: broaching a blind hole; milling a T-slot without a top slot; passing a large-hole reamer through a small slot; a small hole off-center in a large hole; a centerline in a void; reaming to the bottom of a hole; flat-bottom holes; threading to the bottom; boring or reaming without a center hole; holes too close to each other or to a wall; drilling on a slant surface; milling a pocket or sawing an inner loop without a starting hole; a tool that cannot reach a recess; broaching a blind slot; grinding a corner; using a gun drill for deep holes).
where

C = total cost for the operation ($)
Cm = operator rate and machine overhead ($/hr)
Ct = cost of tool ($)
Tm = processing time (hr)
Th = material-handling time, if any (hr)
Tt = tool-change time (hr)
Tl = tool life (hr)

In the equation, Tm / Tl is the number of tool changes for the operation. It is determined by the tool life and the processing time. Processing time can be calculated as the necessary tool travel
MANUFACTURING PROCESS PLANNING AND DESIGN

TABLE 4 Cost of Machinery: approximate price ranges (in $000, roughly $5 to $1,000) by type of machinery (broaching; drilling; electrical discharge; electromagnetic and electrohydraulic; gear shaping; cylindrical and surface grinding; headers; injection molding; jig boring; horizontal boring mill; flexible manufacturing system; single- and multi-spindle automatic and vertical turret lathes; machining center; mechanical press; milling; robots; roll forming; rubber forming). From S. Kalpakjian, Manufacturing Engineering and Technology, 3rd Ed., 1995. Reprinted by permission of Prentice-Hall, Inc., Upper Saddle River, NJ.
divided by the feed speed. For example, for drilling an x-in.-deep hole using a feed speed of a ipm, Tm = x / a / 60 hr. The tool life equation can be expressed as (for milling):

Tl = C / (V^alpha f^beta ap^gamma)

where

C = a constant determined by the tool geometry, tool material, and workpiece material
V = the cutting speed, fpm
f = the feed, ipr
ap = the depth of cut, in.
alpha, beta, gamma = coefficients
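The cost model and the tool-life equation can be combined in a short calculator; every numeric input below is an illustrative assumption, not data from the handbook:

```python
# The process cost model C = Cm*(Tm + Th) + (Ct + Cm*Tt)*(Tm/Tl) and
# an extended-Taylor tool life Tl = C/(V^alpha * f^beta * ap^gamma).
# All numeric inputs are illustrative assumptions.
def tool_life_hr(c, v, f, ap, alpha, beta, gamma):
    return c / (v ** alpha * f ** beta * ap ** gamma)

def operation_cost(cm, ct, tm, th, tt, tl):
    """All times in hours; cm in $/hr; ct in $ per tool."""
    return cm * (tm + th) + (ct + cm * tt) * (tm / tl)

# Assumed tool-life constants and cutting conditions:
tl = tool_life_hr(c=60.0, v=200.0, f=0.01, ap=0.1,
                  alpha=1.0, beta=0.2, gamma=0.1)   # just under 1 hr
# $40/hr rate and overhead, $10 tool, 0.5 hr cutting, 0.1 hr handling,
# 0.05 hr per tool change:
print(round(tl, 3), round(operation_cost(40, 10, 0.5, 0.1, 0.05, tl), 2))
```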
Unfortunately, several difficulties prohibit us from using this model to predict the operation cost. First, the cutter path is not known at process-selection time, and generating a cutter path for each possible process to be machined would be very time consuming. The second problem is the availability of coefficients for each combination of tool-material type and workpiece-material type. There is little published data for tool life equations; most tool life and machinability data are published in terms of recommended feed and speed. With these two major problems, this approach will probably not work for real-world problems. A quick and dirty way must be found to estimate the cost. Since we are dealing with the machining of a single feature, it is reasonable to assume that the material-handling time is negligible. The chance of changing a tool during the operation is also minimal. Also, the feed and speed recommended by the Machining Data Handbook (Metcut 1980) usually make the tool life about 60 minutes. Since the recommended feed and speed are what are used in most machining operations, it is reasonable to assume that Tl = 1 hr. Therefore, the cost function can be simplified to:
TABLE 5 Process Technological Capabilities: cutter types, size ranges, and achievable tolerances and surface finishes for milling (face, peripheral, and end milling with plain, inserted-tooth, slitting-saw, form, staggered-tooth, angle, T-slot, Woodruff keyseat, shell-end, hollow-end, and ball-end cutters), drilling (twist, spade, trepanning, center, combination, countersink, counterbore, deep-hole, and gun drills), reaming (shell, expansion, adjustable, and taper reamers), boring (adjustable and simple boring bars), turning (turning, facing, parting, knurling, boring, drilling, reaming, and form tools), broaching, sawing (hacksaw, bandsaw, circular saw), shaping and planing (form and inserted tools), and grinding (cylindrical, centerless, internal, external, and surface).
C = (Cm + Ct) Tm / 60

The machining time (in minutes) can be estimated from the material-removal rate (MRR) and the volume V to be removed (the current feature volume):

Tm = V / MRR

Therefore, the cost function can be rewritten as:

C = (Cm + Ct) V / (MRR x 60)
The maximum material-removal-rate data can be estimated using some simple equations (Chang 1990). First the tool size, feed, and speed need to be found. The volume to be removed can be estimated much more easily than the length of the cutter path. Cost data can also be obtained from the accounting department. With these data, the processing cost can be calculated. This cost information can be used to select the most economical process for machining a volumetric feature. Because two variables in the processing cost function, Ct and MRR, are related to the process, the other capabilities of interest are tool cost and material-removal rate. These capabilities should be used for selecting economical machining processes.

Example. A 1-in.-diameter hole is to be drilled 3 in. deep. The machine and operator rate is $40/hr. The tool cost is $10 each. The material-removal rate is estimated at 6.93 in.^3/min. What is the production cost of the hole?

V = pi (1)^2 (3) / 4 = 2.356 in.^3

C = (40 + 10) x 2.356 / (6.93 x 60) = $0.283
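The worked example can be checked in code; the 1-in. diameter, 6.93 in.^3/min MRR, and cost rates follow the numbers in the example above:

```python
# Checking the drilling-cost example: C = (Cm + Ct) * V / (MRR * 60).
import math

def feature_cost(cm_per_hr, ct, volume_in3, mrr_in3_per_min):
    return (cm_per_hr + ct) * volume_in3 / (mrr_in3_per_min * 60.0)

# 1-in. diameter, 3 in. deep: V = pi*D^2/4 * depth
volume = math.pi * 1.0 ** 2 / 4.0 * 3.0
print(round(volume, 3), round(feature_cost(40, 10, volume, 6.93), 3))
```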
The above model does not consider the fixed cost of tooling. The tool cost used in the model is the incremental tool cost. If special tools are needed, a fixed cost may be incurred; in that case, the fixed cost must be evenly distributed over the entire batch of parts made.
3.5.
Tolerance Charting
Tolerance charting is a method for checking the proper in-process dimensions and tolerances of a process plan. It is used to verify whether the process sequence will yield the designed dimensions and tolerances. In most of the literature, process is replaced by operation; in this section we will use the term process. Tolerance charting begins with a part drawing and the process plan. On the process plan are processes and machines. The consequences of processes, in terms of resultant dimensions and tolerances, are marked on the chart. The processes that were used to produce each dimension and tolerance are labeled for tracing. This is done step by step, following the sequence of processes. Finally, the specified dimensions and tolerances are compared with the resultant dimensions and tolerances. If any of the dimensions and tolerances are not satisfied, one can trace back to the sources; then either a different process / machine is used to reduce the process tolerance or the process sequence is changed. Figure 16 illustrates how a simplified tolerance chart works. The example part is a cylindrical part to be turned. The calculation section of the chart is omitted. At the top of the chart is the design. Note that the tolerance chart can handle one dimension at a time. The drawing is 2D, and features are vertical lines (representing surfaces). The solid line shows the boundary of a cylinder with a greater diameter at the center. The dashed line shows the stock boundary that encloses the part boundary. The designer has specified dimensions and tolerances between three feature sets. The dimension values are omitted in this example. The next section is the dimension and tolerance section. The thick horizontal lines show where the dimension and tolerance are specified. For example, the overall dimension is 3 and the tolerance is 0.01. The following section is the process-plan section. Four cuts are shown in the process plan. The first two cuts (10 and 12) use the left-hand side of the stock as the reference. They create two surfaces: surface C and surface D (at the same time, the diameters are turned). The next two cuts (20 and 22) create the dimensions between BD and AD. Dimension AB is the result of cuts 20 and 22; therefore, the tolerance for AB equals the sum of the process tolerances for 20 and 22. In this case both are the same. To achieve the designed tolerance of 0.01, the process
was performed automatically by the system. Process planners simply filled in the details. The storage and retrieval of plans are based on part number, part name, or project ID. When used effectively, these systems can save up to 40% of a process planner's time. A typical example can be found in Lockheed's CAP system (1981). An example of a modern version is Pro/Process for Manufacturing (launched in 1996 and since discontinued). Such a system can by no means perform the process-planning tasks; rather, it helps reduce the clerical work required of the process planner. The typical organization of using a process-planning system is shown in Figure 17. A human planner interprets an engineering drawing and translates it into the input data format for a process-planning system. Either interactively or automatically, a process plan is produced. The plan is then used by production planners for scheduling production and by industrial engineers to lay out the manufacturing cell and calculate production cost and time. A part programmer follows the instructions on the process plan and the engineering drawing to prepare NC (numerical control) part programs. The same organization applies to all kinds of process-planning systems. Perhaps the best-known automated process-planning system is the CAM-I automated process planning system (CAPP) (Link 1976). (CAM-I stands for Computer-Aided Manufacturing International, a nonprofit industrial research organization.) In CAPP, previously prepared process plans are stored in a database. When a new component is planned, a process plan for a similar component is retrieved and subsequently modified by a process planner to satisfy special requirements. The tech-
Figure 17 From Design to Part Program: a product concept is dimensioned by handbook lookup, modeled in CAD, processed in CAM, and output as an NC part program, for example:

N0010 G70 G90 T08 M06
N0020 G00 X2.125 Y-0.475 Z4.000 S3157
N0030 G01 Z1.500 F63 M03
N0040 G01 Y4.100
N0050 G01 X2.625
N0060 G01 Y1.375
N0070 G01 X3.000
N0080 G03 Y2.625 I3.000 J2.000
N0090 G01 Y2.000
N0100 G01 X2.625
N0110 G01 Y-0.100
N0120 G00 Z4.000 T02 M05
N0130 F9.16 S509 M06
N0140 G81 X0.750 Y1.000 Z-0.1 R2.100 M03
N0150 G81 X0.750 Y3.000 Z-0.1 R2.100
N0160 G00 X-1.000 Y-1.000 M30
476
TECHNOLOGY
be used as a tool to assist in the identi cation of similar plans, retrieving them
and editing the plans to suit the requirements for speci c parts. A variant proce
ss planning system includes the following functions:
Family formation
Standard plan preparation
Plan editing
Databases
In order to implement such a concept, GT-based part coding and classification are used as a foundation. Individual parts are coded based upon several characteristics and attributes. Part families are created of like parts having sufficiently common attributes to group them into a family. This family formation is determined by analyzing the codes of the part spectrum. A standard plan, consisting of a process plan to manufacture the entire family, is created and stored for each part family. The development of a variant process-planning system has two stages: the preparatory stage and the production stage. During the preparatory stage, existing components are coded, classified, and later grouped into families (Figure 19). The part family formation can be performed in several ways. Families can be formed based on geometric shapes or process similarities. Several methods can be used to form these groupings. A simple approach would be to compare the similarity of a part's code with other part codes. Since similar parts have similar code characteristics, a logic that compares part of the code or the entire code can be used to determine similarity between parts. Families can often be described by a set of family matrices. Each family has a binary matrix with a column for each digit in the code and a row for each value a code digit can have. A nonzero entry in the matrix indicates that the particular digit can have the value of that row. For example, entry (3,2) equal to one implies that a code x3xxx can be a member of the family. Since the processes of all family members are similar, a standard plan can be assigned to the family. The standard plan is structured and stored in a coded manner using operation codes (OP codes). An operation code represents a series of operations on one machine / workstation. For example, an OP code DRL10 may represent the sequence center drill, change drill, drill hole, change to reamer, and ream hole. A series of OP codes constitutes the representation of the standard process plan. Before the system can be of any use, coding, classification, family formation, and standard plan preparation must be completed. The effectiveness and performance of the variant process-planning system depend to a very large extent on the effort put forth at this stage. The preparatory stage is a very time-consuming process. The production stage occurs when the system is ready for production. New components can be planned in this stage. An incoming component is first coded. The code is then sent to a part family search routine to find the family to which it belongs. Because the standard plan is indexed by the family number, the standard plan can be easily retrieved from the database. Because the standard
[Figure: variant process planning, with a preparatory stage (part coding, part family formation, standard plan preparation) and a production stage (part coding, part family search, process plan retrieval against the standard and individual process plans, yielding a finished process plan).]
plan is prepared for the entire family, some editing is usually required to produce the final plan.
The variant approach is the most popular approach in industry today. Most workin
g systems are of this type, such as CAPP of CAM-I (Link 1976) and Multiplan of O
IR (OIR 1983).
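To make the retrieval mechanism concrete, the following Python sketch builds binary family matrices from member codes, tests membership digit by digit (as in the entry (3,2) example above), and retrieves the family's standard plan as a sequence of OP codes. All part codes, family names, and OP codes here are invented for illustration; they are not drawn from CAPP or Multiplan.

```python
# Variant process planning sketch: family matching via binary family
# matrices, then retrieval of the family's standard plan (OP codes).
# All data below are illustrative, not from any real CAPP system.

# A family matrix has one row per possible digit value (0-9) and one
# column per code digit; matrix[v][d] == 1 means digit d may take value v.
def build_family_matrix(member_codes):
    digits = len(member_codes[0])
    matrix = [[0] * digits for _ in range(10)]
    for code in member_codes:
        for d, ch in enumerate(code):
            matrix[int(ch)][d] = 1
    return matrix

def belongs_to_family(code, matrix):
    # Every digit of the code must have a nonzero entry in the matrix.
    return all(matrix[int(ch)][d] == 1 for d, ch in enumerate(code))

# Standard plans stored per family as OP-code sequences; e.g. DRL10 might
# stand for: center drill, change drill, drill hole, change to reamer, ream.
standard_plans = {
    "rotational_small": ["TRN01", "DRL10", "GRD05"],
    "prismatic_plate":  ["MIL02", "DRL10", "INS01"],
}

families = {
    "rotational_small": build_family_matrix(["13202", "13212", "13102"]),
    "prismatic_plate":  build_family_matrix(["43110", "43120"]),
}

def retrieve_plan(code):
    for name, matrix in families.items():
        if belongs_to_family(code, matrix):
            return name, standard_plans[name]
    return None, []  # no family found: plan manually, then add to database

name, plan = retrieve_plan("13112")
print(name, plan)  # rotational_small ['TRN01', 'DRL10', 'GRD05']
```

The matrix test mirrors the preparatory/production split described above: the matrices and plans are built once, and each incoming code is only matched and the standard plan edited.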
4.2. Generative Approach
Generative process planning is the second type of computer-aided process planning. It can be concisely defined as a system that automatically synthesizes a process plan for a new component. The generative approach envisions the creation of a process plan from information available in a manufacturing database without human intervention. Upon receiving the design model, the system is able to generate the required operations and operation sequence for the component. A generative process-planning system consists of the following important functions:
Design representation
Feature recognition
Knowledge representation
System structures
Knowledge of manufacturing has to be captured and encoded into computer programs. A process planner's decision-making process can be imitated by applying decision logic. Other planning functions, such as machine selection, tool selection, and process optimization, can also be automated using generative planning techniques. A generative process-planning system contains three main components:
Part description
Manufacturing databases
Decision-making logic and algorithms
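As an illustration of decision-making logic, the following Python sketch applies simple rules that map a feature description to a process chain, imitating a planner's decisions as described above. The feature types, tolerance thresholds, and process names are assumptions chosen for the example, not taken from any actual generative system.

```python
# Sketch of generative decision logic: rules map a machining feature and
# its requirements to a process chain. The features, tolerance limits, and
# process names are illustrative assumptions, not from any real system.

def select_processes(feature):
    kind = feature["type"]
    tol = feature["tolerance"]      # inches
    finish = feature["finish"]      # microinches Ra
    if kind == "hole":
        chain = ["twist_drill"]
        if tol < 0.001:             # tight tolerance: add finishing passes
            chain += ["bore", "ream"]
        elif tol < 0.01:
            chain += ["ream"]
        if finish < 32:
            chain.append("hone")
        return chain
    if kind == "flat_face":
        chain = ["rough_mill", "finish_mill"]
        if finish < 16:
            chain.append("grind")
        return chain
    raise ValueError(f"no rule for feature type: {kind}")

plan = select_processes({"type": "hole", "tolerance": 0.005, "finish": 63})
print(plan)  # ['twist_drill', 'ream']
```

A real system would encode such rules in a decision tree or knowledge base and drive them from recognized features rather than hand-written dictionaries.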
The definition of generative process planning used in industry today is somewhat relaxed. Thus, systems that contain some decision-making capability in process selection are called generative systems. Some of the so-called generative systems use a decision tree to retrieve a standard plan. Generative process planning is r... key developmen... The part to be produ... format. The capt... be incorporate... ...stics of the competing systems based on weights. Table 6 shows a comparison table for two process-planning systems. In the table are 12 major categories, most of which are divided into several subcategories. Weights can be assigned
MANUFACTURING PROCESS PLANNING AND DESIGN
TABLE 6 System Comparison Table (each subcategory scored 0-5; weighted composite totals: System 1 = 65, System 2 = 101)
1. Input data: 1.1. CAD file format
2. Workpiece understanding: 2.1. Shape analysis; 2.2. Material analysis; 2.3. Specification analysis; 2.4. Tolerance analysis; Average
3. Process selection: 3.1. Feature hierarchy; 3.2. Tolerance analysis; 3.3. Process capability model; 3.4. Specification analysis; 3.5. Cost analysis; 3.6. Capacity analysis; Average
4. Machine tool management: 4.1. Machine tool selection; 4.2. Machine capability model; 4.3. Maintenance planning; 4.4. Environmental analysis; 4.5. Flow analysis; 4.6. Life-cycle analysis; 4.7. Supplier development; 4.8. Control specification; 4.9. Facility planning; Average
5. Quality management: 5.1. Process-control planning; 5.2. Gage planning; 5.3. Scrap and rework management; 5.4. Quality assurance; Average
6. Process design: 6.1. Tool selection; 6.2. Fixture design; 6.3. Gage design; 6.4. Tool path generation; 6.5. Operation sequencing; 6.6. Process parameters; 6.7. Feature accessibility; 6.8. Tolerance analysis; Average
7. Evaluation and validation: 7.1. Target cost; 7.2. Tool path verification; 7.3. Workpiece quality; 7.4. Process reliability; 7.5. Production times; Average
8. Document generation: 8.1. Tool / gage orders; 8.2. Equipment orders; 8.3. Operation sheets; 8.4. Process routing; 8.5. Part programs; 8.6. Setup drawings; Average
9. Machine domain; 10. Part domain; 11. Platform; 12. Technical support
Totals: System 1 = 65; System 2 = 101
to either each major category or subcategory. The total weights may be used to determine the final selection. The comparison is not limited to pair comparison as in the example; more systems can be added to the table. The meanings of the categories in Table 6 are defined below:
Input data: The process-planning system must interface with the existing CAD design system. The system can either read the native CAD data or accept input through a data-exchange format such as IGES or STEP.
  CAD file format: the design data files accepted, e.g., Pro / E, CADDS, CATIA, IGES, STEP.
Workpiece understanding:
  Shape analysis: feature recognition; converting design data into manufacturing feature data.
  Material analysis: identification of raw material shape, sizes, type (cast, bar, etc.), and weight.
  Specification analysis: extracting other manufacturing specifications from the design (e.g., paint, hardness, surface treatment).
  Tolerance analysis: extracting tolerance data from the design.
Process selection:
  Feature hierarchy: process selection based on hierarchically arranged manufacturing features.
  Tolerance analysis: process selection based on the tolerance capabilities of processes vs. the design specifications.
  Process capability model: a computer model that captures the capability of a process. Process models are used in process selection. They may be customized.
  Specification analysis: understanding other manufacturing specifications and using them as the basis for selecting processes.
  Cost analysis: estimating and analyzing manufacturing cost and using it as the basis for selecting processes.
  Capacity analysis: machine selection with consideration of the throughput of each machine.
Machine tool management:
  Machine tool selection: selecting appropriate machine tools for the job.
  Machine capability model: a computer model that captures the process capability of a machine tool.
  Maintenance planning: maintenance schedules for machine tools.
  Environmental analysis: assessment of environmental requirements (temperature, pressure, humidity) necessary to achieve quality targets.
  Flow analysis: production flow analysis (i.e., throughput).
  Life-cycle analysis: the economic analysis, from cradle to grave, of asset costs to derive metrics like ROA.
  Supplier development: identifying suppliers for the machine tool.
  Control specification: managing NC controller specifications for postprocessors.
  Facility planning: integration with the facility-planning functions.
Quality management:
  Process control planning: generating control charts for process control.
  Gage planning: CMM programming, gage calibration, certification planning.
  Scrap and rework management: planning for the disposition of parts for scrap and rework.
  Quality assurance: generating a quality assurance plan for certification. Process plans may be used for this purpose.
Process design:
  Tool selection: selecting tools to be used for the machining of a part.
  Fixture design: designing the fixture necessary for holding the workpiece under a given setup.
  Gage design: designing the gage for inspecting the workpiece.
  Tool path generation: generating the NC tool path for the part.
  Operation sequencing: sequencing the NC tool path for the part.
  Process parameters: selecting process parameters, e.g., feed and speed, for each tool / cut.
  Feature accessibility: analyzing the tool accessibility of each feature to be machined.
  Tolerance analysis: comparing the tolerance specification against the tolerance capabilities of the selected tools and machine tools; stacking up tolerances based on the setup and machine accuracy and setup error.
Evaluation and validation:
  Target cost: estimating the manufacturing cost of the part.
  Tool path verification: graphically simulating the tool motion to ensure that the tool path has no collision with the fixture and the machine and produces correct part geometry.
  Workpiece quality: confirming that quality indices are met.
  Process reliability: reliability indices for the processes selected for the part.
  Production times: estimating the time it takes to produce the part.
Document generation:
  Tool / gage orders: job orders for tools and gages needed.
  Equipment orders: job orders for equipment used.
  Operation sheets: detailed descriptions of each operation and operation parameters, including gaging details / instructions.
  Process routing: process routing sheet; lists operation sequence and processes.
  Part programs: NC part program and part program format, e.g., RS-274, CL, BCL, APT source, COMPACT II source, postprocessed N-G code.
  Setup drawings: drawings of the workpiece with fixture for each setup.
Machine domain: machine tools that can be modeled and planned by the system.
Part domain: types of part that can be planned by the system (e.g., turned, prismatic, casting).
Platform: computer platforms on which the software can be used.
Technical support: technical support provided by the vendor.
Total: total composite score based on the criteria above.
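The tolerance-analysis step above can be illustrated with a small stack-up check. This sketch compares a worst-case stack of per-setup error contributions (machine accuracy, fixturing, tool wear) against a design tolerance, alongside the statistical root-sum-square stack; the numeric values are assumed for illustration only.

```python
# Tolerance stack-up sketch for the tolerance-analysis step: compare the
# stacked error contributions of a setup against the design tolerance.
# The contribution values below are illustrative assumptions.
import math

def worst_case_stack(contributions):
    # Worst case: all errors add in the same direction
    return sum(abs(c) for c in contributions)

def rss_stack(contributions):
    # Root-sum-square: statistical stack-up for independent errors
    return math.sqrt(sum(c * c for c in contributions))

# machine accuracy, fixture locating error, tool wear allowance (inches)
errors = [0.002, 0.001, 0.002]
design_tolerance = 0.004

print(worst_case_stack(errors) <= design_tolerance)  # False: worst case fails
print(rss_stack(errors) <= design_tolerance)         # True: passes statistically
```

A planner facing the worst-case failure would either select a more accurate machine or process, or reduce the number of setups contributing to the stack.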
A composite score may not be sufficient for making the final decision. It is also necessary to record the specifications in each subcategory. Table 7 illustrates such a table for the same comparison as in Table 6. With these two tables, an appropriate computer-aided process planning system can be selected.
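The weighting scheme discussed above amounts to a weighted sum per system: each category receives a weight, each system a 0-5 score, and the totals guide (but do not by themselves decide) the selection. A minimal sketch, with assumed category weights and scores rather than the actual Table 6 data:

```python
# Weighted composite scoring sketch for comparing process-planning
# systems, in the spirit of Table 6. The category weights and scores
# below are illustrative, not the Table 6 data.

def composite_score(weights, scores):
    assert weights.keys() == scores.keys()
    return sum(weights[c] * scores[c] for c in weights)

weights = {"input_data": 2, "process_selection": 3, "document_generation": 1}
system_1 = {"input_data": 1, "process_selection": 2, "document_generation": 3}
system_2 = {"input_data": 4, "process_selection": 3, "document_generation": 4}

print(composite_score(weights, system_1))  # 11
print(composite_score(weights, system_2))  # 21
```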
6. CONCLUSIONS
In this chapter, we have provided a general overview of manufacturing process planning and design. We have also discussed the tools used in process planning. As mentioned earlier in the chapter, automated process planning is especially important for small-batch production. At a time when competition is keen and product changeover rapid, shortening production lead time and reducing production cost are critical to the survival of the modern manufacturing industry. Process planning plays an important role in the product-realization process. In order to achieve the goals stated above, it is essential to put more focus on this planning function. Process planning is a function that requires much human judgment. Over the past three decades, much effort has been put into automating process planning. Although progress has been made on geometric modeling and reasoning, knowledge-based systems, and computational geometry for tool-path generation, no robust integrated process-planning system has been developed. Developing an automated process-planning system is still a challenge to be fulfilled. In the meantime, many of these technologies have been integrated into CAD / CAM systems. They provide human planners with powerful tools for developing process plans. We can expect an increasing amount of planning automation to be made available to manufacturing engineers. This chapter provides a general background to this important topic. Additional readings are included for those interested in a more in-depth investigation of the subject.
REFERENCES
Allen, D. K. (1994), "Group Technology," Journal of Applied Manufacturing Systems, Vol. 6, No. 2, pp. 37-46.
Berra, P. B., and Barash, M. M. (1968), "Investigation of Automated Process Planning and Optimization of Metal Working Processes," Report 14, Purdue Laboratory for Applied Industrial Control, West Lafayette, IN, July.
Burbidge, J. L. (1975), The Introduction of Group Technology, John Wiley & Sons, New York.
Chang, T. C. (1990), Expert Process Planning for Manufacturing, Addison-Wesley, Reading, MA.
depending on each company's strategy. Today's products are becoming much more complex and difficult to design and manufacture. One example is the automobile, which is becoming more complex, with computer-controlled ignition, braking, and maintenance systems. To avoid long design times for the more complex products, companies should develop tools and adopt new technologies, such as concurrent engineering, and at the same time improve their design and manufacturing processes. Higher quality is a basic demand of customers, who want their money's worth for the products they buy. This applies to both consumers and industrial customers. Improved quality can be achieved
through better design and better quality control in the manufacturing operation. Besides demanding higher quality, customers are not satisfied with basic products with no options. There is a competitive advantage in having a broad product line with many versions, or a few basic models that can be customized. A brand-new concept in manufacturing is to involve users in the product design. With the aid of design tools or a modeling box, the company allows users to design products to their own preferences. In the past, once a product was designed, it had a long life over which to recover its development costs. Today many products, especially high-technology products, have a relatively short life cycle. This change has two implications. First, companies must design products and get them to market faster. Second, a shorter product life provides less time over which to recover the development costs. Companies should therefore use new technologies to reduce both time and cost in product design. Concurrent engineering is one method for improving product design efficiency and reducing product costs. Another new paradigm is agile manufacturing, in which the costs and risks of new product development are distributed to partners and the benefits are shared among the partners. This requires changes to, or reengineering of, traditional organization structures. Several demographic trends are seriously affecting manufacturing employment. The education level and expectations of people are changing. Fewer new workers are interested in manufacturing jobs, especially unskilled and semiskilled ones. The lack of new employees for the skilled jobs that are essential for a factory is even more critical. On the other hand, many people may not have sufficient education to qualify for these jobs (Bray 1988). To win in the global market, manufacturing companies should improve their competitiveness. Key elements include creative new products, higher quality, better service, greater agility, and low environmental pollution. Creative new products are of vital importance to companies in the current knowledge economy. Figure 1 presents a market-change graph. From this figure, it can be seen that lot sizes and repetitive orders are decreasing, product life cycles are shortening, and product variety is increasing rapidly. End users and customers always need new products with advancements in function, operation, and energy consumption. A company can receive greater benefits through new products; a manufacturing company without new products has little chance of surviving in the future market. Better service is needed from any kind of company. For manufacturing companies, better service means delivering products fast, making products easy to use, and satisfying customer needs with low prices and rapid response to maintenance requests.
2.2. Features of a General Manufacturing System
The manufacturing company is a complex, dynamic, and stochastic entity consisting of a number of semi-independent subsystems interacting and intercommunicating in an attempt to make the overall system function profitably. The complexity comes from the heterogeneous environment (both hardware and software), the huge quantity of data, and the uncertain external environment. The complex structure of the system and the complex relationships between the interacting semi-autonomous subsystems also make the system more complicated.
[Figure 1 Market Change: from 1970 to 2000, product life cycle, repetitive orders, and lot size decline while product variety increases.]
[Figure 2 A Seven-Level Manufacturing Hierarchy (from Rogers et al. 1992): the levels include cell, workstation, machine, and sensor / actuator; the lowest level handles control of individual actuation subsystems and processing of sensory information feedback.]
Other definitions have pointed out that CIM is a philosophy for operating a manufacturing company. For example: "CIM is an operating philosophy aiming at greater efficiency across the whole cycle of product design, manufacturing, and marketing, thereby improving quality, productivity, and competitiveness" (Greenwood 1989). To stress the importance of integration, the Computer and Automation Systems Association of the Society of Manufacturing Engineers gives the following definition: "CIM is the integration of the total manufacturing enterprise through the use of integrated systems and data communications coupled with new managerial philosophies that improve organizational and personnel efficiency" (Singh 1996). CIM does not mean replacing people with machines or computers so as to create a totally automatic business and manufacturing process. It is not necessary to build a fully automatic factory in order to implement a CIM system. It is especially unwise to put a huge investment into purchasing highly automated, flexible manufacturing systems to improve manufacturing standards if the bottleneck in the company's competitiveness is not in this area. In the current situation, design standards for creative and customized products matter more than production capability in winning the market competition. The importance of human factors should be emphasized. Humans play a very important role in CIM design, implementation, and operation. Although computer applications and artificial intelligence technologies have made much progress, computers will not replace people, even in the future. To stress the importance of the role of humans, the idea of human-centered CIM has been proposed. Two views of CIM can be drawn from these definitions: the system view and the information view. The system view looks at all the activities of a company. The different functions and activities cannot be analyzed and improved separately. The company can operate in an efficient and profitable way only if these different functions and activities run in an integrated and coordinated environment and are optimized over the global system. The SME CIM wheel (Figure 3) provides a clear portrayal of the relationships among all parts of an enterprise. It illustrates a three-layered integration structure of an enterprise.
[Figure 3 The SME CIM Wheel. (From Systems Approach to Computer-Integrated Design and Manufacturing, by N. Singh, copyright 1996 by John Wiley and Sons, Inc. Reprinted by permission of John Wiley & Sons, Inc.) The wheel arranges functions such as manufacturing, factory automation, assembly, design, materials processing, inspection / test, quality and process facilities planning, documentation, analysis and simulation, scheduling, manufacturing planning, strategic planning, marketing, finance, and human resource management around a hub of integrated systems architecture, common data, and information resource management and communications.]
any problems of the industry, integration has to proceed on more than one operational aspect. The AMICE (European Computer Integrated Manufacturing Architecture) project identifies three levels of integration, covering physical system, application, and business integration (see Figure 4). Business integration is concerned with the integration of those functions that manage, control, and monitor business processes. It provides supervisory control of the operational processes and coordinates the day-to-day execution of activities at the application level. Application integration is concerned with the control and integration of applications. Integration at this level means providing a sufficient information technology infrastructure to permit system-wide access to all relevant information regardless of where the data reside. Physical system integration is concerned with the interconnection of manufacturing automation and data-processing facilities to permit the interchange of information between the so-called islands of automation (intersystem communications). The interconnection of physical systems was the first integration requirement to be recognized and fulfilled. Even when business integration has been achieved at a given time, business opportunities, new technologies, and modified legislation will make integration a vision rather than an achievable goal. However, this vision will drive the management of the required changes in the enterprise operation.
[Figure 4 Three Levels of Integration. (From Esprit Consortium AMICE 1993. Reprinted by permission of Springer-Verlag.) Along the CIM evolution axis: business integration (knowledge-based decision support, automated business process monitoring, production and process simulation); application integration (portable applications, distributed processing, common services / execution environment, common (shared) data resources); physical system integration (intersystem communication / network configuration and management, data exchange rules and conventions, physical system interconnection).]
The classification of integration can also be made in another way, different from that given by CIMOSA. With regard to integration objectives and methods, integration can be classified as information integration, process integration, and enterprise-wide integration. Information integration enables data to be shared between different applications. Transparent data access and data-consistency maintenance under heterogeneous computing environments are the aims of information integration. Information integration needs the support of communication systems, data-representation standards, and data-transfer interfaces. Communication systems provide a data-transfer mechanism and channel between applications located at different computer nodes. Data-representation standards serve as common structures for data used by different applications. Data-transfer interfaces are used to transfer data from one application to another. They fulfill two kinds of functions: data format transfer (from application-specific data structure to common structure and vice versa) and data transfer from application to interface module and vice versa. Traditional information-integration methods include database data integration, file integration, and compound data integration. The most efficient support tool for information integration is the integration platform (Fan and Wu 1997).
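A data-transfer interface of the kind just described can be sketched as a pair of mapping functions: one from an application-specific structure into the common structure, and one from the common structure into another application's format. The field names and record layouts below are assumptions made for illustration, not any standard's actual schema.

```python
# Information-integration sketch: a data-transfer interface maps an
# application-specific record to a shared common structure and back, so
# two applications can exchange data without knowing each other's formats.
# Field names and the "common structure" are illustrative assumptions.

COMMON_FIELDS = ("part_id", "name", "material")

def cad_to_common(cad_record):
    # application-specific structure -> common structure
    return {"part_id": cad_record["id"],
            "name": cad_record["label"],
            "material": cad_record["matl"]}

def common_to_capp(common):
    # common structure -> another application's structure
    return {"PartNo": common["part_id"],
            "Desc": common["name"],
            "Material": common["material"]}

cad = {"id": "P-100", "label": "bracket", "matl": "AISI 1045"}
capp = common_to_capp(cad_to_common(cad))
print(capp)  # {'PartNo': 'P-100', 'Desc': 'bracket', 'Material': 'AISI 1045'}
```

With n applications, mapping through one common structure needs 2n converters rather than the n(n-1) needed for direct pairwise exchange, which is the practical argument for data-representation standards such as STEP.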
Process integration is concerned with the collaboration between different applications in order to fulfill business functions, such as product design and process control. The need to implement process integration arises from companies' pursuit of shorter product design time, higher product quality, shorter delivery time, and higher business process efficiency. Business process reengineering (BPR) (Jacobson 1995) and concurrent engineering (CE) (Prasad 1996) have promoted the research and application of process integration. Business process modeling, business process simulation, and business process execution are three important research topics related to process integration. A number of methods can be used to model business processes: CIMOSA business process modeling, IDEF3 (Mayer et al. 1992), Petri nets (Zhou 1995), event-driven process chains (Keller 1995), and workflow (Georgakopoulos et al. 1995). The modeling objective is to define the activities within a business process and the relationships between these activities. The activity is a basic func...
...tion method is shown in Figure 5. From Figure 5, it can be seen that CIMS consists of four functional subsystems and two support subsystems. The four functional subsystems are management information, CAD / CAPP / CAM, manufacturing automation, and computer-aided quality management. These functional subsystems cover the business processes of a company. The two support subsystems are computer network and database management. They are the basis that allows the functional subsystems to fulfill their tasks. The arcs denote the interfaces between different subsystems. Through these interfaces, shared data are exchanged between the subsystems.
3.2. Components of CIMS
This section briefly describes the components of CIMS.
3.2.1. Management Information System
Management information system (MIS) plays an important role in the company's information system. It manages business processes and information based on market strategy, sales predictions, business decisions, order processing, material supply, finance management, inventory management, human resource management, the company production plan, and so on. The aims of MIS are to shorten delivery time, reduce cost, and help the company make rapid decisions in response to market changes.
[Figure 5 Decomposition of CIMS: market, technical, and sales service information feed four functional subsystems (management information; CAD / CAPP / CAM; manufacturing automation; computer-aided quality management) resting on two support subsystems (database management; computer network).]
Because JIT and MRPII each have advantages as well as limitations in application, the combination of JIT and MRPII systems in the common framework of CIM may produce excellent results in production scheduling and control.
3.2.2. CAD / CAPP / CAM System
CAD / CAPP / CAM stands for computer-aided design / computer-aided process planning / computer-aided manufacturing. The system is sometimes called the design automation system, meaning that CAD / CAPP / CAM is used to raise the level of design automation and provide the means to design high-quality products faster.
3.2.2.1. Computer-Aided Design
CAD is a process that uses computers to assist in the creation, modification, analysis, or optimization of a product design. It involves the integration of computers into design activities by providing a close coupling between the designer and the computer. Typical design activities involving a CAD system are preliminary design, drafting, modeling, and simulation. Such activities may be viewed as CAD application modules interfaced into a controlled network operation under the supervision of a computer. A CAD system consists of three basic components: hardware, which includes the computer and input / output devices; application software; and the operating system software (Figure 7). The operating system software acts as the interface between the hardware and the application software. The CAD system functions can be grouped into three categories: geometric modeling, engineering analysis, and automated drafting. Geometric modeling constructs the graphic images of a part using basic geometric elements such as points, lines, and circles with the support of CAD software. Wire frame is one of the first geometric modeling methods. It uses points, curves, and other basic elements to define objects. Surface modeling, solid modeling, and parametric modeling methods were later introduced in the geometric modeling area. Saxena and Irani (1994) present a detailed discussion of the development of geometric modeling methods. Engineering analysis completes the analysis and evaluation of the product design. A number of computer-based techniques are used to calculate the product's operational, functional, and manufacturing parameters, including finite-element analysis, heat-transfer analysis, static and dynamic analysis, motion analysis, and tolerance analysis. Finite-element analysis is the most important method. It divides an object into a number of small building blocks, called finite elements. Finite-element analysis fulfills the task of carrying out the functional performance analysis of an object. Various methods and packages have been developed to analyze the static and dynamic performance of a product design. The objectives and methods can be found in any comprehensive book on CAD techniques. After the analysis, the product design is optimized according to the analysis results. The last function of the CAD system is automated drafting. The automated drafting function includes 2D and 3D product design drafting, converting a 3D entity model into a 2D representation.
3.2.2.2. Computer-Aided Process Planning
CAPP is responsible for detailed plans for the production of a part or an assembly. It acts as a bridge between design and manufacturing by translating design specifications into manufacturing process details. This operation includes a sequence of steps to be executed according to the instructions in each step and consistent with the controls indicated in the instructions. Closely related to the process-planning function are the functions that determine the cutting conditions and set the time standards. The foundation of CAPP is group technology (GT),
Figure 7 Basic Components of CAD (database / CAD model; application software, graphics utility, and user interface; device driver; input-output devices).
496
TECHNOLOGY
Figure 9 is a STEP-based CAD / CAPP / CAM integration system developed at the State CIMS Engineering Research Center of China (located at Tsinghua University, Beijing). It was developed as a part of the CIMS application integration platform (Fan and Wu 1997) for manufacturing enterprises. This system focuses on part-level CAD / CAPP / CAM integration. The EXPRESS language and the STEP development tool ST-developer are used to define and develop the integration interfaces. Different kinds of CAD, CAPP, and CAM systems can be integrated using the interfaces provided.
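To make the role of EXPRESS-defined interfaces concrete, here is a toy sketch of how an EXPRESS entity can map to an in-memory structure shared by CAD, CAPP, and CAM modules. The entity, field names, and classes are hypothetical; a real integration would use SDAI bindings generated from a STEP schema rather than hand-written classes.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical EXPRESS-style entity for a part feature (illustration only):
#
#   ENTITY hole_feature;
#     diameter : REAL;
#     depth    : REAL;
#     position : LIST [3:3] OF REAL;
#   END_ENTITY;

@dataclass
class HoleFeature:
    """In-memory counterpart of the EXPRESS entity sketched above."""
    diameter: float
    depth: float
    position: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])

@dataclass
class PartModel:
    """A part exchanged between CAD, CAPP, and CAM through a neutral model."""
    part_id: str
    features: List[HoleFeature] = field(default_factory=list)

# A CAPP module can read the neutral model without knowing which CAD
# system produced it:
part = PartModel("P-001", [HoleFeature(diameter=8.0, depth=20.0)])
drilling_ops = [f"drill d={f.diameter} to depth {f.depth}" for f in part.features]
```

The point of the neutral model is that the list comprehension at the end depends only on the shared entity definition, not on any vendor system.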
3.2.3.
Manufacturing Automation System
The manufacturing automation system (MAS) is a value-adding system: the material flow and the information flow come together in the MAS. For a discrete manufacturing company, the MAS consists of a number of manufacturing machines, transportation systems, high-bay stores, control devices, and computers, as well as MAS software; the whole system is controlled and monitored by the MAS software system. For the process industry, the MAS consists of a number of devices controlled by a DCS, the monitoring system, and the control software system. The objectives of the MAS are to increase productivity, reduce cost, reduce work-in-progress, improve product quality, and reduce production time. The MAS can be described from three different aspects: structural description, function description, and process description. The structural description defines the hardware and software systems associated with the production processes. The function description defines the MAS using a number of functions that combine to finish the task of transforming raw material into products; the input-output mapping presented by every function is associated with a production activity of the MAS. The process description defines the MAS using a series of processes covering every activity in the manufacturing process. In the research field of MAS, a very important topic is the study of control methods for manufacturing devices, from NC machines to automatic guided vehicles. But the focus of this chapter is on studying the MAS from the CIM system point of view. We describe the shop-floor control and management system functions and components below. The shop-floor control and management system is a computer software system that is used to manage and control the operations of the MAS. It is generally composed of several modules, as shown in Figure 10. It receives a weekly production plan from the MRPII (ERP) system. It optimizes the sequence of jobs using production planning and scheduling algorithms, assigns jobs to specific devices and manufacturing groups, controls the operation of the material-handling system, and monitors the operations of the manufacturing process. Task planning decomposes the order plan from the MRPII system into daily tasks. It assigns jobs to specific work groups and sets of machines according to the needed operations. Group technology and optimization techniques are used to smooth the production process, better utilize the resources, reduce production setup time, and balance the load across the manufacturing devices. Hence, good task planning is the basis for improving productivity and reducing the cost of production.
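As a sketch of the task-planning step just described, the following fragment groups jobs by GT family and assigns each family to the currently least-loaded machine group. The grouping and greedy balancing heuristics are our own illustrative assumptions, not the chapter's algorithm.

```python
from collections import defaultdict

def plan_tasks(jobs, machines):
    """Greedy task-planning sketch: group jobs by GT family code, then assign
    each family to the least-loaded machine group, balancing load while
    keeping a family together to avoid repeated setups.
    `jobs` is a list of (job_id, family, hours)."""
    families = defaultdict(list)
    for job_id, family, hours in jobs:
        families[family].append((job_id, hours))

    load = {m: 0.0 for m in machines}
    assignment = {}
    # Assign the heaviest families first (a common greedy balancing heuristic).
    for family, members in sorted(families.items(),
                                  key=lambda kv: -sum(h for _, h in kv[1])):
        target = min(load, key=load.get)          # least-loaded machine group
        for job_id, hours in members:
            assignment[job_id] = target
            load[target] += hours
    return assignment, load

jobs = [("J1", "shaft", 4), ("J2", "gear", 3), ("J3", "shaft", 2), ("J4", "gear", 5)]
assignment, load = plan_tasks(jobs, ["MG1", "MG2"])
```

Keeping a family on one machine group is the GT effect noted in the text: it reduces setup changes at the price of a slightly less even load.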
Figure 9 STEP-Based CAD / CAPP / CAM Integration System (application level: application protocols for CAx, CAD system, CAPP / CAM system; EXPRESS pre-compiler; application interface (API, SDAI); model definition dictionary; working form of feature objects).
as ISO 9000, 9001, 9002, 9003, and 9004. The computer-aided quality-management system has also been called the integrated quality system. The computer-aided quality system consists of four components: quality planning, inspection and quality data collection, quality assessment and control, and integrated quality management. The quality-planning system consists of two kinds of functions: computer-aided product-quality planning and inspection-plan generation. According to the historical quality situation and production-technology status, computer-aided product-quality planning first determines the quality aims and assigns responsibility and resources to every step. It then determines the associated procedures, methods, instruction files, and quality-inspection methods and generates a quality handbook. Computer-aided inspection planning determines inspection procedures and standards according to the quality aims, the product model, and the inspection devices. It also generates automatic inspection programs for automatic inspection devices, such as a 3D measuring machine.
Guided by the quality plan, the quality-inspection and quality-data-collection functions collect quality data during the different phases: purchased-material and part-quality inspection, part-production quality data collection, and final-assembly quality inspection. The methods and techniques used in quality inspection and data collection are discussed in specialized books on quality control (Taguchi et al. 1990). Quality assessment and control fulfills the tasks of manufacturing-process quality assessment and control and of supplied-part and supplier quality assessment and control. Integrated quality management includes the functions of quality cost analysis and control, inspection device management, quality index statistics and analysis, quality decision making, tool and fixture management, quality personnel management, storage of feedback information on quality problems, and backtracking of quality problems to manufacturing steps. Quality cost plays an important role in a company's operation. Quality cost analysis determines the cost bearers and the most important cost constituents in order to generate a quality cost plan and calculate real costs. It also optimizes the cost in the effort to solve quality problems. Figure 11 presents a quality cost analysis flowchart.
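The roll-up of cost items per cost bearer described above can be sketched in a few lines. The record layout and the item and bearer names are our illustrative assumptions, loosely following the cost items (inspection, defect, prevention) and cost bearers (product, assembly, part, and so on) named in the text.

```python
def quality_cost_report(records):
    """Roll up quality cost records into cost items per cost bearer.
    `records` is a list of (bearer, cost_item, amount) tuples."""
    report = {}
    for bearer, item, amount in records:
        report.setdefault(bearer, {"inspection": 0.0, "defect": 0.0,
                                   "prevention": 0.0})[item] += amount
    # The dominant cost constituent per bearer guides improvement effort.
    dominant = {b: max(items, key=items.get) for b, items in report.items()}
    return report, dominant

records = [("assembly", "inspection", 120.0),
           ("assembly", "defect", 300.0),
           ("part", "prevention", 80.0)]
report, dominant = quality_cost_report(records)
```

Identifying the dominant constituent per bearer is the "most important cost constituent" step the text mentions; a real system would feed this into the quality cost plan and budget comparison.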
3.2.5.
Computer Network and Database Management Systems
Computer network and database management systems are the supporting systems of CIMS. The computer network consists of a number of computers (called nodes in the network) and network devices, as well as network software. It is used to connect different computers together to enable the communication of data between them. Computer networks can be classified as local area networks (LANs) or wide area networks (WANs). A LAN normally covers a restricted area, such as a building, factory, or campus; a WAN covers a much wider area, across a city or internationally. Network technology is developing rapidly. The Internet concept has changed manu
Figure 11 Quality Cost Analysis Flowchart (cost items: inspection, defect, and prevention costs; cost bearers: product, assembly, component, part, material; outputs: quality cost report, trend forecasts, budget comparison, and cost awareness in process control).
in the current agile manufacturing era. Improving expansion flexibility can significantly reduce system expansion or change cost, shorten system reconfiguration time, and hence shorten the delivery time for new products.
4.1.2.
FMS Definition and Components
An FMS is an automated, mid-volume, mid-variety, central computer-controlled manufacturing system. It can be used to produce a variety of products with virtually no time lost for changeover from one product to the next. Sometimes an FMS is defined as "a set of machines in which parts are automatically transported under computer control from one machine to another for processing" (Jha 1991). A more formal definition is that an FMS consists of a group of programmable production machines integrated with automated material-handling equipment under the direction of a central controller to produce a variety of parts at nonuniform production rates, batch sizes, and quantities (Jha 1991).
From this definition, it can be seen that an FMS is composed of automated machines, material-handling systems, and control systems. In general, the components of an FMS can be classified as follows:
1. Automated manufacturing devices include machining centers with automatic tool-changing capability, measuring machines, and machines for washing parts. They can perform multiple functions according to NC instructions and thus fulfill the parts-fabrication task with great flexibility. In an FMS, the number of automated machining centers is normally at least two.
2. Automated material-handling systems include load / unload stations, high-bay storage, buffers, robots, and material-transfer devices. The material-transfer devices can be automatic guided vehicles, transfer lines, robots, or a combination of these. Automated material-handling systems are used to prepare, store, and transfer materials (raw materials, unfinished parts, and finished parts) between the machining centers, load / unload stations, buffers, and high-bay storage.
3. Automated tool systems are composed of tool setup devices, central tool storage, tool-management systems, and tool-transfer systems. All are used to prepare tools for the machining centers and to transfer tools between the machining centers and the central tool storage.
4. Computer control systems are composed of computers and control software. The control software fulfills the functions of task planning, job scheduling, job monitoring, and machine control for the FMS.
Figure 12 shows the FMS layout at the State CIMS Engineering Research Center (CIMS-ERC) of China. (HMC stands for horizontal machining center and VMC stands for vertical machining center.) Another example of an FMS, from Kingdream, is shown in Figure 13. This system produces oil well drill bits, mining bits, hammer drills, high-pressure drills, and so on.
4.2.
General FMS Considerations
Although FMS was originally developed for metal-cutting applications, its princi
ples are more widely applicable. It now covers a wide spectrum of manufacturing
activities, such as machining, sheet metal working, welding, fabricating, and as
sembly. The research areas involved in the design, implementation, and operation
of an FMS are very broad. Much research has been conducted and extensive result
s obtained. In this section, we present the research topics, problems to be solv
ed, and methods that can be used in solving the problems.
4.2.1.
FMS Design
An FMS is a capital-intensive and complex system. For the best economic benefits, an FMS should be carefully designed. The design decisions to be made regarding FMS implementation cover
Figure 12 FMS Layout at State CIMS-ERC of China (components: tool setup, I/O station, central tool storage, HMC and VMC machining centers, high-bay storage, computer control system, automatic guided vehicle, robot, load/unload station, buffers, washing machine, turning center, clearing machine, in/out elevator, and 3D measuring machine).
Figure 14 presents a function model for FMS planning, scheduling, and resource management. The resource-management and real-time control functions of the FMS are closely related to the dynamic scheduling system: the resource-management system is activated by the dynamic scheduling system to allocate resources to the production process and so achieve real-time control of the FMS. The resources to be controlled include tools, automatic guided vehicles, pallets and fixtures, NC files, and human resources.
4.2.2.1. Planning Planning seeks to find the best production plan for the parts entering the FMS. Its aim is to make an optimized shift production plan according to the shop order and part due dates. The FMS planning system receives the weekly shop-order plan from the MRPII system. According to the product due dates, it analyzes the shop order and generates a daily or shift production plan. Group technology is used to group parts into part families. The capacity requirement is calculated for every shift plan generated; capacity balancing and adjustment should be carried out if the required capacity is higher than that provided by the machines. After feasibility analysis, capacity balancing, and optimization, a shift plan is generated. The shift plan gives detailed answers to the following questions:
1. What kinds of parts will be machined?
2. In what sequence will the parts enter the FMS?
3. What operations are needed to process the parts, and in what sequence?
4. What are the start and completion times for the processed parts?
5. What materials are needed, and when?
6. What kinds of tools are needed?
4.2.2.2. Static Scheduling Static scheduling is the refinement of the shift production plan. It seeks to optimize machine utilization and reduce system setup time. Three functions are performed by a static scheduling system: part grouping, workload allocation and balancing, and static part sequencing. Because all these functions are performed before production starts, static scheduling is also called off-line sequencing. A number of factors affecting the production sequence should be taken into account in static scheduling, such as the part process properties, the FMS structure, and the optimization index. The part process properties determine what kind of production method should be used: flow shop, flexible flow line, and job shop are three major strategies for producing parts, and different methods can be used to generate static schedules for the different production strategies. The second factor affecting static scheduling is the FMS structure; the main structural properties are whether a central tool system, a fixture system, or bottleneck devices are present. The third factor is the optimization index chosen. The general optimization index is a combination of several optimization indexes; that is, FMS static scheduling is a multiobjective optimization process. The following parameters have an important effect on implementing optimal static scheduling.
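For the flow-shop strategy named above, one classical off-line sequencing method is Johnson's rule, which gives a makespan-minimizing static sequence for a two-machine flow shop. The sketch below is a standard textbook construction, not the chapter's own method, and the job data are invented for illustration:

```python
def johnsons_rule(jobs):
    """Johnson's rule for a two-machine flow shop.
    `jobs` maps job id -> (time on machine 1, time on machine 2).
    Jobs are taken in order of their smallest processing time: a job whose
    shorter time is on machine 1 goes as early as possible, a job whose
    shorter time is on machine 2 goes as late as possible."""
    front, back = [], []
    for job, (t1, t2) in sorted(jobs.items(), key=lambda kv: min(kv[1])):
        if t1 <= t2:
            front.append(job)    # short first-machine time: schedule early
        else:
            back.insert(0, job)  # short second-machine time: schedule late
    return front + back

jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2), "D": (7, 5)}
sequence = johnsons_rule(jobs)   # -> ["C", "A", "D", "B"]
```

Real FMS static scheduling combines several objectives, as the text notes; Johnson's rule optimizes only makespan and only for the two-machine flow-shop case.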
Figure 14 Function Model for FMS Planning, Scheduling, and Resource Management (inputs: shop order plan, part due dates, part processing times, machine capacity, operations and processes, system status; functions: planning, static scheduling with part grouping and sequencing, dynamic scheduling, and resource management with real-time system states; objectives: minimize system setup time, increase key machine utilization, increase system resource utilization).
Methods for DEDS modeling and analysis can be used to model an FMS, such as Petri nets, networks of queues (Agrawal 1985), and activity cycle diagrams (Carrie 1988). This section briefly introduces Petri nets, their application in FMS modeling, and the FMS simulation method.
Figure 15 A Simple Petri Net Example (places: p1 components available, p2 robot ready, p3 robot holding a component; transitions: t1 picking up, t2 placing). (From M. C. Zhou, Ed., Petri Nets in Flexible and Agile Automation, Figure 1, copyright 1995, with kind permission from Kluwer Academic Publishers)
4.2.3.1. Petri Nets and Their Application in FMS Modeling A Petri net (PN) may be identified as a particular kind of bipartite directed graph populated by three types of objects: places, transitions, and directed arcs connecting places to transitions and transitions to places. Pictorially, places are depicted by circles and transitions by bars or boxes. A place is an input place of a transition if a directed arc connects the place to the transition; a place is an output place of a transition if a directed arc connects the transition to the place. Figure 15 represents a simple PN in which places p1 and p2 are input places of transition t1 and place p3 is the output place of t1. Formally, a PN can be defined as a five-tuple PN = (P, T, I, O, m0), where:
1. P = {p1, p2, . . . , pn} is a finite set of places.
2. T = {t1, t2, . . . , tm} is a finite set of transitions, with P ∪ T ≠ ∅ and P ∩ T = ∅.
3. I: (P × T) → N is an input function that defines the directed arcs from places to transitions, where N is the set of nonnegative integers.
4. O: (P × T) → N is an output function that defines the directed arcs from transitions to places.
5. m0: P → N is the initial marking.
The state of the modeled system is represented by the tokens (small dots within the places) in every place. For example, in Figure 15, a small dot in place p1 means that components are available. The change of states represents the system's evolution. A state change is brought about by firing a transition: when a transition fires, a token is removed from each of its input places and a token is added to each of its output places. In the example of Figure 15, firing transition t1 causes the tokens in places p1 and p2 to disappear and a token to be added to place p3. Because of its formal theoretical background, natural link with DEDS, and mature simulation tools, the PN is well suited to FMS modeling. Figure 16 shows a two-machine production line to demonstrate the modeling of FMS using PNs.
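The token-game semantics just described can be written out directly. The sketch below is our own encoding, not from the chapter: the input function I and output function O become dictionaries of arc weights, and firing follows the remove-then-add rule above, using the net of Figure 15.

```python
def enabled(marking, pre, t):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking[p] >= w for p, w in pre[t].items())

def fire(marking, pre, post, t):
    """Fire t: remove tokens from its input places, add tokens to its
    output places, and return the new marking."""
    if not enabled(marking, pre, t):
        raise ValueError(f"{t} is not enabled")
    m = dict(marking)
    for p, w in pre[t].items():
        m[p] -= w
    for p, w in post[t].items():
        m[p] += w
    return m

# The net of Figure 15: p1 = components available, p2 = robot ready,
# p3 = robot holding a component; t1 = picking up, t2 = placing.
pre  = {"t1": {"p1": 1, "p2": 1}, "t2": {"p3": 1}}
post = {"t1": {"p3": 1},          "t2": {"p2": 1}}
m0   = {"p1": 1, "p2": 1, "p3": 0}

m1 = fire(m0, pre, post, "t1")   # picking up: tokens leave p1 and p2, one enters p3
```

After firing t1, only t2 (placing) is enabled, which matches the pick-then-place cycle of the example.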
Figure 16 A Two-Machine Production Line (parts travel on conveyors and pallets; Robot 1 serves Machine 1 and Robot 2 serves Machine 2, yielding final products).
Figure 17 Petri Net Model for the Two-Machine Production Line (places include parts ready, machine processing states, and R1/R2 available; transitions include R1-load, R1-unload, R2-load, R2-unload, and M2-process). (From M. C. Zhou, Ed., Petri Nets in Flexible and Agile Automation, Figure 8, with kind permission of Kluwer Academic Publishers)
real-world system. This concept is quite close to automated simulation, and it offers the greatest ease of use. The first such program for FMS, the general computerized manufacturing system (GCMS) simulator, was developed at Purdue (Talavage and Lenz 1977). The third approach to FMS modeling uses a base programming language, such as SIMULA or SIMSCRIPT, which provides more model-specific constructs for building a simulation model. This approach thus has much stronger modeling capability. Unfortunately, it is not widely used; one reason may be that few people know these languages well enough to use them. Another method for DEDS simulation, the activity cycle diagram (ACD), can also be used in FMS simulation. This is a diagram used to define the logic of a simulation model; it is equivalent to a flowchart in a general-purpose computer program. The ACD shows the cycle of every entity in the model. The conventions for drawing ACDs are as follows:
1. Each type of entity has an activity cycle.
2. The cycle consists of activities and queues.
3. Activities and queues alternate in the cycle.
4. The cycle is closed.
5. Activities are depicted by rectangles and queues by circles or ellipses.
Figure 18 presents an ACD for a machine shop. Jobs arrive from the outside environment and wait in a queue for the machine. As soon as the machine is available, a job goes to the machine for processing. Once processing is over, the job joins another queue, waiting to be dispatched. Because ACDs give a better understanding of the FMS to be simulated, they are widely used for FMS simulation.
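The machine-shop cycle of Figure 18 can be animated with a few lines of simulation code. This sketch is our own, with invented arrival and service parameters; it follows each job through arrival, waiting in queue, processing on the single machine, and dispatch.

```python
import random

def simulate_machine_shop(n_jobs, mean_arrival, service_time, seed=1):
    """Minimal discrete-event sketch of the Figure 18 cycle: jobs arrive,
    wait in queue if the machine is busy, are processed one at a time,
    then leave for dispatch.  Returns (jobs finished, mean time in system)."""
    random.seed(seed)
    t, free_at, done, total_time = 0.0, 0.0, 0, 0.0
    for _ in range(n_jobs):
        t += random.expovariate(1.0 / mean_arrival)   # next job arrives
        start = max(t, free_at)                       # wait in queue if machine busy
        free_at = start + service_time                # machine processes the job
        total_time += free_at - t                     # arrival to dispatch
        done += 1
    return done, total_time / done

done, mean_flow = simulate_machine_shop(100, mean_arrival=2.0, service_time=1.5)
```

Even this toy model exhibits the queueing behavior the ACD captures: mean time in system always exceeds the bare service time once jobs start arriving while the machine is busy.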
4.3.
Benefits and Limitations of FMS
FMS offers manufacturers more than just a flexible manufacturing system. It offers a concept for improving productivity in mid-variety, mid-volume production situations: an entire strategy for changing company operations, ranging from internal purchasing and ordering procedures to distribution and marketing. The benefits of FMS can be summarized as follows:
1. Improved manufacturing system flexibility
2. Improved product quality and increased equipment utilization
3. Reduced equipment cost, work-in-progress, labor cost, and floor space
4. Shortened lead times and improved market response speed
5. Financial benefits from the above
Figure 18 Activity Cycle Diagram (job cycle: arrival from the environment, waiting in queue, machine, dispatch; machine cycle: idle and busy; arcs represent movements of parts and changes in the state of the machine). (From A. Carrie, Simulation of Manufacturing Systems, copyright 1988 John Wiley & Sons Ltd. Reproduced with permission)
transforming the business process, the manufacturing process, and the product-development process into a process view model. The process model is the basis for business process simulation, optimization, and reengineering.
5.1.1.1. Modeling Method for Process View Process view modeling focuses mainly on how to organize internal activities into a proper business process according to the enterprise goals and system restrictions. Traditional function-decomposition-oriented modeling methods such as SADT and IDEF0, which set up the process based on activities (functions), can be used in business process modeling. The business description languages of WfMC (Workflow Management Coalition 1994), IDEF3, and CIMOSA are process-oriented modeling methods. Another modeling method uses object-oriented technology, in which a business process is understood as a set of coordinated request / service operations among a group of objects. Jacobson (1995) presents a method for using object-oriented technology, the use-case method, to reengineer the business process. Object-oriented methods offer intrinsic benefits: they can greatly improve systemic extendibility and adaptability, their services based on an object-operated mode can assign systemic responsibility easily, existing business processes can be reused easily, and distribution and autonomy properties can be described easily. The main objective of the process view modeling method is to provide a set of modeling languages that can depict the business process completely and effectively. To depict a business process, the language should be able to describe the control structures of processes, such as sequence, branching, join, condition, and loop, to establish a formal description of the business process. Generally accepted modeling languages today are IDEF3, the CIMOSA business process description language, and the WfMC workflow description language. Some business process description methods originating in the concepts and models of traditional project-management tools, such as the PERT chart and other kinds of network chart, are generally
adopted in practical application systems because they can be easily extended from existing project-management tool software. If the business process is relatively complex, for example containing concurrent or conflicting activities, a more formal description, such as a Petri net, should be used. Figure 19 is a workflow model of a machine tool-handle manufacturing process. The process model is designed using the CIMFlow tool (Luo and Fan 1999). In Figure 19(a), the main sequential process is described; each icon stands for a subprocess activity. Figure 19(b) is the decomposition of the subprocess activity Rough Machining in Figure 19(a). After the Turning activity is executed, two conditional arcs split the activity route into two branches. The Checking activity is a decision-making task that is in charge of the product quality or the progress of the whole process.
5.1.2.
Function View
The function view is used to describe the functions of a company and their relationships. These functions fulfill the objectives of the company, such as sales, order planning, product design, part manufacturing, and human resource management. The efficient and effective operation of these functions contributes to the company's success in competing in the market.
5.1.2.1. Modeling Method for Function View Function view modeling normally uses the top-down structural decomposition method. The function tree is the simplest modeling method, but it lacks the links between different functions, especially the data flow and control flow, so it is generally used only to depict simple function relationships. In order to reflect data-flow and control-flow relationships between different functions, the SADT and IDEF0 (Colquhoun and Baines 1991) methods are used to model the function view. The IDEF0 formalism is based on SADT, developed by Ross (1985). The IDEF0 model has two basic elements: activity and arrow. Figure 20 gives the basic graphic symbols used in the IDEF0 method. IDEF0 supports hierarchical modeling, so every activity can be further decomposed into a network of activities. In order to organize the whole model clearly, it is advised that the number of child activities decomposed from a parent activity be less than 7 and greater than 2.
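The 2-7 decomposition rule lends itself to a mechanical check over an activity hierarchy. A small sketch (the tree encoding and activity names are our own illustration):

```python
def check_decomposition(tree, parent="A0"):
    """Recursively check the rule above: every decomposed IDEF0 activity
    should have more than 2 and fewer than 7 children.
    `tree` maps an activity name to the list of its child activities;
    activities absent from the map (or with no children) are leaves."""
    violations = []
    children = tree.get(parent, [])
    if children and not (2 < len(children) < 7):
        violations.append(parent)
    for child in children:
        violations += check_decomposition(tree, child)
    return violations

# A0 decomposes cleanly into three activities, but A1 has only two children.
tree = {"A0": ["A1", "A2", "A3"], "A1": ["A11", "A12"]}
violations = check_decomposition(tree)   # -> ["A1"]
```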
Figure 21 gives a demonstration IDEF0 model (A0 model) of the control shop-floor operation function. In the CIMOSA model, the overall enterprise functions are represented as an event-driven network of domain processes (DPs). An individual domain process is represented as a network of activities: the domain process is composed of a set of business processes (BPs) and enterprise activities (EAs), a BP is composed of a set of EAs or other BPs, and an EA is composed of a set of functional operations (FOs). The BP has a behavior property that defines the evolution of the enterprise states over time in reaction to enterprise event generation or to conditions external or internal to the enterprise; it is defined by means of a set of rules called procedural rules. The structure property of a BP describes the functional decomposition of the enterprise functions. This can be achieved by means of a pair of pointers attached to each enterprise function. An EA has an activity behavior that defines
Figure 19 Process Model for Tool-Handle Manufacturing: (a) main process; (b) decomposition of the Rough Machining subprocess.
for the relational database management system. The currently used IDEF1X method is an extension of the entity-relationship model proposed by Chen (1976). Three models (a conceptual model, a logical model, and a physical model) are used in the phases of designing and implementing an information system. Vernadat (1996) gives a good introduction to information modeling in the context of enterprise modeling.
5.1.4.
Organization View
The organization view is used to define and represent the organization model of a company. The defined model includes the organization tree, teams, faculty, roles, and authority. It also creates an organization matrix. In the organization view, the relationships between different organization entities are defined. It provides support for the execution of the company's functions and processes. The hierarchical relationships between the organization units form the organization tree, which describes the static organization structure. The team describes the dynamic structure of the company; it is formed according to the requirements of business processes, with personnel and organization units as its constituents. Figure 22 shows the organization view structure. The basic elements of the organization view are the organization unit, team, and personnel. The attributes of the organization unit include organization unit name, position, description, role list, leader, and the organization's associated activities and resources. The leader and subordinate relationships between different units are also defined in the organization unit. In defining a team, the attributes needed are team name, description, project or process ID, associated personnel, and resources.
5.1.5.
Resource View
The resource view is similar to the organization view. It describes the resources used by the processes to fulfill the company's business functions. Three main objects are defined in the resource view model: the resource type object, the resource pool object, and the resource entity object. The resource type object describes the company's resources according to the resource classification; a resource type object inherits the attributes of its parent object, and a resource classification tree is created to describe the company's resources. The resource pool object describes the resources in a certain area; all the resources located in that area form a resource pool. The resource entity object defines atomic resources. An atomic resource is a resource that cannot be decomposed further, that is, the smallest resource entity. Figure 23 gives the resource classification tree structure. In this tree, a parent node resource consists of all its child node resources; the tree depicts the static structure of the company's resources. The resource-activity matrix (Table 1) defines the relationships between resources and process activities; every cross (×) means that the resource is used by the activity. The resource-activity matrix presents the dynamic structure of the resources.
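The classification tree and the resource-activity matrix can be sketched as simple structures. The encoding and the resource and activity names below are our own illustration:

```python
def expand(tree, node):
    """Atomic resources under a node of the resource classification tree:
    a parent node consists of all its child-node resources, so a leaf is
    a resource entity (atomic resource)."""
    children = tree.get(node, [])
    if not children:
        return [node]
    atoms = []
    for c in children:
        atoms += expand(tree, c)
    return atoms

tree = {"machine": ["HMC", "VMC"], "transport": ["AGV"]}

# Resource-activity matrix stored sparsely as a set of (resource, activity)
# crosses, mirroring the x marks of Table 1.
matrix = {("HMC", "milling"), ("VMC", "milling"), ("AGV", "transfer")}

def resources_for(activity):
    """All resources crossed against a given activity (dynamic structure)."""
    return sorted(r for r, a in matrix if a == activity)

mills = resources_for("milling")   # -> ["HMC", "VMC"]
```

The tree answers the static question (what resources exist, grouped by type), while the matrix answers the dynamic one (which resources each activity uses).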
5.2.
Enterprise Modeling Methods
As pointed out in the previous section, there are many enterprise modeling metho
ds. Here we only give a brief introduction to the CIMOSA, ARIS, and GIM methods.
5.2.1.
CIMOSA
CIMOSA supports all phases of a CIM system life cycle, from requirements specification through system design, implementation, operation, and maintenance, even to system migration towards a
Figure 22 Organization View Structure (static structure: the organization tree; dynamic structure: teams and personnel).
Table 1 Resource-Activity Matrix (rows: resources 1 to m; columns: activities 1 to n).
Figure 24 CIMOSA Modeling Framework (the CIMOSA cube: generic building blocks, partial models, and particular models along the instantiation axis; requirements definition (RD), design specification (DS), and implementation description (ID) modeling levels along the derivation axis; function (FV), information (IV), resource (RV), and organization (OV) views along the generation axis; the generic and partial columns form the reference architecture, the particular column the particular architecture). (From Esprit Consortium AMICE 1993. Reprinted by permission of Springer-Verlag.)
The instantiation process is a design principle that suggests (1) going from a generic type to a particular type (types are refined into subtypes down to particular instances) and (2) reusing previous solutions (i.e., particular models or previously defined models) as much as possible. This process applies to all four views. It advocates going from left to right across the CIMOSA cube. The derivation process is a design principle that forces analysts to adopt a structured approach to system design and implementation, from requirements specification through design specification and finally to full implementation description. This process also applies to all four views. It advocates going from the top to the bottom of the CIMOSA cube. The generation process is a design principle that encourages users to think about the entire enterprise in terms of the function, information, resource, and organization views, in that order. However, the complete definition of the four views at all modeling levels usually requires going back and forth along this axis of the CIMOSA cube.
5.2.2.
ARIS
The ARIS (architecture of integrated information systems) approach, proposed by Scheer in 1990, describes an information system for supporting the business process. The ARIS architecture consists of the data view, function view, organization view, and control view. The data, function, and organization views are constructed by extracting information from the process chain model in a relatively independent way. The relationships between the components are recorded in the control view, which is the essential and distinguishing component of ARIS. Information technology components such as computers and databases are described in the resource view, but the life-cycle model replaces the resource view as the independent descriptive object. The life-cycle model of ARIS is divided into three levels. The requirements definition level describes the business application using a formalized language. The design specification level transfers the conceptual environment of the requirements definition to data processing. Finally, the implementation description level establishes the physical link to the information technology. The ARIS architecture is shown in Figure 25. The ARIS approach is supported by a set of standard software tools, such as ARIS Easy Design, the ARIS Toolset, and ARIS for R / 3, which greatly help the implementation of ARIS.
5.2.3.
GIM
GIM (GRAI integrated methodology) is rooted in the GRAI conceptual model shown i
n Figure 26. In this method, an enterprise is modeled by four systems: a physica
l system, an operating system, an information system, and a decision system.
514
TECHNOLOGY
The GRAI method makes use of two basic modeling tools: the GRAI grid and the GRAI net. The GRAI grid is used to perform a top-down analysis of the domain of the enterprise to be analyzed. It is made of a 2D matrix in which columns represent functions and rows represent decision levels. A decision level is defined by a horizon H and a period P. Long-term planning horizons are at the top and short-term levels are at the bottom of the grid. Each cell in the matrix defines a decision center. The grid is then used to analyze relationships among decision centers in terms of flows of information and flows of decisions. GRAI nets are used to further analyze decision centers in terms of their activities, resources, and input-output objects. With this method, a bottom-up analysis of the manufacturing systems studied can be made to validate the top-down analysis. In practice, several passes in both directions are necessary to converge to a final model accepted by all concerned parties. GRAI and GIM are supported by a structured methodology. The goal is to provide specifications for building a new manufacturing system from the organization, information technology, and manufacturing technology viewpoints. The methodology includes four phases: initialization, analysis, design, and implementation.
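A GRAI grid of the kind described can be sketched as a small data structure; the functions, horizons, and periods below are invented for illustration:

```python
# Toy GRAI grid: columns are functions, rows are decision levels, each level
# defined by a horizon H and a period P. All names and values are invented.
DECISION_LEVELS = [  # longest horizon at the top, shortest at the bottom
    {"H": "1 year", "P": "3 months"},
    {"H": "3 months", "P": "1 month"},
    {"H": "1 week", "P": "1 day"},
]
FUNCTIONS = ["to plan", "to manage products", "to manage resources"]

# Each cell of the 2D matrix is a decision center.
grid = {(i, f): {"level": DECISION_LEVELS[i], "function": f, "decision_links": []}
        for i in range(len(DECISION_LEVELS)) for f in FUNCTIONS}

def add_decision_flow(grid, src, dst):
    """Record a flow of decisions from one decision center to another."""
    grid[src]["decision_links"].append(dst)

# Top-down analysis: a decision frame flows from the long-term planning
# level down to the next level of the same function.
add_decision_flow(grid, (0, "to plan"), (1, "to plan"))
```

Analyzing the grid then amounts to inspecting the recorded decision and information flows between cells.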
6.
CIM IMPLEMENTATION
CIM implementation is a very important but also very complex process. It requires the participation of many people from different disciplines. Benefits can be gained from successful implementation, but loss of investment can be caused by inadequate implementation. Therefore, much attention should be paid to CIM implementation.
6.1.
General Steps for CIM Implementation
The general life-cycle model discussed under CIM architecture and modeling methodology is the overall theoretical background for CIM implementation. In practical applications, due to the complexity of CIM implementation, several phases are generally followed in order to derive the best effect and economic benefits from CIM implementation. The phases are feasibility study, overall system design, detailed system design, implementation, operation, and maintenance. Each phase has its own goals and can be divided into several steps.
6.1.1.
Feasibility Study
The major tasks of the feasibility study are to understand the strategic objectives, figure out the internal and external environment, define the overall goals and major functions of a CIM system, and analyze the feasibility of CIM implementation in terms of technical, economic, and social factors. The aim of this phase is to produce a feasibility study report that will include, besides the above, an investment plan, a development plan, and a cost-benefit analysis. An organization adjustment proposal should also be suggested. A supervisory committee will evaluate the feasibility study report. Once approved, it lays the foundation for the following phases of CIM implementation. Figure 27 presents the working steps for the feasibility study.
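The cost-benefit analysis in such a report typically reduces to discounted-cash-flow arithmetic. The sketch below, with invented figures and discount rate, shows the kind of net-present-value and payback calculation involved:

```python
# Minimal cost-benefit sketch for a CIM feasibility study.
# All cash flows and the discount rate are invented for illustration.

def npv(rate, cash_flows):
    """Net present value; cash_flows[0] is the year-0 investment (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def payback_years(cash_flows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # investment not recovered within the horizon

# Year 0: CIM investment; years 1-4: projected net returns (illustrative).
flows = [-1000.0, 300.0, 400.0, 400.0, 300.0]
```

With these numbers, `npv(0.10, flows)` is positive and the simple payback occurs in year 3, which a supervisory committee could weigh against the investment plan.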
6.1.2.
6.1.4.
Implementation and Operation
The implementation phase follows a bottom-up approach. Subsystems are implemented in order according to the implementation schedule. When subsystem implementation is finished, integration interfaces between the subsystems are developed, and several higher-level subsystems are formed through integration of low-level subsystems. Finally, the whole CIM system is implemented through an overall integration. After the system is built and tested, it enters an experimental operation period, which will last for three to six months. During that period, errors that occur in operation are recorded and system modifications are carried out. The CIM system is then turned to practical use. In the implementation and operation phase, the following steps are generally followed:
1. Building the computer supporting environment: including computer network, computer room, network and database servers, UPS, air conditioning, and fire-proof system
2. Building the manufacturing environment: including whole-system layout setup, installation of new manufacturing devices, and reconfiguration of old manufacturing facilities
3. Application system development: including new commercial software installation, new application system development, and old software system modification
4. Subsystem integration: including interface development, subsystem integration, and system operation tests
5. CIM system integration: including integration and testing of the whole CIM system
6. Software documentation: including user manual writing, operation rule definition, and setting up of system security and data backup strategies
7. Organization adjustment: including adjustment of the business process operation mode, organization structure, and operation responsibilities
8. Training: including personnel training at different levels, from top managers to machine operators
9. System operation and maintenance: including daily operation of the CIM system, recording of errors occurring in operation, application system modification, and recording of new requirements for future development
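The bottom-up ordering in steps 4 and 5 amounts to integrating every subsystem before the higher-level system formed from it. Under that reading, the integration order can be computed as a topological sort over the dependency relation (subsystem names below are invented):

```python
# Bottom-up integration order as a topological sort. Each key lists the
# subsystems it is integrated from; names are invented for illustration.
from graphlib import TopologicalSorter

deps = {
    "CIM system": ["MIS", "CAD/CAM", "shop-floor control"],
    "MIS": [],
    "CAD/CAM": [],
    "shop-floor control": ["cell controller"],
    "cell controller": [],
}

# static_order yields predecessors first: low-level subsystems come early,
# and the whole CIM system is integrated last.
order = list(TopologicalSorter(deps).static_order())
```

Any schedule consistent with this order respects the rule that interfaces and low-level subsystems are tested before the higher-level integrations that depend on them.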
6.2.
Integration Platform Technology
6.2.1.
Requirements for Integration Platform
The complexity of manufacturing systems and the lack of effective integration mechanisms are the main difficulties in CIMS implementation. Problems include lack of openness and flexibility, inconvenient and inefficient interaction between applications, difficulty in integrating legacy information systems, the long time required for CIMS implementation, and inconsistency of user interfaces. To meet the requirements enumerated above, the integration platform (IP) concept has been proposed. An IP is a complete set of support tools for rapid application system development and application integration, intended to reduce the complexity of CIMS implementation and improve integration efficiency. By providing common services for application interaction and data access, the IP fills the gaps between different kinds of hardware platforms, operating systems, and data storage mechanisms. It also provides a unified integration interface that enables quick and efficient integration of different applications in various computing environments.
6.2.2.
The Evolution of Integration Platform Technology
IP technology has evolved through a number of stages. It was initially considered an application programming support platform that provided a common set of services for application integration through an API. A typical structure of the early IPs is the system enabler / application enabler architecture proposed by IBM, shown in Figure 28. Under such a structure, the IP provides a common, low-level set of services for communication and data transfer (the system enabler) and also provides application-domain-specific enabling services (the application enabler) for the development of application systems. Thus, the application developer need not start from coding with the operating system's primitive services. One disadvantage of the early IP products was that they supported only one or a limited number of hardware platforms and operating systems, so the problem of heterogeneous and distributed computation was not addressed. Also, the released products often covered only a specific domain in the enterprise, such as shop-floor control. These early IPs focused mainly on support for the development of application software, and their support for application integration was rather weak. Since the 1990s, IPs have been developed for use in heterogeneous and distributed environments. An example is shown in Figure 29, where the architecture is divided into several layers: the communication layer, the information management service layer, and the function service layer.
Figure 29 A Multilayer IP Structure.

6.2.3.
MACIP System Architecture
MACIP (CIMS Application Integration Platform for Manufacturing Enterprises) is a Chinese national high-technology R&D key technology research project. The MACIP project is designed to develop a research prototype of an application platform oriented to the new IP technology described above. The MACIP system architecture is presented in Figure 30. It is a client-server structured, object-oriented platform with a high degree of flexibility. MACIP consists of two layers, the system enabling level and the application enabling level. The system enabling level is composed of two functions, the communication system and the global information system (GIS). The primary function of these components is to allow for the integration of applications in a heterogeneous and distributed computing environment. The communication system provides a set of services that allow transparent communication between applications. The global information system gives applications a common means of accessing data sources in a variety of databases and file systems. These functions are implemented in the form of an application-independent API (AI API). Application independence means that these functions are not designed for specific applications but are general services for communication, data access, and file management. Hence, the system enabling level provides the basic integration mechanisms for information and application integration. The application enabling level, which utilizes the functions contained within the system enabling level, is composed of three domain sub-integration platforms (SIPs): the MIS SIP, the CAD/CAM/CAPP SIP, and the shop-floor control SIP. Each SIP is designed according to the requirements of a domain application and provides functions for applications in the form of an application-dependent API (AD API). The AD API functions are designed specifically to enable the quick and easy development of domain-specific applications.

Figure 30 System Architecture of MACIP.

These functions enable the complete integration of the application. Application development tools (APD tools) are developed using the AD API. Users can also develop applications using the functions provided by the AD API. Existing applications are integrated by modifying the data exchange interface using AD API functions. An Internet interface is also included in the application enabling level and provides access to MACIP through appropriate Internet technologies. An operation management system was also designed that uses AI API functions to provide an application management API (AM API) for users. Users use the AM API to develop management applications that manage the IP resources and coordinate the operation of different applications. The development of MACIP was finished in early 1999. It has since been used in several companies to support the rapid implementation of CIMS.
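The two-level structure described above can be caricatured in a few lines of code. Every class and method name below is invented for illustration and does not reflect MACIP's actual interfaces:

```python
# Hypothetical sketch of a two-level integration platform: a generic,
# application-independent API (AI API) wrapped by a domain-specific,
# application-dependent API (AD API). Names are invented, not MACIP's.

class AIAPI:
    """System enabling level: generic communication and data-access services."""
    def __init__(self):
        self._store = {}           # stand-in for the global information system
    def send(self, destination, message):
        return f"sent to {destination}: {message}"
    def put(self, key, value):     # uniform data access across data sources
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class ShopFloorADAPI:
    """Application enabling level: a shop-floor SIP built on the AI API."""
    def __init__(self, ai):
        self._ai = ai
    def release_work_order(self, order_id, routing):
        # Domain-specific operation expressed via generic AI API services.
        self._ai.put(("order", order_id), routing)
        return self._ai.send("shop-floor", f"release {order_id}")
    def order_routing(self, order_id):
        return self._ai.get(("order", order_id))
```

The design point is that the AD API never touches the operating system or network directly; it composes AI API services, so applications stay portable across heterogeneous environments.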
7.
CIMS IN PROCESS INDUSTRY
7.1.
Introduction
Process industry, by which we refer to continuous or semicontinuous production industries, principally includes the petroleum, electric power, metallurgical, chemical, paper, ceramic, glass, and pharmaceutical industries. Process industry is a highly complicated kind of industrial system that involves not only biochemical, physical, and chemical reactions but also the transmission and transformation of matter and energy. Most process industries are subject to the interlocked relations of enterprise decision making, business marketing, schedule planning, material supply, inventory and transportation, and product R&D, in addition to the characteristics of wide-scope continuity, uncertainty, high nonlinearity, and strong coupling. All these factors are responsible for the unusual difficulty of comprehensive management, scheduling, optimization, and control in process industry enterprises. Therefore, these problems cannot be solved by relying on either control and optimization theory, which is based on accurate mathematical models and exact analytical mathematical methods, or automation techniques alone (Ashayberi and Selen 1996). The CIMS technique is one possible approach to the complex, comprehensive automation of process industry.
7.1.1.
Definitions
Process industry: Those industries in which the value of raw materials is increased by means of mixing and separating, molding, or chemical reaction. Production can be a continuous or batch process. The characteristics of process industry must be considered when CIMS is applied to those industries.
which the concepts are very clear. The PERA framework is very suitable for the definition of every phase in the CIMS life cycle, since it considers every human factor that will affect the enterprise integration.
7.2.1.
Architecture Structure Model
The architecture structure model of CIMS in process industry has four phases (Aguiar and Weston 1995): the strategic planning, requirement analysis, and definition phase; the conceptual design phase; the detailed design and implementation phase; and the operation and maintenance phase, as shown in Figure 31. Together they reflect all aspects of the process of building CIMS. The strategic planning and requirement definition phase relates to senior management. The models in this phase manipulate information with reference to enterprise models and external factors to assess the enterprise's behavior, objectives, and strategy from multiple views and domains so as to support decision making. The conceptual design phase is the domain of system analysis. According to the scope defined in the previous phase, the models in this phase give a detailed description of a system in a formalized system modeling technology. A solution will be found that satisfies the performance demands and specifies what to integrate and how. In general, the solution is expressed in the form of functions. Detailed design and implementation are carried out in the system design phase. In this phase, the physical solutions, including all subsystems and components, should be specified. The models given in this phase are the most detailed. The models in the operation and maintenance phase embody the characteristics of the system in operation. These models, which define all active entities and their interactions, encapsulate many activities in enterprise operation. The reference models depicted in Figure 31 consist of run-time models, resource models, integration models, system models, and business models, which correspond to the descriptions of the AS-IS system and the TO-BE system in the process of designing CIMS in process industry. Their relationships are abstracted step by step from the bottom up, opposite to the process of building CIMS.
Run-time models encapsulate the information related to system operation, such as dynamic mathematical models of production, input-output models, and order management models.
Resource models contain the information relating resources to the satisfaction of demands. These models also include the resource information that designers add for special functions.
Figure 31 Architecture Structure Model. (From Aguiar and Weston 1995.)
uling system level, it formulates process tactics and conducts the actions at the direct control system level, including operation optimization, advanced control, fault diagnosis, and process simulation.
Production scheduling system level: At this level, the production load is determined and the production plan is decomposed into five-day rolling work plans for every month, according to the information from the decision-making system and the material-stream and energy-stream data. By optimizing scheduling, allocating energy, and coordinating operations in every workshop, production becomes balanced, stable, and highly efficient.
Management information system level: The system at this level accomplishes the MIS function for the whole enterprise and carries out integrated management of production and business information. According to the instructions from the decision-making system, it makes logical decisions. It is in charge of day-to-day management, including business management and production management.
Enterprise decision-making system level: The system at this level produces decisions supporting enterprise business, product strategy, long-term objectives, and development planning, and determines the strategy of production and business. Within the company group, it aims at integrated optimization across the whole enterprise so as to yield the maximum benefit.
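The planning decomposition performed at the production scheduling level can be illustrated with a trivial sketch. The even split and the figures below are assumptions for the example; a real decomposition would respect capacity, demand, and energy data:

```python
# Toy decomposition of a monthly production target into five-day rolling
# work plans, as described for the production scheduling system level.
# The even split and all numbers are invented for illustration.

def five_day_plans(monthly_target, days_in_month=30):
    """Split a monthly production target evenly over five-day periods."""
    periods = days_in_month // 5
    per_period = monthly_target / periods
    return [per_period] * periods

# e.g. 6000 t of product planned for a 30-day month -> six 5-day plans
plans = five_day_plans(6000.0)
```

Each five-day plan would then be refined by the scheduler against actual material-stream and energy-stream data before release to the workshops.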
7.3.
Approach to Information Integration for CIMS in Process Industry
The core of CIMS in process industry is the integration and utilization of information. Information integration can be described as follows: the production process of an enterprise is a process of obtaining, processing, and handling information. CIMS should ensure that accurate information is sent punctually, in the right form, to the right people to enable them to make correct decisions.
7.3.1.
Production Process Information Integration
Production is the main factor to be considered in CIMS design for a process industry. Driven by the hierarchical structure model discussed in Section 7.2.2, the information models of every subsystem at all levels are built. In these models, the modeling of production process information integration is the crux of the matter. This model embodies the design guidance, centering on production, in three aspects:
1. Decision → comprehensive planning → planning decomposition → scheduling → process optimization → advanced control
2. Purchase → material management → maintenance
3. Decision → comprehensive planning → planning decomposition → scheduling → product storage and shipment → control
The computation of material equilibrium and heat equilibrium and the analysis/evaluation of equipment, material, and energy can be done using this model so as to realize optimized manipulation of the overall technological process.
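A material-equilibrium computation of the kind this model supports is, at its simplest, a mass-balance check. Stream names, rates, and tolerance below are invented for illustration:

```python
# Minimal steady-state material-equilibrium (mass balance) check:
# total mass in must equal total mass out plus losses, within tolerance.
# All stream names and flow rates are invented for illustration.

def material_balanced(inflows, outflows, losses=0.0, tol=1e-6):
    """True if total inflow matches total outflow plus losses within tol."""
    return abs(sum(inflows.values()) - sum(outflows.values()) - losses) <= tol

unit_in = {"crude": 100.0}                                   # t/h
unit_out = {"naphtha": 20.0, "diesel": 35.0, "residue": 44.5}  # t/h
ok = material_balanced(unit_in, unit_out, losses=0.5)
```

A heat-equilibrium check has the same shape with enthalpy flows in place of mass flows; violations flag measurement errors or unaccounted streams for evaluation.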
7.3.2.
Model-Driven Approach to Information Integration
Figure 33 depicts the mapping relationships of the models in the process of building CIMS in process industry. It demonstrates that the designed function related to every phase of building CIMS can be depicted from the design view using the structured, model-driven approach (Aguiar and Weston 1995). The realization of model mapping relies on building software tools that support every phase in a hierarchical way. Model mapping refers to the evolutionary relationships of the models between the phases of building CIMS. As the enterprise hierarchy is developed downwards, the description in the models becomes more detailed. Conversely, as the modeling scope widens, the granularity of the descriptions in the models is reduced so as to form more abstract models. For example, various dynamic mathematical models should be used at the detailed design and implementation level, while detailed IDEF0 and static mathematical models should be used in the conceptual design phase. We can conclude that the models of the various phases in the building of CIMS can be evolved step by step in the model-driven approach from the top down. As the previous analysis shows, the realization of the model-driven information integration method requires a workbench. This consists of a series of tools, such as modeling tools going from entity to model, simulation tools supporting simulations at various levels from higher-level strategic planning to lower-level detailed design, and assessment tools appraising the performance of solution simulations at various levels.
7.4.1.
Refinery Planning Process
The refinery enterprise consists of many production activities (Kemper 1997). If the blend operation day is called the original day, then the production activities 90 days before that day include crude oil evaluation, making of the production strategy, and crude oil purchasing. In the same way, the production activities 10-30 days after the original day include stock transportation and performance adjustment of oil products. Every activity in the production process is related to every other activity. For example, in crude oil evaluation, the factors in the activities following the making of the production strategy must be analyzed. In another example, people engaged in crude oil evaluation need to analyze in detail those production activities following the refinery balance. Deep analysis of those activities in the refinery enterprise is the basis of the design of CIMS in that enterprise. Figure 34 depicts the refinery planning process.
7.4.2.
Integrated Information Architecture
By analyzing the refinery planning process, we can construct the integration frame depicted in Figure 35.
Figure 34 Refinery Planning Process.
Using the model-driven approach to the modeling of all subsystems, the information integration model for this refinery enterprise can be built as shown in Figure 36 (Mo and Xiao 1999). The model includes the business decision-making level, the planning and scheduling level, and the process supervisory control level. Their integration is supported by two database systems. The relevant information, such as market, costing, financial affairs, and production situation, is synthesized to facilitate the business decisions of the enterprise, and both crude oil supply and oil product sales planning are determined at the business decision-making level. The planning and scheduling level synthesizes management information, decomposes production planning into short-term planning, executes the daily scheduling, and gives instructions directly to the process supervisory control level. In the meantime, it accomplishes the management and control of oil product storage and transport, including the management and optimized scheduling control of the harbor area and oil tank area. The process supervisory control level accomplishes process optimization, advanced control, fault diagnosis, and optimized blending of oil products.
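Optimized product blending at the supervisory control level can be illustrated with the simplest possible case: two stocks blended to an octane specification under a linear-blending assumption. Real refinery blending is nonlinear and multi-constraint; all numbers below are invented:

```python
# Toy two-stock gasoline blend: find the volume fraction of the high-octane
# stock needed to hit a spec, assuming octane blends linearly. Illustrative
# only; real blending models are nonlinear and handle many properties.

def blend_fraction(spec, low_octane, high_octane):
    """Volume fraction of the high-octane stock needed to meet the spec."""
    if not low_octane <= spec <= high_octane:
        raise ValueError("spec not reachable from these two stocks")
    return (spec - low_octane) / (high_octane - low_octane)

# 88-octane and 96-octane stocks blended to a 92-octane specification.
x = blend_fraction(spec=92.0, low_octane=88.0, high_octane=96.0)
```

In practice the supervisory layer would solve this as a constrained optimization over many stocks and properties, but the netting of specification against component qualities is the same idea.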
7.4.3.
Advanced Computing Environment
The information integration model of the giant refinery depicted in Figure 36 is built using the model-driven method. The model is the design guidance for the realization of CIMS in the enterprise. Figure 37 depicts the advanced computing environment.

Figure 35 Integration Frame.
8.
BENEFITS OF CIMS
Many benefits can be obtained from the successful implementation and operation of a CIM system in a manufacturing company. The benefits can be classified into three kinds: technical, management, and human resource quality.
8.1.
Technical Benefits
The technical benefits obtained from implementing a CIM system are:
1. Reducing inventory and work-in-progress: This can be accomplished through the utilization of an MRPII or ERP system. Careful and reliable material purchasing planning and production planning can to a great extent eliminate high inventory and work-in-progress levels, hence reducing capital overstock and even waste through long-term material storage.
Figure 37 Advanced Computing Environment.
2. Improving production efficiency: Through the integration of the production system, planning system, and material supply system, the production processes can be operated in a well-organized way; hence production can be carried out with the shortest possible waiting times, and machine utilization is greatly increased. Through the integration of CAD, CAPP, and CAM systems, the setup time for NC machines can be reduced significantly. The improvement of production efficiency will bring economic returns on the investment in the CIM system.
3. Improving product quality: The integration of the company's business processes, design processes, and production processes will help improve product quality. TQM can be put into effect in the CIM integrated environment.
4. Reducing cost: This is the direct effect obtained from the above three benefits.
5. Improving product design ability: Through the integration of CAD, CAPP, and CAM systems and the use of the concurrent engineering method, the product design ability of the company can be significantly improved. New and improved products can be designed and developed in a shorter time, and the company can win the market competition with these products.
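The inventory reduction attributed to MRPII in benefit 1 rests on period-by-period netting of gross requirements against available stock. A minimal sketch with invented numbers:

```python
# MRP netting sketch: net requirements are gross requirements less on-hand
# inventory and scheduled receipts, with leftover stock carried forward.
# All quantities are invented for illustration.

def net_requirements(gross, on_hand, scheduled_receipts):
    """Period-by-period MRP netting; returns net requirement per period."""
    nets = []
    for g, r in zip(gross, scheduled_receipts):
        available = on_hand + r
        nets.append(max(0, g - available))
        on_hand = max(0, available - g)  # carry leftover stock forward
    return nets

nets = net_requirements(gross=[50, 60, 70],
                        on_hand=80,
                        scheduled_receipts=[0, 20, 0])
```

Because orders are released only for the net amounts, stock that would otherwise accumulate as safety margin is planned away, which is the mechanism behind the inventory and work-in-progress reduction.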
8.2.
Management Benefits
The following management benefits can be obtained from CIM system implementation:
1. Standardizing processes: The business processes, design processes, and production processes can be standardized. This helps to streamline the company's processes and reduce errors caused by uncontrolled and random operations.
2. Optimizing processes: The business processes, design processes, and production processes can be optimized. This helps to locate bottlenecks and cost-intensive activities in the processes and thus provides ways to reduce cost.
3. Improving market response ability: The traditional pyramid organization structure will be changed to a flat structure, which can greatly improve the speed of response to market changes and user requirements.
8.3.
Human Resource Quality
Almost all employees will be involved in the implementation of the CIM system. Through different courses of training, from CIM philosophy to computer operation, the total quality of the employees can be improved at all levels, from management staff to production operators. More importantly, employees will learn to design and produce products in a cooperative and integrated way. The company will fully satisfy user requirements and produce products quickly and cheaply. Materials and products will be delivered on time.
REFERENCES
Agrawal, S. C. (1985), Metamodeling: A Study of Approximations in Queueing Models, MIT Press, Cambridge, MA.
Aguiar, M. W. C., and Weston, R. H. (1995), "A Model-Driven Approach to Enterprise Integration," International Journal of Computer Integrated Manufacturing, Vol. 8, No. 3, pp. 210-224.
Ashayberi, J., and Selen, W. (1996), "Computer Integrated Manufacturing in the Chemical Industry," Production and Inventory Management Journal, Vol. 37, No. 1, pp. 52-57.
Ayres, R. U. (1991), Computer Integrated Manufacturing, Vol. 1, Chapman & Hall, New York.
Bray, O. H. (1988), Computer Integrated Manufacturing: The Data Management Strategy, Digital Press, Bedford, MA.
Buffa, E. S. (1984), Meeting the Competitive Challenge: Manufacturing Strategies for U.S. Companies, Dow Jones-Irwin, Homewood, IL.
Carrie, A. (1988), Simulation of Manufacturing, John Wiley & Sons, New York.
Carter, M. F. (1986), "Designing Flexibility into Automated Manufacturing Systems," in Proceedings of the Second ORSA/TIMS Conference on Flexible Manufacturing Systems: Operations Research Models and Applications, K. E. Stecke and R. Suri, Eds. (Ann Arbor, MI, August 12-15), Elsevier, Amsterdam, pp. 107-118.
CCE-CNMA Consortium (1995), CCE-CNMA: An Integration Platform for Distributed Manufacturing Applications, Springer, Berlin.
Chen, P. P. S. (1976), "The Entity-Relationship Model: Toward a Unified View of Data," ACM Transactions on Database Systems, Vol. 1, No. 1, pp. 9-36.
Chen, Y., Dong, Y., Zhang, W., and Xie, B. (1994), "Proposed CIM Reference Architecture," in Proceedings of the 2nd IFAC/IFIP/IFORS Workshop on Intelligent Manufacturing Systems (Vienna, June).
Colquhoun, G. J., and Baines, R. W. (1991), "A Generic IDEF0 Model of Process Planning," International Journal of Production Research, Vol. 29, No. 11, pp. 239-257.
Doumeingts, G., Vallespir, B., Zanettin, M., and Chen, D. (1992), GIM, GRAI Integrated Methodology: A Methodology for Designing CIM Systems, Version 1.0, Research Report, LAP/GRAI, University of Bordeaux I.
Esprit Consortium AMICE, Eds. (1993), CIMOSA: Open System Architecture for CIM, 2nd Ed., Springer, Berlin.
Fan, Y., and Wu, C. (1997), "MACIP: Solution for CIMS Implementation in Manufacturing Enterprises," in Proceedings of the IEEE International Conference on Factory Automation and Emerging Technology (Los Angeles, September), pp. 1-6.
Fujii, M., Iwasaki, R., and Yoshitake, A. (1992), "The Refinery Management System under the Concept of Computer Integrated Manufacturing," in National Petroleum Refiners Association Computer Conference (Washington, DC).
Georgakopoulos, D., Hornick, M., and Sheth, A. (1995), "An Overview of Workflow Management: From Process Modeling to Workflow Automation Infrastructure," Distributed and Parallel Databases, Vol. 3, No. 2, pp. 119-153.
Goldman, G. L., and Preiss, K., Eds. (1991), 21st Century Manufacturing Enterprise Strategy: An Industry-Led View, Harold S. Mohler Laboratory, Iacocca Institute, Lehigh University, Bethlehem, PA.
Goldman, G. L., Nagel, R. N., and Preiss, K. (1995), Agile Competitors and Virtual Organization: Strategies for Enriching the Customer, Van Nostrand Reinhold, New York.
Greenwood, E. (1989), Introduction to Computer-Integrated Manufacturing, Harcourt Brace Jovanovich, New York.
Gupta, Y. P., and Goyal, S. (1989), "Flexibility of Manufacturing Systems: Concepts and Measurements," European Journal of Operational Research, Vol. 43, pp. 119-135.
Hall, R. W. (1983), Zero Inventories, Dow Jones-Irwin, Homewood, IL.
Hammer, M., and Champy, J. (1993), Reengineering the Corporation: A Manifesto for Business Revolution, Nicholas Brealey, London.
Harrington, J. (1979), Computer Integrated Manufacturing, R. E. Krieger, Malabar, FL.
Legal Requirements
Traditional command-and-control requirements that primarily targeted the manufacturing phase of the product life cycle increased significantly in the past 20 years in the United States. For example, the number of environmental laws passed in the United States increased from 7 between 1895 and 1955 to 40 between 1955 and 1995 (Allenby 1999). Similarly, the number of environmental agreements in the European Union generally escalated from 1982 to 1995, as described in European Environmental Agency (1997). Many of the regulations in the Asia-Pacific region mirror those in the United States and Europe (Bateman 1999a,b).
TABLE 2 Example of Information Sources for Macrolevel Inventory Data for the United States

Medium     Report                                                                     Agency
Material   Toxic Release Inventory (TRI)                                              U.S. EPA
Pollutant  Aerometric Information Retrieval System (AIRS)                             U.S. EPA
Waste      Resource Conservation and Recovery Act Biennial Report System (RCRA BRS)   U.S. EPA
Energy     Manufacturing Energy Consumption Survey                                    U.S. Department of Energy
TECHNOLOGY
Manufacturers must also follow local legislation such as mandates for permits to install or operate processes with regulated effluents. In addition to local and federal mandates where manufacturing and sales take place, manufacturers must also keep abreast of global agreements. For example, the manufacture of chlorofluorocarbon (CFC) solvents, which were used for cleaning electronic assemblies, was banned in 1995 (Andersen 1990). Environmental law is discussed in Chapter 19. Additional legal and service trends are discussed in the next section.
2.3.
Responsibility Trends
This section outlines three emerging trends that directly affect the manufacturer's responsibility for environmental impacts: extended product responsibility, extended services, and environmental information reporting. The first trend calls for producers to prevent pollution associated with their products over the products' life cycles. For the second trend, rather than solely selling products, some manufacturers are expanding their business to offer service packages that include the use of their products. In the third trend, the availability of, and mandatory reporting requirements for, environmental information for customers are increasing. These trends are discussed in the next three subsections.
2.3.1.
Extended Product Responsibility
There is a trend in Europe and East Asia toward product life cycle responsibility legislation that requires manufacturers to minimize environmental impacts from materials extraction to manufacturing to distribution / packaging to repair to recycling to disposal. Essentially, extended product responsibility shifts the pollution prevention focus from production facilities to the entire product life cycle (Davis et al. 1997). For example, proposed legislation may require that manufacturers not only recycle in-plant wastes but also recycle their discarded products (Denmark Ministry of the Environment 1992; Davis 1997). The evaluation of life cycle stages and impacts is discussed further in Section 4.3.
2.3.2.
Manufacturers as Service Providers
In recent years, as manufacturers have assumed the additional role of service provider, responsibility for environmental impact has shifted from the user to the manufacturer. For example, a chemical supplier may be reimbursed per total auto bodies cleaned rather than for the procurement of chemicals for auto body cleaning. Under such an arrangement, there is a financial incentive for the supplier to reduce material consumption (Johnson et al. 1997). In another example, an electronic component manufacturer may use a chemical rental program: the supplier provides chemical management from purchasing and inventory management to waste treatment and disposal (Johnson et al. 1997). Thus, chemical suppliers are gaining broader responsibility for their products throughout their products' life cycles. Another important service trend is the replacement of products with services. For example, telecommunications providers offer voice mail rather than selling answering machines. Another example is electronic order processing rather than paper processing. These service trends result in dematerialization, the minimization of materials consumed to accomplish goals (Herman et al. 1989).
2.3.3.
Environmental Information Provided by Manufacturers
The third trend is the increasing amount of environmental information that manufacturers communicate to customers. Three general approaches for communicating environmental attributes to corporate procurement and consumers have emerged: eco-labels, self-declaration, and life cycle assessment. Eco-labels are the simplest format for consumers but the most inflexible format for manufacturers in that they require that 100% of their standard criteria be met. Examples of eco-labels include the Energy Star label in the United States and the Blue Angel in Germany. Because over 20 different eco-labels with different criteria are in use around the world, manufacturers may need to consider multiple eco-label criteria sets (Modl 1995). Another type of label, self-declaration, allows producers to select methods and metrics. However, comparisons among competing products or services are difficult. Self-declaration is the most flexible form for manufacturers, but its use depends on the manufacturer's environmental reputation among customers. The ECMA, a European industry association that proposes standards for information and communication systems, has proposed product-related environmental attribute standards (Granda et al. 1998). Full life-cycle assessment, a comprehensive method to analyze the environmental attributes of the entire life cycle of a product, requires environmental engineering expertise. Life cycle assessment is described in Section 4.3. Consumers may learn about environmental impacts from eco-labels, self-declaration, and life cycle assessment studies. Industrial engineers may learn about clean manufacturing as universities integrate industrial ecology concepts into business and engineering programs (Santi 1997; Stuart 2000). Important clean manufacturing concepts are defined in the next section.
CLEAN MANUFACTURING
3.
HIERARCHY OF IMPORTANT CONCEPTS
To provide a framework for clean manufacturing methods, this section defines important basic concepts. Interestingly, many of the concepts, such as recycling, are defined differently by government, societal, industrial, and academic entities (Allen and Rosselot 1997). Other terms are used interchangeably; for example, pollution prevention is often defined as source reduction. In Table 3, sustainable development and industrial ecology top the hierarchy in clean manufacturing. Industrial ecology is an emerging field of study that attempts to lessen the environmental impacts of manufacturing activities through planning and design. Industrial ecology is a systems approach to optimizing the materials and energy cycles of products and processes (Graedel and Allenby 1995). Methods for clean manufacturing and industrial ecology are described in the next section.
4.
METHODS
Traditional methods for clean manufacturing focus on waste or energy audits, whi
ch are summarized in Section 4.1. New methods focus on life cycle design, life c
ycle assessment, production planning models with environmental considerations, a
nd environmental management systems, which are described in Sections 4.2, 4.3, 4
.4, and 4.5, respectively.
4.1.
Waste / Energy Audits for Waste / Energy Minimization
Waste and energy audits require a detailed inventory analysis of waste generation and energy consumption. The point of origin of each waste and the breakdown of the equipment energy consumption patterns must be determined. Audits are used to identify significant sources of waste and energy costs. Because some environmental impacts are interconnected, both individual source and system impacts must be considered. General guidelines are given for waste audits and energy audits in the next two subsections.
4.1.1.
Waste Audits
Waste audits may be performed at the waste, product, or facility level. Waste-level audits simply require that each waste stream and its source be identified. Although this approach is the simplest, it ignores the implications and interactions of the waste stream as a whole. Waste audits performed at the product level are product life cycle inventory assessments, which are discussed in Section 4.3. Facility waste audits are the most common type of audit because most environmental laws require discharge reporting by facility. Estimating plant-wide emissions is discussed in Chapter 19. Facility waste audits require process flow charts, product material information (commonly from the bill of materials), process material information (such as cutting fluids in a machine shop), and environmental information (solid, liquid, and gaseous wastes).

TABLE 3 Hierarchy of Terms in Clean Manufacturing (Term / Definition)

Waste auditing guides are available from state-funded programs (e.g., Pacific Northwest Pollution Prevention Resource Center 1999). Allen and Rosselot (1997) suggest that waste audits answer the following series of questions: What waste streams are generated by the facility? In what quantity? At what frequency? By which operations? Under what legal restrictions or reporting requirements? By which inputs? At what efficiency? In what mixtures? Waste audits require identification of solid wastes, wastewater, and direct and secondary emissions. In the United States, solid wastes may be classified as nonhazardous or hazardous according to the Resource Conservation and Recovery Act (RCRA). In general, wastewater is the most significant component of the total waste load (Allen and Rosselot 1997). Several methods for estimating the rates of direct (fugitive) and secondary emissions are outlined, with references for further information, in Allen and Rosselot (1997). Once companies have identified their major wastes and reduced them, they can turn their focus toward prevention. Pollution-prevention checklists and worksheets are provided in U.S. EPA (1992) and Cattanach et al. (1995). Process- and operations-based strategies for identifying and preventing waste are outlined in Chadha (1994). Case studies sponsored by the U.S. Department of Energy NICE3 program detail success stories for cleaner manufacturing or increased energy efficiency for several major industries (see Office of Industrial Technologies, NICE3, www.oit.doe.gov/nice3/).
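The audit questions above map naturally onto a simple inventory record. As a minimal sketch (all stream names, quantities, and regulatory flags below are hypothetical illustrations, not taken from this chapter), a facility's waste streams can be tallied by source operation and screened for streams that carry reporting requirements:

```python
# Minimal waste-audit inventory sketch; the stream data and the
# regulatory flags are hypothetical illustrations.
from collections import defaultdict

# Each record: (waste stream, source operation, kg per month, RCRA-regulated?)
streams = [
    ("spent cutting fluid", "machining", 120.0, True),
    ("metal chips", "machining", 800.0, False),
    ("rinse wastewater", "cleaning", 5000.0, False),
    ("solvent still bottoms", "cleaning", 40.0, True),
]

# Which operations generate the waste, and in what quantity?
by_operation = defaultdict(float)
for name, operation, kg, regulated in streams:
    by_operation[operation] += kg

ranked = sorted(by_operation.items(), key=lambda kv: kv[1], reverse=True)

# Which streams carry legal reporting requirements?
regulated_streams = [name for name, _, _, reg in streams if reg]

print(ranked)             # operations ordered by total waste mass
print(regulated_streams)  # streams needing RCRA reporting
```

Extending the record with frequency, inputs, and mixture fields would cover the remaining audit questions.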
4.1.2.
Energy Audits
Energy audits may be performed at either the facility or equipment level. Plant-wide energy audits are most common because utility bills summarize energy usage for the facility. Facility energy audits focus on characteristics of use such as occupancy profiles, fuel sources, building size and insulation, window and door alignment, ventilation, lighting, and maintenance programs. (Facility audit forms and checklists are available on the Web from the Washington State University Cooperative Extension Energy Program, www.energy.wsu.edu/ten/energyaudit.htm.) Some industries have developed specialized audit manuals. For example, an energy audit manual for the die-casting industry, developed with funds from the state of Illinois and the Department of Energy, describes how to assess energy use for an entire die casting facility (Griffith 1997). In addition to industry-specific energy consumption information, the U.S. Department of Energy Industrial Assessment Centers provide eligible small- and medium-sized manufacturers with free energy audits to help them identify opportunities to save energy and reduce waste (Office of Industrial Technologies 1999). Energy management is described in Chapter 58. At the equipment level, energy usage may be determined through engineering calculations or monitors placed on the equipment in question. Identifying equipment with significant energy consumption may lead to actions such as adding insulation or performing maintenance. Waste and energy audits are performed to identify existing problems. In the next four subsections, new approaches are presented that focus on prevention through life cycle design, life cycle assessment, production planning models with environmental considerations, and environmental management systems.
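At the equipment level, the engineering calculation mentioned above often reduces to rated power multiplied by operating time, followed by a ranking to find the significant consumers. A minimal sketch, assuming hypothetical equipment ratings and annual operating hours:

```python
# Equipment-level energy estimate: rated power (kW) x operating hours,
# then rank to find the significant consumers. All figures hypothetical.
equipment = {
    "compressor": (30.0, 6000),   # (rated kW, hours per year)
    "furnace": (120.0, 2000),
    "lighting": (15.0, 4000),
    "conveyor": (5.0, 6000),
}

kwh = {name: kw * hours for name, (kw, hours) in equipment.items()}
ranked = sorted(kwh.items(), key=lambda kv: kv[1], reverse=True)
total = sum(kwh.values())

# The top contributors are the candidates for insulation, maintenance,
# or monitoring investments.
for name, energy in ranked:
    print(f"{name}: {energy:.0f} kWh/yr ({100 * energy / total:.0f}%)")
```

In practice, metered data would replace the nameplate estimates where monitors are installed.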
4.2.
Life-Cycle Design*
The design and implementation of manufacturing activities and products have environmental impacts over time. Thus, industrial ecology requires consideration of materials and energy consumption as well as effluents from resource extraction, manufacturing, use, repair, recycling, and disposal. Environmental considerations for product design and process design are summarized in the next two subsections.
4.2.1.
Product Design
Product design guidelines for clean manufacturing are scattered throughout the industrial, mechanical, environmental, and chemical engineering, industrial design, and industrial ecology literature under labels such as life-cycle design, design for environment (DFE), environmentally conscious design, and green design. Traditional product design and materials selection criteria included geometric, mechanical, physical, economic, service environment, and manufacturing considerations. Industrial ecology introduces criteria such as reducing toxicity, avoiding resource depletion, increasing recyclability, and improving product upgradeability. The product design criteria in Table 4 are categorized by component and assembly level. As design functions for complex products are increasingly distributed, it is important to recognize product-level considerations so that local, component design efforts are not cancelled out. For example, if a simple repair module is inaccessible, the design efforts for easy maintenance will be lost.
* This section has been adapted and reprinted with permission from Stuart and Sommerville (1997).
TABLE 4 Product Design Criteria by Life-Cycle Stage (component- and assembly-level criteria)

Distribution: Minimize product size and weight to minimize packaging and energy consumption. Minimize special storage and transport requirements that lead to extra packaging (e.g., reduce fragility, sharp edges, and unusual shapes). Design to avoid secondary, tertiary, and additional packaging levels. Design for bulk packaging.

Use: Design components and products with multiple functions. Consider renewable or rechargeable energy sources. Minimize energy consumption during start-up, use, and standby modes. Minimize hazardous material content and use of hazardous joining materials. Minimize material content of dwindling world supply or requiring damaging extraction. Minimize toxicity, quantity, and number of different wastes and emissions.

Refurbishment / Repair / Upgrade: Use standard components. Consider replaceable components. Use repairable components. Maximize ease of disassembly (access and separation). Minimize orientation of components. Design upgradeable modules. Maximize durability / rigidity. Consider easily replaceable logos for the second market. Maximize reliability of components and assemblies.

Reclamation / Materials recycling: Maximize use of renewable and / or recyclable materials. Avoid encapsulates and fillers. Minimize the number of different materials and different colors; minimize incompatible material combinations. Use easily identifiable, separable materials; make high-value parts and materials easily accessible with standard tools. Minimize the number of different components and fasteners.
Because design decisions may result in environmental burden transfers from one stage in the life cycle to another or from one medium to another, it is important to recognize life cycle environmental impacts. Therefore, Table 4 is also categorized by five life-cycle stages: process, distribution, use, refurbishment, and recycling. Note that the process design stage for clean manufacturing is detailed separately in Section 4.2.2 and in Table 5. One of the emerging themes in Table 4 is dematerialization. Dematerialization focuses on using fewer materials to accomplish a particular task (Herman et al. 1989). For example, consumers may subscribe to periodicals and journals on the Web rather than receive printed paper copies. Clearly, miniaturization, information technology, and the World Wide Web are increasing the potential for dematerialization. Evaluating the criteria in Table 4 to avoid resource depletion or to use renewable materials requires assumptions to be made regarding the uncertainties in technological improvement, material substitutability, and rates of extraction and recycling (Keoleian et al. 1997). The design criterion to increase product life reduces the number of products discarded over time. In the mid-1970s, DEC VT100 terminals could be disassembled quickly without tools to access the processor for maintenance and upgrades (Sze 2000). In 1999, the Macintosh G4 was introduced with a latch on the side cover that provides quick access to upgrade memory and other accessories (Apple 1999). Product life extension is especially important for products with short lives and toxic materials. For example, battery manufacturers extended the life of nickel-cadmium batteries (Davis et al. 1997). An example of product toxicity reduction was the change in material composition of batteries to reduce mercury content while maintaining performance (Tillman 1991). In another example, popular athletic shoes for children were redesigned to eliminate mercury switches when the shoes were banned from landfills (Consumer Reports 1994). The criteria related to recyclability may apply to product material content as well as the processing materials described in the next section.

TABLE 5 Process Design and Material Selection Guidelines

Minimize use of materials with extensive environmental impacts.
Minimize toxic emissions.
Minimize material and water consumption.
Minimize energy consumption.
Consider materials that allow in-process, on-site, and off-site recycling.
Perform preventive maintenance to reduce environmental impacts over time.
Minimize secondary processes such as coatings.
Eliminate redundant processes.
Minimize cleaning process requirements.
Capture wastes for recycling, treatment, or proper disposal.

Adapted and reprinted with permission from Stuart and Sommerville (1997).
4.2.2.
Process Design
The criteria for process design for clean manufacturing focus on minimizing pollution, energy consumption, water consumption, secondary processes, or redundant processes. Table 5 provides a summary of suggested guidelines for materials selection and process design. Careful process design can reduce environmental impacts and processing costs. For example, many companies eliminated the cleaning step for printed circuit card assembly by changing to low-solids flux and controlled atmospheres. These companies eliminated the labor, equipment, materials, and waste costs as well as the processing time associated with the cleaning step (Gutierrez and Tulkoff 1994; Cala et al. 1996; Linton 1995). Another example of reducing processing material consumption is recycling coolants used in machine shops. Recycling coolant reduces coolant consumption and eliminates abrasive metal particles that can shorten tool life or scar product surfaces (Waurzyniak 1999).
4.3.
Product Life-Cycle Assessment*
Life-cycle assessment (LCA) is a three-step design evaluation methodology composed of inventory profile, environmental impact assessment, and improvement analysis (Keoleian and Menerey 1994). The purpose of the inventory step is to examine the resources consumed and wastes generated at all stages of the product life cycle, including raw materials acquisition, manufacturing, distribution, use, repair, reclamation, and waste disposal. Materials and energy balance equations are often used to quantify the inputs and outputs at each stage in the product life cycle. Vigon et al. (1993) defines multiple categories of data for inventory analysis, including individual process, facility-specific, industry-average, and generic data. The most desirable form of data is the first data category, data collected from the process used for a specific product. However, this data category may require extensive personnel, expertise, time, and costs. A three-step methodology for activity-based environmental inventory allocation is useful in calculating data for
the first step of life cycle assessment. First, the process flow and system boundary are determined. Then the activity levels and the activity percentages of the inputs and outputs are identified. Finally, the activity percentages are used to determine the actual quantities of the inputs and outputs and assign them to the product and process design combination responsible for their generation. A detailed example of this method is given in Stuart et al. (1998). As industrial engineers assist companies in calculating the allocation of their wastes to the product responsible, they can help managers make more informed decisions about product and process design costs and environmental impacts. Industry-average and generic data must be used with caution because processes may be run with different energy requirements and efficiencies or may exhibit nonlinear behavior (Barnthouse et al. 1998; Field and Ehrenfeld 1999). For example, different regions have different fuel-producing industries and efficiencies that will have a significant effect on the LCA if energy consumption is one of the largest impacts (Boustead 1995). Once the inputs and outputs are determined, the second and third steps of LCA, impact analysis and improvement analysis, can be pursued (Fava et al. 1991). For impact analysis, the analyst links the inventory of a substance released to an environmental load factor such as acid deposition, which is defined in Table 6 (Potting et al. 1998). Environmental load factors are a function of characteristics such as location, medium, time, rate of release, route of exposure, natural environmental process mechanisms, persistence, mobility, accumulation, toxicity, and threshold of effect. Owens argues that because inventory factors do not have the spatial, temporal, or threshold characteristics that are inherent to the environmental load, other risk-assessment tools should be used to evaluate a local process (Owens 1997). LCA software tools and matrices may be used to estimate environmental load factors for impact analysis (Graedel 1998; Graedel et al. 1995). Alting and Legarth (1995) review 18 LCA tools for database availability, impact assessment methodology, and complex product capability. The results of life-cycle impact assessment (LCIA) provide relative indicators of environmental impact. Eight categories for LCIA are defined in Table 6. Details regarding how to use life cycle impact assessment categories are provided in Barnthouse et al. (1998) and Graedel and Allenby (1995). Life cycle assessment is a comprehensive, quantitative approach to evaluate a single product. An extensive survey of the use of mathematical programming to address environmental impacts for air, water, and land is given in Greenberg (1995). A review of applied operations research papers on supply chain analysis and policy analysis with respect to environmental management is in Bloemhof-Ruwaard et al. (1995) (Stuart et al. 1999). Models for production planning with environmental considerations are summarized in the next section.

TABLE 6 Environmental Categories for Life-Cycle Impact Assessment

Greenhouse effect (global warming): Lower atmospheric warming from trapped solar radiation due to an abundance of CO2, CH4 (methane), N2O, H2O, CFCl3, CF2Cl2, and O3 (ozone) (Graedel and Crutzen 1990).
Ozone depletion: Losses in stratospheric ozone due to CFCl3 and CF2Cl2 (Graedel and Crutzen 1990).
Photochemical smog: Reactions in the lower troposphere caused by emissions such as NOx gases and hydrocarbons from automotive exhaust (Theodore and Theodore 1996).
Consumption of abiotic resources: Consumption of nonliving (nonrenewable) substances.
Acid deposition: Acidic precipitation that deposits HNO3 and H2SO4 into soil, water, and vegetation (Graedel and Crutzen 1990).
Eutrophication: Large deposits of phosphorus and nitrogen to a body of water that leads to excessive aquatic plant growth, which reduces the water's oxygen levels and capacity to support life (Theodore and Theodore 1996; Riviere 1990).
Toxicity: Substances that degrade the ecosystem either directly (acute) or over time (chronic) (Barnthouse et al. 1998; Graedel and Allenby 1995).
Loss of biodiversity: Encroachment by humans into biologically diverse areas, resulting in the displacement and extinction of wildlife; the living species constitute the complex food web (Graedel and Allenby 1995).

* The following section is adapted and reprinted with permission from Stuart et al. (1998). Copyright MIT Press Journals.
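The three-step activity-based allocation procedure described in this section can be illustrated numerically. In the sketch below, a shared process's metered waste total is split between two products by their activity percentages; the process, activity levels, and all figures are hypothetical:

```python
# Activity-based inventory allocation sketch (illustrating the three-step
# method described above); all totals and activity levels are hypothetical.

# Step 1: system boundary -- one shared cleaning process with a metered
# waste total for the period.
total_waste_kg = 500.0

# Step 2: activity levels (e.g., process hours consumed by each product)
# and the resulting activity percentages.
activity_hours = {"product_A": 300.0, "product_B": 100.0}
total_hours = sum(activity_hours.values())
activity_pct = {p: h / total_hours for p, h in activity_hours.items()}

# Step 3: allocate the metered total by activity percentage, assigning
# each share to the product responsible for its generation.
allocated = {p: pct * total_waste_kg for p, pct in activity_pct.items()}
print(allocated)  # {'product_A': 375.0, 'product_B': 125.0}
```

With these allocations in hand, waste costs can be traced to specific product and process design combinations rather than absorbed as facility overhead.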
4.4.
Production Planning with Environmental Considerations
Introduction of new product designs and process innovations requires a company to evaluate complex cost and environmental tradeoffs. In the past, these evaluations have not included environmental costs (Stuart et al. 1999). In this section, production planning models are described for the entire product life cycle as well as for different stages of the product life cycle.
4.4.1.
Models for Production Planning over the Product Life Cycle
Stuart et al. (1999) developed the first mixed integer linear programming model to select product and process alternatives while considering tradeoffs of yield, reliability, and business-focused environmental impacts. Explicit constraints for environmental impacts such as material consumption, energy consumption, and process waste generation are modeled for specified assembly and disassembly periods. The constraint sets demonstrate a new way to define the relationship between assembly activities and disassembly configurations through take-back rates. Use of the model as an industry decision tool is demonstrated with an electronics assembly case study in Stuart et al. (1997). Manufacturers may run what-if scenarios for proposed legislation to test the effects on design selection and the bottom-line cost impacts. The effects over time of pollution prevention or product life extension are analyzed from a manufacturer's, and potentially a lessor's, perspective. Several new models explore the relationship between product and component reuse and new procurement. These models include deterministic approaches using mixed integer linear programming (Eskigun and Uzsoy 1998) and stochastic approaches using queueing theory (Heyman 1977) and periodic review inventory models (Inderfurth 1997; van der Laan and Salomon 1997). Locations of remanufacturing facilities are analyzed in Jayaraman (1996), Bloemhof-Ruwaard et al. (1994, 1996), and Fleischmann et al. (1997). Scheduling policies for remanufacturing are presented in Guide et al. (1997).
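The kind of tradeoff such models capture can be illustrated on a toy instance. The sketch below is not the mixed integer linear programming formulation of Stuart et al.; it simply enumerates product / process alternatives and selects the cheapest combination that satisfies per-unit environmental caps (all alternatives and figures are hypothetical):

```python
# Illustrative alternative selection (not Stuart et al.'s MILP): pick one
# process alternative per product to minimize cost subject to caps on
# energy use and waste generation. All figures are hypothetical.
from itertools import product as cartesian

# (cost $, energy kWh, waste kg) per alternative, per product
alternatives = {
    "board_assembly": [(10.0, 5.0, 0.5), (8.0, 9.0, 0.8)],
    "enclosure": [(4.0, 2.0, 0.2), (3.0, 2.5, 0.9)],
}
ENERGY_CAP, WASTE_CAP = 10.0, 1.0   # per-unit environmental constraints

best = None
names = list(alternatives)
for choice in cartesian(*(range(len(alternatives[n])) for n in names)):
    cost = energy = waste = 0.0
    for name, idx in zip(names, choice):
        c, e, w = alternatives[name][idx]
        cost += c
        energy += e
        waste += w
    if energy <= ENERGY_CAP and waste <= WASTE_CAP and (best is None or cost < best[0]):
        best = (cost, dict(zip(names, choice)))

print(best)  # cheapest feasible combination of alternatives
```

Here the cheapest alternatives individually violate the caps, so the feasible optimum pays more to stay within the environmental constraints; a real model would use a MILP solver rather than enumeration.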
4.4.2.
Production Planning Models for the Manufacturing and Assembly Stage
Early models focused on reducing environmental impacts concurrent with process planning for continuous processes in the petroleum and steel industries (Russell 1973; Russell and Vaughan 1974). Recently, models with environmental considerations have focused on process planning for discrete product manufacturing (Bennett and Yano 1996, 1998; Sheng and Worhach 1998).
4.4.3.
Disassembly Planning Models
Based on graph theory, Meacham et al. (1999) present a fast algorithm, MAXREV, to determine the degree of disassembly for a single product. They are the first to model selection of disassembly strategies for multiple products subject to shared resource constraints. They use their MAXREV algorithm to generate maximum-revenue disassembly configurations for their column generation procedure for multiple products. Other disassembly models based on graph theory approaches focus on determining economic manual disassembly sequences for a single product (Ron and Penev 1995; Penev and Ron 1996; Zhang and Kuo 1996; Johnson and Wang 1995; Lambert 1997). A process planning approach to minimize worker exposure hazards during disassembly is given in Turnquist et al. (1996). Disassembly may be economically advantageous for module and component reuse. However, for material recovery, escalating labor costs favor bulk recycling.
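The degree-of-disassembly decision can be sketched for a single, fixed disassembly sequence: continue while cumulative recovered revenue net of cumulative labor cost is still improving, and stop at the peak. This toy example is not the MAXREV algorithm, and all steps and figures are hypothetical; it also shows how escalating labor costs cut disassembly short in favor of bulk recycling:

```python
# Degree-of-disassembly sketch (illustrative only, not MAXREV): for a
# fixed disassembly sequence, stop at the depth where cumulative revenue
# minus cumulative labor cost peaks. All figures are hypothetical.
steps = [
    ("remove cover", 1.0, 0.5),     # (step, labor cost $, revenue $ recovered)
    ("extract board", 2.0, 6.0),
    ("separate metals", 3.0, 2.0),
    ("strip fasteners", 4.0, 0.5),
]

best_depth, best_net, net = 0, 0.0, 0.0
for depth, (name, labor, revenue) in enumerate(steps, start=1):
    net += revenue - labor
    if net > best_net:
        best_depth, best_net = depth, net

# Later steps cost more labor than they recover, so disassembly stops
# early and the remainder goes to bulk recycling.
print(best_depth, best_net)
```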
4.4.4.
Production Planning Models for Bulk Recycling
Production planning for bulk recycling is in the early stages of development. Models include a macrolevel transportation model for paper recycling (Glassey and Gupta 1974; Chvatal 1980) and a goal-programming model for recycling a single product (Hoshino et al. 1995). Sodhi and Knight (1998) develop a dynamic programming model for float-sink operations to separate materials by density. Spengler et al. (1997) present a mixed integer linear programming model to determine the manual disassembly level and recycling quantity. Stuart and Lu (2000) develop a multicommodity flow model to select the output purity by evaluating various processing and reprocessing options for bulk recycling of end-of-life products. Isaacs and Gupta (1998) use goal programming to maximize disassembler and shredder profits subject to inventory balance constraints for the automobile recycling problem. Krikke et al. (1998) propose dynamic programming for disassembly planning and an algorithm to maximize revenue from material recycling. Realff et al. (1999) use mixed integer linear programming to select sites and determine the amount of postconsumer material collected, processed, stored, shipped, and sold at various sites.
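A float-sink operation of the kind modeled by Sodhi and Knight separates shredded particles by immersing them in a fluid of chosen density: lighter materials float off while denser ones sink. A minimal sketch, with hypothetical materials and approximate density values (g/cm^3):

```python
# Float-sink separation sketch: a fluid of chosen density splits shredded
# material into a floating and a sinking stream. Materials and densities
# (g/cm^3) are illustrative approximations.
particles = {
    "polypropylene": 0.91,
    "ABS": 1.05,
    "glass": 2.5,
    "aluminum": 2.7,
}

def float_sink(materials, fluid_density):
    """Split a material-to-density map at the given fluid density."""
    floats = {m for m, d in materials.items() if d < fluid_density}
    sinks = {m for m, d in materials.items() if d >= fluid_density}
    return floats, sinks

# Stage 1: water (density 1.0) floats the polyolefins off heavier plastics.
light, heavy = float_sink(particles, 1.0)
print(light)   # {'polypropylene'}

# Stage 2: a denser medium separates remaining plastics from glass and metal.
mid, dense = float_sink({m: particles[m] for m in heavy}, 2.0)
print(mid, dense)
```

Choosing the sequence of fluid densities to maximize the value of the recovered fractions is exactly the decision Sodhi and Knight's dynamic program addresses.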
4.5.
Environmental Management Systems
An environmental management system (EMS) is a management structure that addresses the long-term environmental impact of a company's products, services, and processes. An EMS framework should include the following four characteristics:

1. Environmental information collection and storage system
2. Management and employee commitment to environmental performance
3. Accounting and decision processes that recognize environmental costs and impacts
4. Commitment to continuous improvement of environmental performance

Federal U.S. EMS guidelines and information are documented in Department of Energy (1998). International standards for EMS will be discussed in Section 4.5.3. Environmental management systems include environmental policies, goals, and standards, which are discussed in the next three subsections.
4.5.1.
Corporate Environmental Policies
Corporate environmental policies require the commitment and resources of senior management. These policies often focus on actions that prevent, eliminate, reduce, reuse, and recycle, in that order of priority. They should be incorporated into all employees' practices and performance evaluations. Communication of environmental policies and information is integral to their success. Setting viable goals from corporate environmental policies is the subject of the next section.
4.5.2.
Environmental Goals and Metrics
Traditional environmental metrics often focus on compliance with legislation. Goals may concentrate on state-dependent benchmark metrics, such as reducing emissions, the volume or mass of solid waste, or gallons of waste water to a specified level. On the other hand, goals may focus on non-state-dependent improvement metrics, such as reducing the percentage of environmental treatment and disposal costs. It is also important to distinguish between local and aggregate data when developing goals. Metrics may focus on local product or process goals or on system-wide facility or company goals. An example of a local goal might be to lengthen tool life and reduce cutting fluid waste disposal costs. Sometimes local goals may translate to system goals. One machinist's use of a new oil-free, protein-based cutting fluid that eliminates misting and dermatitis problems but provides the necessary lubricity and cooling may be a candidate for a system-wide process and procurement change (Koelsch 1997). With local goals, it is important to investigate their potential positive and negative impacts if implemented throughout the system. An example of a system-wide goal might be to reduce the percentage of polymer sprues and runners discarded at a particular facility by implementing regrinding and remolding or by redesigning the mold. For system goals, it is important to identify the most significant contributors through Pareto analysis and target them for improvement.
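The Pareto step just described can be sketched in a few lines: rank waste sources by contribution and flag those that together account for some threshold (commonly 80%) of the total. The waste categories and quantities below are hypothetical:

```python
# Minimal Pareto-analysis sketch for system-wide environmental goals.
# Data are hypothetical waste quantities (kg/month) for one facility.
WASTE_KG = {
    "polymer sprues/runners": 5200,
    "cutting fluid": 2100,
    "waste water": 1300,
    "packaging": 900,
    "misc": 500,
}

def pareto_targets(contrib, threshold=0.80):
    """Return the largest contributors whose cumulative share first
    reaches the threshold; these become the improvement targets."""
    total = sum(contrib.values())
    ranked = sorted(contrib.items(), key=lambda kv: kv[1], reverse=True)
    targets, running = [], 0.0
    for name, amount in ranked:
        targets.append(name)
        running += amount
        if running / total >= threshold:
            break
    return targets

print(pareto_targets(WASTE_KG))
```

With these numbers, the top three categories cross the 80% line, so they would be targeted first for regrinding, fluid substitution, or similar measures.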
4.5.3.
ISO 14000 Series Standards
540
TECHNOLOGY
REFERENCES
Alexander, F. (1996), "ISO 14001: What Does It Mean for IEs?" IIE Solutions, January, pp. 14-18.
Allen, D. T., and Rosselot, K. S. (1997), Pollution Prevention for Chemical Processes, John Wiley & Sons, New York.
Allenby, B. R. (1999), Industrial Ecology: Policy Framework and Implementation, Prentice Hall, Upper Saddle River, NJ.
Alting, L., and Legarth, J. B. (1995), "Life Cycle Engineering and Design," CIRP General Assembly, Vol. 44, No. 2, pp. 569-580.
Andersen, S. O. (1990), "Progress by the Electronics Industry on Protection of Stratospheric Ozone," in Proceedings of the 40th IEEE Electronic Components and Technology Conference (Las Vegas, May 21-23), IEEE, New York, pp. 222-227.
Apple Computer, Inc. (1999), Power Mac G4 Data Sheet, December, www.apple.com/powermac/pdf/PowerMac G4 DS-e.pdf.
Barnthouse, L., Fava, J., Humphreys, K., Hunt, R., Laibson, L., Noesen, S., Norris, G., Owens, J., Todd, J., Vigon, B., Weitz, K., and Young, J. (1998), Life Cycle Impact Assessment: The State of the Art, Society of Environmental Toxicology and Chemistry, Pensacola, FL.
Bateman, B. O. (1999a), Sector-Based Public Policy in the Asia-Pacific Region: Planning, Regulating, and Innovating for Sustainability, United States-Asia Environmental Partnership, Washington, DC.
Bateman, B. O. (1999b), personal communication.
Bennett, D., and Yano, C. (1996), "Process Selection Problems in Environmentally Conscious Manufacturing," INFORMS Atlanta, Invited Paper.
Bennett, D., and Yano, C. A. (1998), "A Decomposition Approach for a Multi-Product Process Selection Problem Incorporating Environmental Factors," Working Paper, University of California-Berkeley.
Block, M. R. (1997), Implementing ISO 14001, ASQC Quality Press, Milwaukee.
Bloemhof-Ruwaard, J. M., Solomon, M., and Wassenhove, L. N. V. (1994), "On the Coordination of Product and By-Product Flows in Two-Level Distribution Networks: Model Formulations and Solution Procedures," European Journal of Operational Research, Vol. 79, pp. 325-339.
Bloemhof-Ruwaard, J. M., Solomon, M., and Wassenhove, L. N. V. (1996), "The Capacitated Distribution and Waste Disposal Problem," European Journal of Operational Research, Vol. 88, pp. 490-503.
Boustead, I. (1995), "Life-Cycle Assessment: An Overview," Energy World, No. 230, pp. 7-11.
Cala, F., Burress, R., Sellers, R. L., Iman, R. L., Koon, J. F., and Anderson, D. J. (1996), "The No-Clean Issue," Precision Cleaning, Vol. 4, No. 2, pp. 15-24.
Cattanach, R. E., Holdreith, J. M., Reinke, D. P., and Sibik, L. K. (1995), The Handbook of Environmentally Conscious Manufacturing, Irwin Professional Publishing, Chicago.
Chadha, N. (1994), "Develop Multimedia Pollution Prevention Strategies," Chemical Engineering Progress, Vol. 90, No. 11, pp. 32-39.
Chvatal, V. (1980), Linear Programming, W. H. Freeman & Co., New York.
Consumer Reports (1994), "Light-up Sneakers: A Dim-bulb Idea Takes a Hike."
Davis, G. A., Wilt, C. A., Dillon, P. S., and Fishbein, B. K. (1997), Extended Product Responsibility: A New Principle for Product-Oriented Pollution Prevention, EPA-530-R-97-009, Knoxville, TN.
Davis, J. B. (1997), Regulatory and Policy Trends, Cutter Information Corp., Arlington, MA, pp. 10-13.
Denmark Ministry of the Environment (1992), Cleaner Technology Action Plan.
Department of Energy (1998), Environmental Management Systems Primer for Federal Facilities, Office of Environmental Policy & Assistance, es.epa.gov/oeca/fedfac/emsprimer.pdf.
Eskigun, E., and Uzsoy, R. (1998), "Design and Control of Supply Chains with Product Recovery and Remanufacturing," under revision for European Journal of Operational Research.
European Environmental Agency (1997), "New Environmental Agreements in EU Member States by Year: Pre-1981 through 1996," in Environmental Agreements: Environmental Effectiveness, No. 3, Vol. 1, European Communities, Copenhagen.
Fava, J. A., Denison, R., Jones, B., Curran, M. S., Vigon, B. W., Selke, S., and Barnum, J. (1991), A Technical Framework for Life-Cycle Assessments, Society of Environmental Toxicology and Chemistry Foundation, Pensacola, FL.
Field, F. R., III, and Ehrenfeld, J. R. (1999), "Life-Cycle Analysis: The Role of Evaluation and Strategy," in Measures of Environmental Performance and Ecosystem Condition, P. Schulze, Ed., National Academy Press, Washington, DC.
Fleischmann, M., Bloemhof-Ruwaard, J. M., Dekker, R., van der Laan, E., van Nunen, J. A. E. E., and van Wassenhove, L. N. (1997), "Quantitative Models for Reverse Logistics: A Review," European Journal of Operational Research, Vol. 103, No. 1, pp. 1-17.
Glassey, C. R., and Gupta, V. K. (1974), "A Linear Programming Analysis of Paper Recycling," Management Science, Vol. 21, pp. 392-408.
Graedel, T. E. (1998), Streamlined Life-Cycle Assessment, Prentice Hall, Upper Saddle River, NJ.
Graedel, T. E., and Allenby, B. R. (1995), Industrial Ecology, Prentice Hall, Englewood Cliffs, NJ.
Graedel, T. E., and Crutzen, P. J. (1990), "The Changing Atmosphere," in Managing Planet Earth: Readings from Scientific American Magazine, W. H. Freeman & Co., New York.
Graedel, T. E., Allenby, B. R., and Comrie, P. R. (1995), "Matrix Approaches to Abridged Life Cycle Assessment," Environmental Science & Technology, Vol. 29, No. 3, pp. 134A-139A.
Granda, R. E., Hermann, F., Hoehn, R., and Scheidt, L.-G. (1998), "Product-Related Environmental Attributes: ECMA TR/70, An Update," in Proceedings of the International Symposium on Electronics and the Environment (Oak Brook, IL), pp. 1-3.
Greenberg, H. J. (1995), "Mathematical Programming Models for Environmental Quality Control," Operations Research, Vol. 43, No. 4, pp. 578-622.
Griffith, L. E. (1997), Conducting an Energy Audit for a Die Casting Facility, NADCA, Washington, DC.
Guide, V. D. R., Jr., Srivastava, R., and Spencer, M. S. (1997), "An Evaluation of Capacity Planning Techniques in a Remanufacturing Environment," International Journal of Production Research, Vol. 35, No. 1, pp. 67-82.
Gutierrez, S., and Tulkoff, C. (1994), "Benchmarking and QFD: Accelerating the Successful Implementation of No Clean Soldering," in Proceedings of the IEEE/CPMT International Electronics Manufacturing Technology Symposium, pp. 389-392.
Herman, R., Ardekani, S., and Ausubel, J. (1989), "Dematerialization," in Technology and Environment, J. H. Ausubel and H. E. Sladovich, Eds., National Academy Press, Washington, DC, pp. 50-69.
Heyman, D. P. (1977), "Optimal Disposal Policies for a Single-Item Inventory System with Returns," Naval Research Logistics Quarterly, Vol. 24, pp. 385-405.
Horkeby, I. (1997), "Environmental Prioritization," in The Industrial Green Game: Implications for Environmental Design and Management, D. J. Richards, Ed., National Academy Press, Washington, DC, pp. 124-131.
Hoshino, T., Yura, K., and Hitomi, K. (1995), "Optimisation Analysis for Recycle-Oriented Manufacturing Systems," International Journal of Production Research, Vol. 33, No. 8, pp. 2069-2078.
Inderfurth, K. (1997), "Simple Optimal Replenishment and Disposal Policies for a Product Recovery System with Leadtimes," OR Spektrum, Vol. 19, pp. 111-122.
Isaacs, J. A., and Gupta, S. (1998), "Economic Consequences of Increasing Polymer Content for the US Automobile Recycling Infrastructure," Journal of Industrial Ecology, Vol. 1, No. 4, pp. 19-33.
Jayaraman, V. (1996), "A Reverse Logistics and Supply Chain Management Model within a Remanufacturing Environment," INFORMS, Atlanta, Invited Paper.
Johnson, J. K., White, A., and Hearne, S. (1997), "From Solvents to Suppliers: Restructuring Chemical Supplier Relationships to Achieve Environmental Excellence," in Proceedings of the International Symposium on Electronics and the Environment (San Francisco), pp. 322-325.
Johnson, M., and Wang, M. (1995), "Product Disassembly Analysis: A Cost/Benefit Trade-Off Approach," International Journal of Environmentally Conscious Design and Manufacturing, Vol. 4, No. 2, pp. 19-28.
Keoleian, G. A., and Menerey, D. (1994), "Sustainable Development by Design: Review of Life Cycle Design and Related Approaches," Journal of the Air & Waste Management Association, Vol. 44, No. 5, pp. 645-668.
Keoleian, G., Kar, K., Manion, M. M., and Bulkley, J. W. (1997), Industrial Ecology of the Automobile: A Life Cycle Perspective, Society of Automotive Engineers, Warrendale, PA.
Koelsch, J. R. (1997), "Lubricity vs. the Environment: Cascades of Cleanliness," Manufacturing Engineering, May, Vol. 118, pp. 50-57.
Krikke, H. R., Harten, A. V., and Schuur, P. C. (1998), "On a Medium Term Product Recovery and Disposal Strategy for Durable Assembly Products," International Journal of Production Research, Vol. 36, No. 1, pp. 111-139.
Lambert, A. J. D. (1997), "Optimal Disassembly of Complex Products," International Journal of Production Research, Vol. 35, pp. 2509-2523.
Linton, J. (1995), "Last-Minute Cleaning Decisions," Circuits Assembly, November, pp. 30-35.
Meacham, A., Uzsoy, R., and Venkatadri, U. (1999), "Optimal Disassembly Configurations for Single and Multiple Products," Journal of Manufacturing Systems, Vol. 18, No. 5, pp. 311-322.
Modl, A. (1995), "Common Structures in International Environmental Labeling Programs," in Proceedings of the IEEE International Symposium on Electronics and the Environment (Orlando, FL), pp. 36-40.
Norberg-Bohm, V., Clark, W. C., Bakshi, B., Berkenkamp, J., Bishko, S. A., Koehler, M. D., Marrs, J. A., Nielson, C. P., and Sagar, A. (1992), "International Comparisons of Environmental Hazards: Development and Evaluation of a Method for Linking Environmental Data with the Strategic Debate Management Priorities for Risk Management," CSIA Discussion Paper 92-09, Kennedy School of Government, Harvard University, Cambridge, MA.
Office of Industrial Technologies (1999), Industrial Assessment Centers, U.S. Department of Energy, www.oit.doe.gov/iac/.
Owens, J. W. (1997), "Life Cycle Assessment: Constraints on Moving from Inventory to Impact Assessment," Journal of Industrial Ecology, Vol. 1, No. 1, pp. 37-49.
Pacific Northwest Pollution Prevention Resource Center (1999), Business Assistance: How to Inventory Your Wastes for Environmental Compliance, www.pprc.org/pprc/sbap/workbook/toc all.html.
Penev, K. D., and Ron, A. J. D. (1996), "Determination of a Disassembly Strategy," International Journal of Production Research, Vol. 34, No. 2, pp. 495-506.
Potting, J., Schlopp, W., Blok, K., and Hauschild, M. (1998), "Site-Dependent Life-Cycle Impact Assessment of Acidification," Journal of Industrial Ecology, Vol. 2, No. 2, pp. 63-87.
President's Council on Sustainable Development (1996), Sustainable America, President's Council on Sustainable Development, Washington, DC.
Realff, M. J., Ammons, J. C., and Newton, D. (1999), "Carpet Recycling: Determining the Reverse Production System Design," Journal of Polymer-Plastics Technology and Engineering, Vol. 38, No. 3, pp. 547-567.
Riviere, J. W. M. L. (1990), "Threats to the World's Water," in Managing Planet Earth: Readings from Scientific American Magazine, W. H. Freeman & Co., New York.
Ron, A. D., and Penev, K. (1995), "Disassembly and Recycling of Electronic Consumer Products: An Overview," Technovation, Vol. 15, No. 6, pp. 363-374.
Russell, C. S. (1973), Residuals Management in Industry: A Case Study in Petroleum Refining, Johns Hopkins University Press, Baltimore.
Russell, C. S., and Vaughan, W. J. (1974), "A Linear Programming Model of Residuals Management for Integrated Iron and Steel Production," Journal of Environmental Economics and Management, Vol. 1, pp. 17-42.
Ryding, S. O., Steen, B., Wenblad, A., and Karlsson, R. (1993), "The EPS System: A Life Cycle Assessment Concept for Cleaner Technology and Product Development Strategies, and Design for the Environment," in Proceedings of the Design for the Environment: EPA Workshop on Identifying a Framework for Human Health and Environmental Risk Ranking (Washington, DC), pp. 21-23.
Santi, J. (1997), Directory of Pollution Prevention in Higher Education: Faculty and Programs, University of Michigan, Ann Arbor.
Sheng, P., and Worhach, P. (1998), "A Process Chaining Approach toward Product Design for Environment," Journal of Industrial Ecology, Vol. 1, No. 4, pp. 35-55.
Socolow, R. (1999), personal communication.
Sodhi, M., and Knight, W. A. (1998), "Product Design for Disassembly and Bulk Recycling," Annals of the CIRP, Vol. 47, No. 1, pp. 115-118.
Spengler, T., Puchert, H., Penkuhn, T., and Rentz, O. (1997), "Environmental Integrated Production and Recycling Management," European Journal of Operational Research, Vol. 97, pp. 308-326.
Stuart, J. A. (2000), "Integration of Industrial Ecology Concepts into Industrial Engineering Curriculum," in Proceedings of the American Society of Engineering Education Annual Conference, St. Louis.
Stuart, J. A., and Lu, Q. (2000), "A Model for Discrete Processing Decisions for Bulk Recycling of Electronics Equipment," IEEE Transactions on Electronics Packaging Manufacturing, Vol. 23, No. 4, pp. 314-320.
Stuart, J. A., and Sommerville, R. M. (1997), "A Review of Life Cycle Design Challenges," International Journal of Environmentally Conscious Design and Manufacturing, Vol. 7, No. 1, pp. 43-57.
Stuart, J. A., Turbini, L. J., and Ammons, J. C. (1997), "Investigation of Electronic Assembly Design Alternatives through Production Modeling of Life Cycle Impacts, Costs, and Yield," IEEE Transactions on Components, Packaging, and Manufacturing Technology, Part C: Manufacturing, Vol. 20, No. 4, pp. 317-326.
Stuart, J. A., Turbini, L. J., and Ammons, J. C. (1998), "Activity-Based Environmental Inventory Allocation," Journal of Industrial Ecology, Vol. 2, No. 3, pp. 95-108.
Stuart, J. A., Ammons, J. C., and Turbini, L. J. (1999), "A Product and Process Selection Model with Multidisciplinary Environmental Considerations," Operations Research, Vol. 47, No. 2, pp. 221-234.
Sze, C. (2000), personal communication.
Theodore, M. K., and Theodore, L. (1996), Major Environmental Issues Facing the 21st Century, Prentice Hall PTR, Upper Saddle River, NJ.
Tillman, J. (1991), Achievements in Source Reduction and Recycling for Ten Industries in the United States, EPA/600/2-91/051, Cincinnati, OH.
Turnquist, M. A., List, G. F., Kjeldgaard, E. A., and Jones, D. (1996), "Planning Tools and Techniques for Product Evaluation and Disassembly," INFORMS, Atlanta, Invited Paper.
U.S. Environmental Protection Agency (1992), Facility Pollution Prevention Guide, EPA/600/R-92/088, Washington, DC.
van der Laan, E., and Salomon, M. (1997), "Production Planning and Inventory Control with Remanufacturing and Disposal," European Journal of Operational Research, Vol. 102, pp. 264-278.
Vigon, B. W., Tolle, D. A., Cornaby, B. W., Latham, H. C., Harrison, C. L., Boguski, T. L., Hunt, R. G., Sellers, J. D., and Curran, M. A. (1993), Life-Cycle Assessment: Inventory Guidelines and Principles, EPA/600/R-92/245, Cincinnati, OH.
Waurzyniak, P. (1999), "Recycle Your Coolant, Chips," Manufacturing Engineering, March, pp. 110-116.
Zhang, H., and Kuo, T. (1996), "Graph-Based Approach to Disassembly Model for End-of-Life Product Recycling," in Proceedings of the 1996 IEEE/CMPT International Electronic Manufacturing Technology Symposium (Austin, TX), pp. 247-254.
vement and autonomation. These prerequisites are discussed in the sections below. In addition, achievement of high quality levels is also essential to implementing JIT. For this purpose, JIT and TQM efforts should be closely linked with each other, as will be discussed later. Though some references for further information on JIT are provided in this chapter, no attempt is made to review the thousands of articles and books now available. For more exhaustive reviews of the early JIT literature, the reader is referred to Keller and Kazazi (1993), Golhar and Stamm (1991), and Sohal et al. (1989). Reviews of research focusing on the modeling/design of JIT and kanban systems are provided by Akturk and Erhun (1999) and Groenevelt (1993). General references on Toyota's JIT system include the first publication in 1977 by Sugimori et al., as well as Ohno (1988), Shingo (1989), Monden (1998), and Fujimoto (1999).
2.1.
Smoothing of Volume and Variety
Production leveling, in other words, smoothing of volume as well as variety to achieve uniform plant loading, is imperative for JIT implementation. What would happen if the production volume
of an item were to fluctuate every day and we blindly pursued a pull policy of withdrawing the needed quantity of parts at the needed time from the preceding process? In that situation, the preceding process must always maintain an inventory and workforce sufficient to meet the highest possible quantity demanded. Consequently, its production volume will exhibit fluctuations larger than those of the downstream process. Since a typical production system involves many sequential processes, there are many stages for these fluctuations to be transmitted from final assembly backward through to the suppliers. At each stage, the fluctuation is transmitted in amplified form to the previous stage, with the result that upstream processes and suppliers may incur vast inventory and waste. This is the so-called bullwhip effect known in the field of supply chain management. In order to prevent these undesirable effects, an effort must be made to smooth out the production quantities of each item in the final assembly schedule and then keep that schedule fixed for at least some period of time. Smoothing the production schedule minimizes variation in the quantities of parts needed and enables the preceding processes to produce each part efficiently at a constant speed per hour or per day. Figure 1 shows a simple illustration of production leveling on a daily basis. Based on demand forecasts and firm customer orders, a three-month production schedule is created and shared with relevant suppliers. Each month the production schedule is revised on a rolling basis to reflect the latest information. The first month of the schedule is then frozen and broken down into a daily production schedule as follows. Suppose that the monthly production schedule for final assembly indicates demand for 2400 units of product A, 1200 units of product B, and 1200 units of product C. The leveled daily production schedule is determined by dividing these monthly demand quantities by 20 working days per month, that is, 120 As, 60 Bs, and 60 Cs per day, along with the corresponding numbers of parts. Minor adjustment of these quantities may occur on, say, a weekly basis, but alterations must be small (within 10%, empirically) so as to avoid introducing excess fluctuations into the system. Though suppliers use this leveled daily production schedule as a guideline for planning, the actual day-to-day production quantities are determined by actual demand from downstream stages and coordinated through the kanban system or a similar approach. By initiating production only as demand occurs, kanban eliminates the possibility of waste due to excess production while absorbing minor, unavoidable fluctuations on the factory floor. Next, the daily production schedule is smoothed for product variety through the generation of a production sequence. Rather than all As being made in one batch, then all Bs in a following batch, and so on, a mix of items should flow through the system. Because the demand for As, Bs, and Cs has a ratio of 2:1:1 in this case, an appropriate mix or sequence for their production would be a cycle of A B A C repeated through the day. For determining sequences in more complex situations, refer to the algorithms in Monden (1998) and Vollmann et al. (1997).
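The leveling arithmetic and a sequencing pass can be sketched as follows. The sequencing function is a sketch of the goal-chasing idea associated with Monden, not his exact algorithm: at each slot it picks the product that keeps cumulative output closest to the ideal demand ratios.

```python
import math

MONTHLY = {"A": 2400, "B": 1200, "C": 1200}
WORKING_DAYS = 20

# Leveled daily quantities: monthly demand spread evenly over the month.
daily = {p: q // WORKING_DAYS for p, q in MONTHLY.items()}  # A:120 B:60 C:60

def goal_chasing(demand):
    """Sketch of goal chasing: at each slot k, choose the product whose
    selection minimizes the distance between cumulative output and the
    ideal point k * (demand ratio) for every product."""
    total = sum(demand.values())
    made = {p: 0 for p in demand}
    seq = []
    for k in range(1, total + 1):
        best, best_d = None, None
        for p in demand:
            if made[p] >= demand[p]:
                continue  # this product's quota for the cycle is done
            d = math.sqrt(sum(
                (k * demand[q] / total - (made[q] + (q == p))) ** 2
                for q in demand))
            if best_d is None or d < best_d:
                best, best_d = p, d
        seq.append(best)
        made[best] += 1
    return seq

seq = goal_chasing({"A": 2, "B": 1, "C": 1})  # one leveled 2:1:1 cycle
print(daily, seq)
```

Ties make several equally level sequences possible: the text's A B A C and this sketch's output both spread the two As evenly through the cycle.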
Figure 1 Smoothing Volume and Variety to Achieve One-Piece Flow Production. [Figure: a monthly production schedule (item A 2400, item B 1200, item C 1200), built from dealer orders with three months' advance notice to suppliers and minor adjustment on a weekly basis, is frozen into a leveled daily schedule (A:120, B:60, C:60) over one month (20 days); kanban links the suppliers, the mixed-model final assembly sequence (... A C A B A C A B ...), and the plant warehouse.]
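The amplification that leveling is designed to prevent can be made concrete with a toy ordering policy (a hypothetical illustration of the bullwhip effect discussed earlier, not a model of any actual supply chain):

```python
# Minimal bullwhip illustration: each upstream stage chases the demand it
# just saw and over-corrects for the change, which amplifies fluctuations
# stage by stage. Policy and numbers are hypothetical.
def amplify(demand, stages=3, k=0.5):
    """Return the order streams seen at successively upstream stages."""
    streams = [demand]
    for _ in range(stages):
        seen = streams[-1]
        orders, prev = [], seen[0]
        for d in seen:
            # naive policy: order last demand plus a correction for its change
            orders.append(max(0.0, d + k * (d - prev)))
            prev = d
        streams.append(orders)
    return streams

retail = [100, 110, 90, 105, 95, 100]
streams = amplify(retail)
spread = [max(s) - min(s) for s in streams]
print(spread)  # variability grows as we move upstream
```

A leveled, frozen schedule removes the demand swings that this policy over-reacts to, which is why smoothing must come before pull control.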
Figure 2 (a) Layout Types and Their Evolution in Automobile Final Assembly Lines: from a long single line (about 300 m) to a U-shaped line to multiple short lines (30 m/15 operators each) arranged by similar functions (trimming, interior trimming, electronic wiring, piping), staffed by multi-operation workers (multi-skill within a line) and multi-function workers (multi-skill over multiple lines). (b) Layout of Self-Contained Cell Line: assembly, subassembly, adjustment, inspection, and cover-attachment stations served by multi-function workers.
2.3.
Continuous Improvement and Autonomation
As previously mentioned, JIT is a production system that dynamically seeks ever-higher performance levels. Elimination of waste and defects is an important goal in this regard. Indeed, to achieve true just-in-time, all parts must be defect-free, as there is no excess inventory to draw upon for replacement. To achieve defect-free production, JIT production systems emphasize continuous improvement and utilize autonomation and other related techniques for assuring quality at the source. The original meaning of autonomation in JIT/TPS is to stop a machine automatically when abnormal conditions are detected so as not to produce defective products. This is made possible by installing automatic detection and stopping devices in machines or equipment, thus making them capable of operating autonomously. The purpose of autonomation is to keep the machine always working for added value only and to prevent it from producing defects or waste. As discussed below, the idea of autonomation is extended in two ways: poka-yoke (mistake-proofing) and visual control. Whereas autonomation is the application of detection and stopping devices to the operation of equipment, poka-yoke can be considered the application of error detection and stopping devices to the activities of workers and the worker-machine interface. It is assumed that human errors are inevitable, so rather than rely on the vigilance of workers, various poka-yoke devices, such as jigs, sensors, and cognitive aids, are built into the work process (cf. Shingo 1986; Nikkan 1989). A simple example of poka-yoke is the use of color coding. Certain colors can be designated for the position and setting of a machine, so that the presence of any other color alerts the worker to an error and the need to take action. By reducing errors and the resulting defects, poka-yoke assumes an important role in assuring quality. In terms of job safety as well, poka-yoke is an indispensable means of protecting workers and preventing accidents in work operations. Similarly, visual control systems (cf. Nikkan 1995) may also be viewed as an extension of the autonomation concept because they utilize various devices to share information and make abnor-
dy full container of the desired parts is located and its production kanban is r
emoved, with the withdrawal kanban now attached in its place. The full container
with withdrawal kanban attached is now ready to be transferred to the inbound s
tockpoint of the subsequent process, while the empty container is left at the pr
eceding process for later use. In addition, the production kanban that was just
removed is now placed in a collection box at the preceding process. These produc
tion kanbans are frequently collected and become work orders authorizing the pro
duction of one more full container of parts. When a new, full container is produ
ced, the production kanban is attached to it and the container is placed in the
outbound stockpoint to complete the cycle. It can be seen that circulation of th
e kanbans is triggered only by actual usage of parts at the subsequent process.
Only what is needed is withdrawn from the preceding process, and then only what
is needed for replacement is produced. This chain of usage and production ripple
s back through upstream stages to the suppliers, with the kanbans functioning as
a sort of decentralized, manual
Figure 3 Flow of Kanban Cards and Containers between Two Processing Areas (W = withdrawal-authorizing kanban, P = production-ordering kanban): 1) a container with withdrawal kanban attached sits at the inbound stockpoint of the subsequent process; 2) the withdrawal kanban is detached when even one part is used; 3) the withdrawal kanban is attached to a full container at the outbound stockpoint of the preceding process and the container is transferred to the subsequent process; 4) the production kanban is detached and placed in a collection box.
coordination mechanism. Given that major fluctuations in demand have been smoothed and frozen by the leveled production schedule, the kanbans can self-regulate to maintain the needed production quantities while absorbing the inevitable minor variations in everyday operations.
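The two-card cycle described above can be sketched as a small simulation. The container size, container count, and class names are hypothetical; this is an illustration of the card-swapping logic, not an implementation of any plant's system:

```python
from collections import deque

# Toy two-card kanban loop between a preceding process (P) and a
# subsequent process (S). Containers hold a fixed number of parts;
# card movements follow the cycle described in the text.
CONTAINER_SIZE = 5

class KanbanLoop:
    def __init__(self, n_containers):
        # outbound stock of P: full containers, each with a production kanban
        self.outbound_full = n_containers
        self.production_orders = deque()   # detached production kanbans
        self.inbound_parts = 0             # parts open at S's inbound stockpoint

    def consume(self, n_parts):
        """S uses parts; each emptied container sends a withdrawal kanban
        back to P, swapping cards and queueing a production order."""
        while n_parts > 0:
            if self.inbound_parts == 0:
                # withdrawal kanban travels to P and claims a full container
                assert self.outbound_full > 0, "stockout at preceding process"
                self.outbound_full -= 1
                self.production_orders.append("make one container")
                self.inbound_parts = CONTAINER_SIZE
            take = min(n_parts, self.inbound_parts)
            self.inbound_parts -= take
            n_parts -= take

    def produce(self):
        """P works off queued production kanbans, refilling outbound stock."""
        while self.production_orders:
            self.production_orders.popleft()
            self.outbound_full += 1

loop = KanbanLoop(n_containers=3)
loop.consume(7)   # empties one container and opens a second
loop.produce()    # P replaces exactly what was withdrawn
print(loop.outbound_full, loop.inbound_parts)
```

Note that production is triggered only by actual usage: after `produce()`, the outbound stock is back to its cap of three containers, and nothing beyond replacement has been made.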
3.2.
Control Parameters of Kanban System
In considering kanban as a decentralized control system, the following control p
arameters are necessary: number of kanbans in circulation; number of units in th
e kanban standard container; and kanban delivery cycle a-b-c (where b is number
of deliveries per a days and c indicates the delivery delay factor as an indicat
ion of replenishment lead time). For example, 1-4-2 means that every 1 day the c
ontainers are delivered 4 times and that a new production order would be deliver
ed by the 2nd subsequent delivery (in this case, about a half-day later, given f
our deliveries per day). Minimizing work-in-process inventory is a goal of JIT.
In keeping with this, the number of units per standard container should be kept
as small as possible, with one being the ideal. Furthermore, the number of deliv
eries per day (b / a) should be set as frequently as possible so as to synchroni
ze with the takt time of the subsequent process, and the delivery delay factor c
should be kept as short as possible. Ideally, the number of kanbans in circulat
ion between two adjacent workstations also should be minimized. However, in cons
ideration of practical constraints, a tentative number of kanbans may be calcula
ted as follows: Number of kanbans
number of units required per day a
(1
c
stock factor) number of units in standard container b
This tentative number of kanbans is reconsidered monthly because the daily level
ed production requirement may differ under the new monthly production schedule.
In addition, the number of kanbans is sometimes further reduced by systematically removing them from circulation in the system. The resultant reduction of work-in-process inventory will stress the production system and reveal the weakest point for further improvement. Thus, the kanban system is part of the approach use
d in JIT to move toward the goal of stockless production and achieve continuous
improvement of the production system.
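A quick numeric sketch of the kanban calculation follows. The grouping of terms is our reading of the formula above, and the demand, container size, and safety factor are hypothetical; only the 1-4-2 delivery cycle comes from the text's example:

```python
import math

def number_of_kanbans(daily_demand, container_size, a, b, c, safety=0.1):
    """Tentative kanban count: demand over the delivery interval a/b,
    inflated by the delay factor c and a safety-stock allowance,
    divided by the standard container size, rounded up to whole cards."""
    return math.ceil(daily_demand * (a / b) * (1 + c + safety) / container_size)

# Example: 120 units/day, containers of 5, delivery cycle 1-4-2
# (4 deliveries per 1 day, order filled at the 2nd subsequent delivery).
print(number_of_kanbans(120, 5, a=1, b=4, c=2, safety=0.1))
```

Shrinking the container size, raising the delivery frequency b/a, or cutting the delay factor c all reduce the card count, which is the mechanical expression of the stockless-production goal described above.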
3.3.
Kanban's Limitations and Alternatives
As previously noted, the kanban system is particularly appropriate for high-volu
me, repetitive manufacturing environments. However, in comparison to the situati
on when the kanban system was originally created, many industries now face treme
ndous increases in the variety of products and parts coupled with lower volumes
for each individual item. This is seen also in the automobile industry, where mo
re than 50% of parts now have their respective kanbans circulate less than once
per day (Kuroiwa 1999). Such a low frequency of circulation leads to undesirably
high levels of work-in-process inventory, and the kanban system ends up functio
ning the same way as the classic double-bin inventory system. Several alternativ
es are available when production is not repetitive and / or high volume in natur
e. For example, when demand is lumpy and cannot be smoothed into a level product
ion schedule, a
tion schedule cycle was reduced from one week to one day and then eventually to two hours, thereby necessitating many more setups and a reduction in setup times. The quilting machines, for example, now have 60 times as many setup changes. Kanban was introduced for the more popular models so as to produce and hold only the average number of units ordered per day. When a popular model is shipped from the plant to fill an order, its kanban card is returned to the production line as a signal to make a replacement. Lower-volume models are produced only after an order is received. When a large order is received from a hotel, its production is spread out among other orders to maintain a level production schedule. As a result of such changes and improvements, the plant is now able to produce a wider variety of end items in higher volumes, with shorter lead times and higher productivity. By 1997, for example, it produced 850 styles of mattresses with a daily production volume of 550 units and only 1.5 days of finished-goods inventory. At the same time, units produced per person had increased from 8 to 26, and overall productivity had increased by a factor of 2.08.
4.
COMPLEMENTARY PARADIGMS OF JUST-IN-TIME
4.1.
3Ts as Mutually Reinforcing Activities for Quality, Cost, and Delivery Performance
Like JIT, total quality management (TQM) and total productive maintenance (TPM)
are managerial models that were formulated and systematized in Japan. Each of th
ese managerial paradigms is made
up of concepts and tools of great significance for operations management. With TPS used as the name for just-in-time, together with TQM and TPM, a new acronym, 3Ts, can be created to denote these three paradigms and their vital contribution to a firm's competitiveness in terms of quality, cost, and delivery performance (QCD) (Enkawa 1998). From a historical perspective, it is somewhat difficult to determine when a framework for their joint implementation began to emerge. This certainly took place after the individual advents of JIT/TPS in the 1950s, TQC/TQM in the 1960s, and TPM in the 1970s. If we view these paradigms as a historical sequence, it can be asserted that pressing needs to meet new competitive requirements were the driving impetus behind each of their creations. Being the later-emerging paradigms, TQM and TPM can be considered systematized activities that support JIT, with areas of overlap as well as unique emphases. This interrelationship is depicted in Figure 4, wherein it may be seen that the three paradigms play mutually complementary roles in terms of Q, C, and D.
4.2. Total Quality Management (TQM)
The primary focus of TQM is on achieving customer satisfaction through the desig
n, manufacture, and delivery of high-quality products. This addresses JIT's requirement for strict quality assurance and elimination of defects. Though having roo
ts in American approaches, TQM broadened and matured as a management paradigm in
Japanese industry. By the 1980s, industries worldwide began emulating the Japan
ese model of quality management (e.g., Ishikawa 1985; Akiba et al. 1992) and hav
e subsequently adapted and reshaped it in new directions. With the concept of co
ntinuous improvement at its core, Japanese-style TQM seeks to boost performance
throughout the organization through participation of all employees in all depart
ments and levels. In addition to the tenet of total employee involvement, TQM in
corporates the following beliefs and practices: management based on facts and da
ta, policy deployment, use of cross-functional improvement teams, systematic app
lication of the Plan, Do, Check, Act (PDCA) cycle, and the ability of all employ
ees to use fundamental statistical techniques such as the seven basic quality im
provement tools. One mainstay of TQM is its involvement of front-line employees
in organized QC circle activities wherein the workers themselves identify and co
rrect problems. These small-group activities are instrumental in diffusing conti
nuous improvement capability throughout the company. Besides supporting successf
ul JIT implementation, these activities also improve employee morale and enhance
skills. The essence of TQM can also be understood from the many maxims that hav
e appeared to express new approaches and ways of thinking necessitated by compet
itive change:
Quality is built in at the process (instead of by inspection).
Focus on and correct the process rather than the result (emphasizing prevention).
Emphasize system design and upstream activities.
The next process is your customer (to emphasize customer orientation).
Quality first: the more quality is improved, the lower cost becomes.
Three-gen principle: observe the actual object (genbutsu) and actual situation (genjitsu) at the actual location (genba).
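As an instance of the fundamental statistical techniques mentioned above, one of the seven basic tools, the control chart, can be computed in a few lines. The sample data below are invented for illustration; A2 = 0.729 is the standard Shewhart factor for subgroups of four.

```python
# Minimal X-bar/R control chart sketch (illustrative data, not from the
# chapter): compute center line and control limits, then check samples.
samples = [
    [5.02, 4.98, 5.01, 5.00],
    [4.97, 5.03, 5.00, 4.99],
    [5.01, 5.02, 4.98, 5.00],
]
xbars = [sum(s) / len(s) for s in samples]     # subgroup means
ranges = [max(s) - min(s) for s in samples]    # subgroup ranges
xbar_bar = sum(xbars) / len(xbars)             # grand mean (center line)
r_bar = sum(ranges) / len(ranges)              # average range
A2 = 0.729                                     # Shewhart constant, n = 4
ucl = xbar_bar + A2 * r_bar                    # upper control limit
lcl = xbar_bar - A2 * r_bar                    # lower control limit
in_control = all(lcl <= x <= ucl for x in xbars)
```

A front-line QC circle would plot `xbars` against these limits and investigate any point falling outside them.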
Figure 4 Mutual Overlap and Complementarity of the 3Ts (JIT / TPS, TQM, TPM) with respect to quality, cost, and delivery performance (QCD). (Figure labels: TQM, quality improvement (Q); TPM, production system effectiveness (C); JIT / TPS, just-in-time / inventory reduction (D/C); shared core, continuous improvement.)
Since the 1980s, the diffusion of the three paradigms from Japanese manufacturer
s to overseas manufacturers has progressed at an accelerated rate. As evidenced
by the coinage of terms such as 3Ts and TPQM (TPM + TQM), many cases now exist of
the simultaneous implementation of part or all of the three paradigms, both insi
de and outside of Japan. This is underscored by the results of a mail survey of
members of American Manufacturing Excellence, an organization whose members are
known as leaders in manufacturing practice (White et al. 1999). On average, manu
facturers included in the sample had implemented 7 out of 10 listed practices re
lating to JIT. As a whole, these 10 practices encompass a breadth of activity virtually equivalent to the 3Ts. The percentages of manufacturers implementing each practice are as follows, broken down by size of company (small firms, sample size n = 174; large firms, n = 280): quality circles (56.3%; 70.4%); total quality control (82.2%; 91.4%); focused factory or reduction of complexities (63.2%; 76.8%); TPM
TABLE 1 A Comparison of the JIT / TPS, TQM, and TPM Paradigms
Comparison criteria: scope; origin; focus; direction of expanded focus; control attribute; fundamental attitude; concept / premise; means; approach to problem solving; means for problem solving; key methods; human resources development; organizational issues (vertical integration, horizontal integration, bottom-up involvement); implementation approach.
JIT / TPS: materials flow; supermarket-style inventory control; supply chain management (from source through user); inventory (muri, muda, mura); kaizen; load smoothing; deductive / simple is best; exposing system weaknesses through inventory reduction; mechanisms that activate improvement (autonomation, andon, kanban, etc.); multifunctional workers; reliance on formal organization / staff; company-specific.
TQM: product life cycle; statistical quality control (SQC); upstream activities (towards new product development); quality of product and process; kaizen; market-in orientation; inductive; application of PDCA cycle / data emphasis; managerial methods with emphasis on statistical thinking; QC education appropriate to level; policy deployment / quality audits by top management; cross-functional management; QC circles; company-specific.
TPM: equipment life cycle; preventive maintenance; preventive approach (towards maintenance prevention / design issues); losses related to equipment, workers, materials; kaizen; utmost equipment effectiveness / zero loss; deductive; pursuit of ideal / technological theory and principles; PM analysis, 5S activities, basic technologies, etc.; basic technologies-related skills; overlapping small-group activities; special committees / project teams; autonomous maintenance; 12-step implementation program.
hat it uses half of the various production resources (labor, manufacturing space
, tool investment, engineering hours, inventory, etc.) used in the Ford-style ma
ss production that was prevalent into the 1980s. The essence of lean production
may be summarized into four aspects: (1) lean plant; (2) lean supplier network;
(3) lean product development; and (4) relationship with distributors and custome
rs. Of these, the lean plant forms the original core and may be considered as eq
uivalent to the JIT / TPS concept presented earlier. The remaining three aspects
delineate how the lean plant interacts with and mutually influences other entities
. Thus, lean production may also be considered as an extended paradigm of JIT th
at includes new intraorganizational and interorganizational aspects. It is inter
esting to note that Toyota itself was influenced by the systematic, encompassing framework of lean production as presented by the MIT study. Though Toyota's practices indeed formed the basis for the lean production model, many of those practices
had evolved gradually in response to local organizational needs or involved his
torically separate organizational units. The lean production model consequently
provided a conceptual thread for understanding its many aspects as a synergistic
, integrated whole. Lean supplier network, referring to various innovative pract
ices in supplier relationships, can be considered as an interorganizational exte
nsion of JIT. It is well known that Toyota reorganized its
suppliers into functional tiers in the 1950s. Its aim was to provide suppliers w
ith incentive for improvement and promote learning mechanisms among suppliers an
d workers alike while maintaining long-term relationships. In this system, diffe
rent responsibilities are assigned to the suppliers in each tier. First-tier sup
pliers are responsible not only for the manufacture of components, but also for
their development while interacting and sharing information with Toyota and othe
r first-tier suppliers. In turn, each first-tier supplier forms its own second tier of
suppliers. They are assigned the job of fabricating individual parts and occasi
onally assist in their design, within the confines of rough specifications provided by the first-tier companies. In many cases, even a third tier of suppliers may exist to manufacture parts according to given specifications. Roughly only 30% of Toyota's
parts production is done in-house, with the remainder allotted to the supplier n
etwork. This high degree of outsourcing affords Toyota flexibility in coping with c
hange. At the same time, it underscores the criticality of a competitive supply
chain. Toyota's supplier network is neither a rigid vertical integration nor an arm's-length relationship. Rather, it attempts to maintain a delicate balance
between cooperative information sharing and the principle of competition. For th
is purpose, Toyota has employed several organizational strategies. One approach
for promoting information sharing is the formation of supplier associations and
the cross-holding of shares with supplier-group firms and between first-tier suppliers. Another is the two-way sharing of personnel with Toyota and between supplier-group firms. As an approach for encouraging competition, there is ample opportunity f
or a supplier to move up the ladder if it exhibits very high performance in qual
ity, cost, delivery, and engineering capability. A parts-manufacturing specialis
t, for example, can expand its role to include the design of parts and even inte
grated components. Further incentive is provided by a degree of competition betw
een suppliers over their share of business for a given part. Contrary to common
assumption, parts are not sole-sourced except for certain complex systems that r
equire considerable investments in tools. Rather, the business is typically divi
ded between a lead supplier and one or more secondary suppliers. Each supplier's relative share of business may grow or shrink depending on performance. By providing substantial incentives to suppliers, this flexible system has spurred a variety
of improvements relating to quality, cost, and JIT delivery of parts. Another a
spect of the lean production enterprise concerns the way that new products are d
esigned and developed. Lean product development differs from traditional approac
hes on four key dimensions: leadership, teamwork, communication, and simultaneou
s development (Womack et al. 1990). In the case of automobile development, hundr
eds of engineers and other staff are involved. Consequently, strong leadership i
s the first condition for doing a better job with less effort. Toyota adopted the s
husa system, also known as the heavyweight project manager system (Clark and Fuj
imoto 1991). The shusa (literally the chief engineer) is the leader of a project
team whose job is to plan, design, and engineer a new product as well as to ram
p up production and deliver it to market. Great responsibility and power are del
egated to the shusa. Most importantly, the shusa assembles a flat team with members
coming from the various functional departments throughout the company, from mar
keting to detail engineering. Though retaining ties to their original department
, the team members are now responsible on a day-to-day basis to the shusa. The d
evelopment process used to explore the optimum design solution has been called b
y names such as set-based engineering (Liker et al. 1995). It can be defined as an
approach for engineers to explicitly consider and communicate sets of design alt
ernatives at both conceptual and parametric levels within clearly defined constraints. They gradually narrow these design sets by eliminating inferior alternatives
until they eventually freeze the detailed product specifications based on feasibil
ity studies, reviews, and analysis. This broad, sweeping approach helps the desi
gn team avoid premature design specifications that might appear attractive based on
narrow considerations but that would suboptimize the overall design. Furthermor
on strengthening it, instead of applying effort uniformly over all the links. Th
is analogy implies that the focus for continuous improvement must be on the area
that will bring about the greatest benefit in relation to the effort expended. Furthermore, this area for improvement may lie outside the manufacturing plant because the constraint may not be a physical one but may involve managerial policies
or market factors. Consequently, in such a situation, plant-wide improvement efforts in "carpet bombing" fashion may not lead to an improved bottom line for the comp
any. The second point of difference concerns the focus of improvement efforts. W
hereas JIT, TQM, and TPM concentrate on changing the production system as the me
ans to improve it, TOC explicitly considers options for increasing profits while making the constraint work more effectively as it is. This concept is embodied in steps 2 and 3 of the five-step focusing process. In particular, step 3 corresponds
to the famous scheduling solution called drum-buffer-rope (DBR). As illustrated
in Figure 5(a), DBR is easily understood using Goldratt's analogy of a Boy Scout t
roop. Each Scout stands for one workstation or process, while the distance the t
roop moves during a specific period is throughput earned, and the length of the Scout troop column corresponds to work-in-process. Under the condition that we cannot eliminate statistical fluctuations and disruptions in each Scout's hiking performan
ce, the objective is to have the Scout troop advance as much distance as possibl
e while keeping the troop's length short. The solution to this problem lies in hav
ing the slowest Scout (constraint) set the pace of the troop by beating a drum a
nd tying a rope with some slack (work-in-process or time buffer) between him and
the leading Scout. The rope prevents the troop length from increasing unnecessa
rily. At the same time, the slack in the rope allows the slowest Scout to work a
t his full capability without any interference from disruptions in the preceding Scouts' performance, thereby enabling the troop as a whole to advance the maximum
attainable distance. For reference, the corresponding analogy for a kanban syst
em is shown in Figure 5(b). In a kanban system, all adjacent pairs of Scouts are
tied together by a rope whose length represents the number of kanbans in circul
ation, and the pace of the drum beat must be constant, in keeping with JIT's prerequisite of leveled production. It should be noted that there is no individual Sc
out who corresponds to a constraint or bottleneck. Instead, once any Scout exper
iences difficulty walking with the given length of rope, that Scout is identified as a
constraint and countermeasures are taken straightaway to increase his capabilit
y. Next, the length of rope between each pair is further shortened and another c
onstraint is discovered to repeat the cycle of continuous improvement. In this s
ense, kanban should be considered as a means to expose the constraint, rather th
an a means to schedule it and fully exploit its existing capability. In sum, TOC
adds a beneficial and indispensable viewpoint to the 3Ts in that it helps to clarify the goal of improvement and to identify where improvement should be focused so as to achieve maximum financial benefits from a global optimum standpoint. In additio
n, TOC complements JIT, TQM, and TPM by emphasizing that improvement should not
always be the first option. Rather, quick returns are often possible from first exploiting the constraint as it is, making it work to its own utmost limit.
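The drum-buffer-rope mechanics described above can be sketched as a toy simulation. All numbers (constraint rate, buffer size, upstream capacity range) are invented for illustration; the point is only that the rope caps work-in-process while the constraint sets the pace.

```python
# Toy drum-buffer-rope sketch: the slowest station (the "drum") paces
# material release via a "rope"; the "buffer" is the WIP allowed ahead of
# the constraint, protecting it from upstream fluctuations.
import random

random.seed(1)
PERIODS = 200
constraint_rate = 3   # drum: units the constraint can finish per period
buffer_size = 6       # slack in the rope (time buffer held as WIP)

wip = buffer_size     # start with the buffer full
throughput = 0
for _ in range(PERIODS):
    # The constraint works at its full pace unless the buffer starves it.
    done = min(wip, constraint_rate)
    wip -= done
    throughput += done
    # Rope: release only enough upstream work to refill the buffer, even
    # though upstream capacity fluctuates (2..5 units per period).
    upstream_capacity = random.randint(2, 5)
    release = min(upstream_capacity, buffer_size - wip)
    wip += release
```

Even with upstream fluctuations, WIP never exceeds the rope length and throughput stays close to the constraint's own capacity, which is exactly the behavior the Scout analogy describes.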
Figure 5 Analogy of Drum-Buffer-Rope in (a) Theory of Constraints and (b) Kanban System. (Figure labels: in (a), a drum at the constraint process, a rope back to the first process, and a buffer of inventory (lead time) ahead of the constraint; in (b), ropes of kanbans link each adjacent pair of processes from first to final, with the drum pacing the whole line.)
REFERENCES
Akiba, M., Schvaneveldt, S. J., and Enkawa, T. (1992), "Service Quality: Methodologies and Japanese Perspectives," in Handbook of Industrial Engineering, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 2349–2371.
Akturk, M. S., and Erhun, F. (1999), "An Overview of Design and Operational Issues of Kanban Systems," International Journal of Production Research, Vol. 37, No. 17, pp. 3859–3881.
Billesbach, T., and Schniederjans, J. (1989), "Application of Just-in-Time Techniques in Administration," Production and Inventory Management Journal, Vol. 30, No. 3, pp. 40–45.
Bowen, D. E., and Youngdahl, W. E. (1998), "Lean Service: In Defence of a Production-Line Approach," International Journal of Service Industry Management, Vol. 9, No. 3, pp. 207–225.
Chase, R. B., and Stewart, D. M. (1994), "Make Your Service Fail-Safe," Sloan Management Review, Vol. 35, No. 3, pp. 35–44.
Clark, K. B., and Fujimoto, T. (1991), Product Development Performance: Strategy, Organization, and Management in the World Auto Industry, Harvard Business School Press, Boston.
Dettmer, H. W. (1997), Goldratt's Theory of Constraints: A Systems Approach to Continuous Improvement, ASQC Quality Press, Milwaukee.
Duclos, L. K., Siha, S. M., and Lummus, R. R. (1995), "JIT in Services: A Review of Current Practices and Future Directions for Research," International Journal of Service Industry Management, Vol. 6, No. 5, pp. 36–52.
Enkawa, T. (1998), "Production Efficiency Paradigms: Interrelationship among 3T: TPM, TQC / TQM, and TPS (JIT)," in 1998 World-Class Manufacturing and JIPM-TPM Conference (Singapore), pp. 1–9.
Fujimoto, T. (1999), The Evolution of a Manufacturing System at Toyota, Oxford University Press, New York.
Goldratt, E. M. (1990), Theory of Constraints, North River Press, Croton-on-Hudson, NY.
Goldratt, E. M., and Cox, J. (1992), The Goal, 2nd Rev. Ed., North River Press, Croton-on-Hudson, NY.
Golhar, D. Y., and Stamm, C. L. (1991), "The Just-in-Time Philosophy: A Literature Review," International Journal of Production Research, Vol. 29, No. 4, pp. 657–676.
Green, G. C., and Larrow, R. (1994), "Improving Firm Productivity: Looking in the Wrong Places," CPA Chronicle, Summer.
Groenevelt, H. (1993), "The Just-in-Time System," in Logistics of Production and Inventory, S. C. Graves, A. H. G. Rinnooy Kan, and P. H. Zipkin, Eds., Handbooks in Operations Research and Management Science, Vol. 4, North-Holland, Amsterdam, pp. 629–670.
Imai, M. (1997), Gemba Kaizen, McGraw-Hill, New York.
Ishikawa, K. (1985), What Is Total Quality Control? The Japanese Way, Prentice Hall, Englewood Cliffs, NJ.
Karmarkar, U. (1989), "Getting Control of Just-in-Time," Harvard Business Review, Vol. 67, No. 5, pp. 122–131.
Keller, A. Z., and Kazazi, A. (1993), "Just-in-Time Manufacturing Systems: A Literature Review," Industrial Management and Data Systems, Vol. 93, No. 7, pp. 1–32.
Krafcik, J. F. (1988), "Triumph of the Lean Production System," Sloan Management Review, Vol. 30, No. 1, pp. 41–52.
Kuroiwa, S. (1999), "Just-in-Time and Kanban System," in Seisan Kanri no Jiten, T. Enkawa, M. Kuroda, and Y. Fukuda, Eds., Asakura Shoten, Tokyo, pp. 636–646 (in Japanese).
Liker, J. K., Ettlie, J. E., and Campbell, J. C., Eds. (1995), Engineered in Japan: Japanese Technology-Management Practices, Oxford University Press, New York.
Liker, J. K. (1998), Becoming Lean: Inside Stories of U.S. Manufacturers, Productivity Press, Portland, OR.
Liker, J. K., Fruin, W. M., and Adler, P. S., Eds. (1999), Remade in America: Transplanting and Transforming Japanese Management Systems, Oxford University Press, New York.
Mathis, R. H. (1998), "How to Prioritize TQC and TPM in the Quest for Overall Manufacturing Excellence," Master's Thesis, Darmstadt University of Technology / Tokyo Institute of Technology.
Mehra, S., and Inman, R. A. (1990), "JIT Implementation in a Service Industry: A Case Study," International Journal of Service Industry Management, Vol. 1, No. 3, pp. 53–61.
Miyake, D. I., and Enkawa, T. (1999), "Matching the Promotion of Total Quality Control and Total Productive Maintenance: An Emerging Pattern for the Nurturing of Well-Balanced Manufacturers," Total Quality Management, Vol. 10, No. 2, pp. 243–269.
Monden, Y. (1998), Toyota Production System: An Integrated Approach to Just-in-Time, 3rd Ed., Institute of Industrial Engineers, Atlanta.
Motwani, J., and Vogelsang, K. (1996), "The Theory of Constraints in Practice: At Quality Engineering, Inc.," Managing Service Quality, Vol. 6, No. 6, pp. 43–47.
Nakajima, S., Yamashina, H., Kumagai, C., and Toyota, T. (1992), "Maintenance Management and Control," in Handbook of Industrial Engineering, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 1927–1986.
Nikkan Kogyo Shimbun, Ed. (1989), Poka-Yoke: Improving Product Quality by Preventing Defects, Productivity Press, Portland, OR.
Nikkan Kogyo Shimbun, Ed. (1995), Visual Control Systems, Productivity Press, Portland, OR.
Noreen, E., Smith, D., and Mackey, J. (1995), The Theory of Constraints and Its Implications for Management Accounting, North River Press, Great Barrington, MA.
Ohno, T. (1988), Toyota Production System: Beyond Large-Scale Production, Productivity Press, Portland, OR.
Olson, C. (1998), "The Theory of Constraints: Application to a Service Firm," Production and Inventory Management Journal, Vol. 39, No. 2, pp. 55–59.
NEAR-NET-SHAPE PROCESSES
medium series or large-lot production, depends on the part's destination. In conclusion, the type of manufacturing mainly results from a wide range of possible preconditions. The nature of manufacturing itself is to transform, in steps, the different variants of the initial material that is available for manufacturing technology into the final state predefined by the component's target geometry. For this procedu
re, very different process stages, depending on the component itself and the man
ufacturing conditions, are required. These steps can be realized by various mech
anical, thermal, and chemical manufacturing techniques. The process chain repres
ents the technological sequence of the individual process steps, which guarantee
s adequate fabrication of the component (see Figure 1). As a rule, to manufactur
e a given component, different process chains are possible, posing the question
of optimizing manufacturing processes and subsequently the process chains that s
hould be applied. The first goal is to keep the process costs to a minimum at const
ant or even enhanced product quality. Another goal is to improve process product
ivity. Environmental protection, careful use of resources, and use of reproducti
ve materials are increasingly important considerations. Within this context, nea
r-net-shape (nns), which involves optimizing the manufacturing process and parti
cularly shortening the process chains, describes one trend in part manufacturing
development.
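The optimization question posed above can be made concrete with a toy selection: given alternative process chains, each with a cost and an achievable quality level, pick the cheapest chain that still meets the quality target. All chain names, costs, and quality figures below are invented for illustration.

```python
# Hypothetical process-chain alternatives for one component: each chain is
# a sequence of steps with an estimated unit cost and an achievable quality
# level (fraction of parts meeting spec). Figures are illustrative only.
chains = {
    "cast -> mill -> grind":      {"steps": 3, "cost": 18.0, "quality": 0.98},
    "forge -> mill":              {"steps": 2, "cost": 14.0, "quality": 0.97},
    "near-net cast -> grind":     {"steps": 2, "cost": 11.0, "quality": 0.96},
    "near-net forge (net shape)": {"steps": 1, "cost": 9.0,  "quality": 0.93},
}
QUALITY_TARGET = 0.95

# Keep only chains meeting the quality target, then choose the cheapest.
feasible = {k: v for k, v in chains.items() if v["quality"] >= QUALITY_TARGET}
best = min(feasible, key=lambda k: feasible[k]["cost"])
```

Here the shortest chain is infeasible on quality, so the shortened near-net chain wins: cost is minimized at constant or enhanced quality, exactly the trade-off stated in the text.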
1.1. Near-Net-Shape Processes: Definition and Limitations
For a better understanding of the near-net-shape principle, all explanations sho
uld be based on the different characteristics of the manufacturing techniques em
ployed for shaping within the fabrication of components. These are methods that,
on the one hand, create shapes by removing material from an initial state, incl
uding cutting methods such as turning, drilling, milling, and grinding as well a
s some special techniques. In contrast, shaping can be realized by distributing
or locating material in an intentional way. The primary shaping methods (e.g., c
asting, powder metallurgy), metal-forming technology, as well as similar special
techniques are based on this principle.
Figure 1 Process Chain Variants: Examples.
Starting with this distinction and bearing in mind the heterogeneous terminology
in technical literature, we can define near-net-shape production as follows:
Near-net-shape production is concerned with the manufacture of components, where
by shaping is realized mostly by nonchipping manufacturing techniques and finishing by cutting is reduced to a minimum. Near-net-shape manufacturing involves such objectives as reducing the number of process stages, minimizing costs, and guaran
teeing enhanced product quality.
Essentially nonchipping shaping is necessary if materials such as compounds and
ceramics are characterized by poor machinability. The near-net-shape principle c
arried to its limits finally results in net-shape production, which makes finishing by cutting totally unnecessary. In net-shape manufacturing, shaping is performed only with nonchipping techniques. The decision on the extent of residual cutting work (that is, the extent to which finishing by cutting can be reasonably diminished or even completely eliminated) depends on the associated manufacturing costs. Thus, in some cases it may be more economical to manufacture a shape element by cutting than by metal forming. Maintaining a certain minimum value of finishing by cutting may be necessary for technical reasons as well. For example, a disturbing surface layer (casting skin, or decarburized skin caused by hot working) that results from a nonchipping shaping procedure will have to be removed by cutting. Near-net-shape manufacturing can cover the whole component or only parts of it, such as functional surfaces of outstanding importance.
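The economic trade-off just described, whether to produce a shape element by forming or by cutting, can be illustrated with a toy unit-cost comparison. All cost figures are invented; the structure (high tooling cost amortized over the lot for forming, versus higher per-piece cost for cutting) is the point.

```python
# Hypothetical unit-cost comparison for one shape element. Forming carries
# a large tooling investment spread over the lot; cutting has low setup
# cost but a higher per-piece machining cost.
def unit_cost(tooling, setup, per_piece, lot_size):
    """Total cost per part for a given lot size."""
    return (tooling + setup) / lot_size + per_piece

forming = lambda n: unit_cost(tooling=40_000.0, setup=500.0, per_piece=0.80, lot_size=n)
cutting = lambda n: unit_cost(tooling=0.0, setup=200.0, per_piece=3.50, lot_size=n)

small_lot = 1_000
large_lot = 100_000
# For small lots, finishing the element by cutting is cheaper; for large
# lots, the nonchipping (forming) route wins.
```

This is why the text ties the extent of residual cutting work to the associated manufacturing costs rather than prescribing a fixed rule.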
1.2. Goals and Benefits
To keep the manufacturing industry competitive in today's global markets, innovati
ve products must constantly be developed. But high product quality also has to b
e maintained at reasonable costs. At the same time, common concerns such as prot
ecting the environment and making careful use of resources must be kept in mind.
Near-net-shape manufacturing has to cope with these considerations. The goals o
f near-net-shape production are to reduce the manufacturing costs and make the m
anufacturing procedure related to a given product more productive. These goals c
an be achieved by employing highly productive nonchipping shaping techniques and
minimizing the percentage of cutting manufacturing, thus generating more efficient
process chains consisting of fewer process steps. Value adding is thus shifted
to the shaping methods as well. For that reason, enhancement of shaping procedur
es involves great development potential. An increasing variety of advanced produ
cts with new functionality and design are being developed. As a result, novel components characterized by higher manufacturing demands are emerging. Lightweight manufacturing, reduction of moving mass in energy-consuming assemblies or products,
and the application of sophisticated multifunctional parts that save space and
weight are typical manifestations. Frequently, when nonchipping shaping methods are used, geometrically complex components can be realized much better than would be possible with conventional cutting techniques, even in conjunction with joining operations. As near-net-shape manufacturing is enhanced, the specific features of the different chipless shaping methods, such as low-waste and environmentally compatible production, increased product quality, and better in-use characteristics, can be exploited.
1.3. Preconditions for Near-Net-Shape Manufacturing
As a precondition for developing near-net-shape manufacturing for and applying i
Figure 2 Preconditions for Developing Near-Net-Shape Processes.
2.1. Component Shaping Techniques (Selection)
2.1.1. Casting and Powder Metallurgical Techniques (Primary Shaping)
Primary shaping is the manufacturing of a solid body from an amorphous material
by creating cohesion. Thus, primary shaping serves originally to give a componen
t an initial form starting out from a material in an amorphous condition. Amorph
ous materials include gases, liquids, powder, fibers, chips, granulates, solutions,
and melts. With respect to product shape and follow-up processing, primary shaping can be divided into three groups: 1. Products made by primary shaping that
will be further processed by forming, cutting, and joining. With respect to sha
pe and dimensions, the nal product is no longer similar to the product originally
made by primary shaping. In other words, shape and dimensions are also essentia
lly altered by means of techniques from other main groups of manufacturing proce
sses. The manufacturing of flat products from steel, which is widely realized by casting, is one near-net-shape application. Thereby, later shaping by rolling can be kept to a minimum. 2. Products made by primary shaping whose forms and dimensi
ons are essentially similar to those of the finished components (e.g., machine parts) or final products. The shape of these products corresponds to the product's purpose to the maximum extent. Obtaining the desired final form and final dimensions mostly requires only a few operations, as a rule chipping, to be carried out at functional surfaces. 3. Metal powders produced by primary shaping, whereby the p
owders are atomized out of the melt. From powder, sintered parts are produced by powder metallurgical manufacturing. The fabrication of moldings from metals in foundry technology (castings) and powder metallurgy (sintered parts) as well as of high-polymer materials in the plastics-processing industry yields significant economic advantages, such as:
The production of moldings is the shortest way from raw material to finished product. Forming, including all related activities, is thereby bypassed. A form almost equal to the component's final shape is achieved in only one direct operation, and masses ranging from less than 1 g to hundreds of tons can thus be handled.
Maximum freedom for realizing shapes that cannot be achieved by any other manufacturing technique is enabled by the manufacturing of moldings that are primarily shaped from the liquid state. Primary shaping can also be applied to materials that cannot be processed with other manufacturing techniques. An advantageous material and energy balance is ensured by the direct route from raw material to the molding or the final product. Components and final products of better functionality, such as moldings of reduced wall thickness, diminished allowances, fewer geometric deviations, and enhanced surface quality (near-net-shape manufacturing), can be produced to an increasing extent due to the constantly advancing primary shaping methods. Great savings in materials and energy can be realized with castings and sintered parts. Positive consequences for environmental protection and careful use of natural resources can be traced back to these effects.
2.1.1.1. Primary Shaping: Manufacturing Principle
In general, the technological process of primary shaping methods can be divided into the following steps:
Supply or production of the raw material as an amorphous substance
Preparation of a material state ready for primary shaping
Filling of a primary shaping tool with the material in a state ready for primary shaping
Solidification of the material in the primary shaping tool (usually the casting mold)
Removal of the product of primary shaping from the primary shaping tool
For a survey of primary shaping methods, see Figure 3.
2.1.2. Bulk Metal Forming Techniques
Aiming at the production of parts described by defined geometry and dimensions, all
bulk metal forming techniques are based on the plastic alteration of the form o
f a solid body (whose raw material is massive in contrast to sheet metal forming
) whereby material cohesion is maintained. The form is altered (forming procedur
e) due to forces and moments applied from the outside and generating a stress condition able to deform the material, that is, to transform it into a plastic state, permanently deformable inside the region to be formed (forming zone). The form
to be produced (target form) is
Figure 3 Survey of Primary Shaping Techniques.
mapped by the form of the tool's active surfaces and the kinematic behavior of each forming method. The requirements to be fulfilled by the material to be formed (summarized as the material property of formability) result from the precondition that internal material cohesion not be destroyed during manufacturing. Formability of a material is influenced by the conditions under which the metal forming procedure is performed. Forming temperature, strain rate, and the stress condition within the forming zone are essential factors influencing formability. A selection of essentia
l bulk metal forming techniques to be used for realizing near-net-shape technolo
gies in part manufacturing is given in Table 3. All bulk metal forming technique
s can be run at different temperature ranges dictated by the characteristics of
each material (which in turn acts on the forming conditions), design of the proc
ess chain, and the forming result. According to their temperature ranges, cold,
semihot, and hot forming processes can be distinguished. 2.1.2.1. Cold Forming I
n cold forming, the forming temperature is below the recrystallization temperatu
res of the materials to be formed. The workpiece is not preheated. In this tempe
rature range, workpieces with close dimensional tolerances and high surface qual
ities can be manufactured. The cold forming process results in some work-hardeni
ng of the metal and thereby in enhanced mechanical components properties. As a mi
nus, the forces and energy required for cold forming as well as the tool stresse
s are much higher than for hot forming, and the formability of the material in c
old forming is less. 2.1.2.2. Semihot Forming The warm forming temperature range
starts above room temperature and ends below the recrystallization temperature
from which the grain of metal materials such as steels begin to be restructured
and the material solidi cation due to deformation degraded. For most steel materia
ls, the semihot forming temperature ranges from 600900C. In comparison to hot form
ing, higher surface qualities and closer dimensional tolerances are obtained. Th
e forces and energy required for semihot forming as well as the tool stresses ar
e less than for cold forming. However, the additional thermal stress acting on t
he tool is a disadvantage. Consequently, in manufacturing, the requirements for
exact temperature control are high. 2.1.2.3. Hot Forming In hot forming processe
s, the workpiece is formed at temperatures above the temperature of recrystalliz
ation of the corresponding metal. For the majority of materials, formability is
higher due to higher forming temperatures. The forces and energy required for ho
t forming, as well as the tool stresses, are essentially lower than those for co
ld and semihot forming. Surfaces of poor quality due to scaling and decarburized
skin (for steel materials: reduced carbon content in marginal layers near the s
urface) and wider tolerances at the components functional surfaces, which in turn
result in increased allowances for the necessary follow-up chipping operation,
are disadvantageous. Further limitations are caused by the forming tools grown th
ermal stress and the high expenditure involved in heating and reworking the part
.
2.2.
Special Applications
The near-net-shape technologies of part manufacturing also involve applications whose type of shaping can be assigned to none of the conventional primary shaping or forming technologies. This concerns, for instance, the thixoforming method, which involves processing metals at temperatures between the molten and partially solidified state, so that the working conditions are between hot forming and casting. Depending on whether the component is shaped on a die-casting or forging press, the procedure is termed thixocasting or thixoforging. A specific globular structure of the initial material is required to transform metals into a thixotropic state. During heating to produce the thixotropic condition, the matrix phase, which becomes liquid at a lower temperature, is molten first and the metal becomes p
NEAR-NET-SHAPE PROCESSES
571
Development in shaping by casting is focused on two directions. First, the components come increasingly closer to the finished parts. Second, many single parts are aggregated into one casting (integral casting). Both directions of development are realized in all variants of casting technology. However, in precision casting, low-pressure casting, and pressure die casting, the material and energy savings to be achieved are especially pronounced. For evaluation, the manufacturing examples in Figures 4 and 5 were considered starting from melting up to a commensurable part state.
Figure 4 shows a technical drawing of a flat part that had previously been produced by cutting starting from a bar and that is now made as a casting (malleable cast iron) using the sand-molding principle. In cutting from the semifinished material, material is utilized at only 25.5%. As a result of shaping by casting, utilization of material was increased to 40%. The effects of shaping by casting become evident in the energy balance (primary energy). For cutting the flat part from the solid, 49.362 GJ/t of parts are required. For shaping by casting, 17.462 GJ/t of parts are required. Consequently, 64.6% of the energy can be saved. Compared to cutting of semifinished steel material, about a third as much primary energy is required for part manufacturing.
The doorway structure of the Airbus passenger door (PAX door: height about 2100 mm; width about 1200 mm) is illustrated in Figure 5. In conventional manufacturing of the doorway structure as practiced until now, apart from standard parts such as rivets, rings, and pegs, 64 milled parts were cut from semifinished aluminum materials with very low utilization of material. Afterwards, those parts were joined by about 500 rivets. As an alternative technological variant, it is proposed that the doorway structure be made of three cast segments (die casting, low pressure). Assuming almost the same mass, in production from semifinished materials the mass of chips amounted to about 63 kg, whereas in casting it can be reduced to about 0.7 kg. Thus, in casting, the chip mass amounts to only 1% of that of the present manufacturing strategy. In the method starting from the semifinished material, about 175 kg of material has to be molten; in shaping by casting, however, this value is about 78 kg, that is, 44.6%. As a result of the energy balance (primary energy), about 34,483 MJ are required for manufacturing the doorway structure from the semifinished material. However, in shaping by casting, 15,002
Designation of shape elements: 1 Cuboid (sawing, milling: roughing and finishing); 2 External radius (milling: roughing and finishing); 3 Pocket (milling: roughing and finishing); 4 Pocket (milling: finishing); 5 Hole (drilling into the solid, boring); 6 Square profile (drilling into the solid, broaching); 7 Pocket element (milling)
Figure 4 Example: Flat Part.
Figure 5 Example: Airbus Doorway.
MJ are needed, that is, about 46%. The result of having drastically diminished the cutting volume due to nns casting can be clearly proven in the energy balance: in the variant starting from the semifinished material, 173 MJ were consumed for cutting; in casting, less than 2 MJ. Today, Airbus door constructions that, in contrast to the studies mentioned above, are cast as one part only are being tried out. In this variant, 64 parts are aggregated into one casting (integral casting).
Figure 6 illustrates a welding-constructed tool holder consisting of seven steel parts (on the left), for which one consisting of two precision castings has been substituted (steel casting, GS-42CrMo4, on the right). Comprehensive cutting and joining operations can be saved through substitution by investment casting. In the variant where the nns design is based on precision casting, only a little cutting rework is required.
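The percentage figures quoted in the two casting examples follow directly from the raw values given above. A quick cross-check (a sketch in Python; all numbers are taken from the text):

```python
# Flat part: primary energy for cutting from the solid vs. shaping by casting
cutting_GJ_per_t = 49.362
casting_GJ_per_t = 17.462
energy_saving = 1 - casting_GJ_per_t / cutting_GJ_per_t   # fraction of energy saved

# Airbus doorway: melt mass and chip mass, semifinished route vs. casting
melt_ratio = 78 / 175      # mass to be molten, casting relative to semifinished
chip_ratio = 0.7 / 63      # chip mass, casting relative to semifinished

print(f"energy saved: {energy_saving:.1%}")   # 64.6%
print(f"melt mass:    {melt_ratio:.1%}")      # 44.6%
print(f"chip mass:    {chip_ratio:.1%}")      # 1.1%
```

The computed ratios reproduce the 64.6% energy saving, the 44.6% melt mass, and the roughly 1% chip mass stated in the text.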
3.2. Powder Metallurgy: Manufacturing Examples
3.2.1. Hot Isostatic Pressing (HIP)
Since the 1950s, hot isostatic pressing technology has been developing rapidly in the field of powder metallurgy. Assuming appropriate powders are used, precision parts of minimum allowances can be produced. Applications provide solutions especially in cases of maraging steels, wear-resistant steels with high carbon content, titanium and superalloys, and ceramic materials. As usual in this process, after production of the appropriate powder, the powder is enclosed in an envelope under an almost complete vacuum or an inert gas atmosphere. Within this envelope, the powder is compacted under high pressure and temperature (hot isostatic pressing or compaction; see Figure 7). Almost isotropic structural properties are achieved by identical pressure acting from all sides. Thereby, the pressing temperature is partially far below the usual sintering temperature, which results in a fine-grained structure apart from the almost 100% density. HIP technology can be applied to:
Postcompaction of (tungsten) carbide
Postcompaction of ceramics
Figure 6 Tool Holder for a CNC Machine Tool (with Hinge).
Figure 7 Hot Isostatic Pressing (HIP): Process Flow. (From firm prospectus, courtesy ABB, Västerås, Sweden, 1989)
Figure 8 Workpieces Made Using HIP Technology: metal powder parts, ceramic part, casting. (From firm prospectus, courtesy ABB, Västerås, Sweden, 1989)
Powder compaction
Postcuring of casting structures
Reuse and heat treatment of gas turbine blades
Diffusion welding
Manufacturing of billets from powders of difficult-to-form metals for further processing
Workpieces made using the HIP process are shown in Figure 8. These parts are characterized by high form complexity and dimensional accuracy. With regard to mechanical properties, parts made using HIP are often much better than conventionally produced components due to their isotropic structure.
3.2.2.
Powder Forging
The properties of powder metallurgical workpieces are greatly influenced by their percentage of voids and entrapments (see Figure 9). For enhanced material characteristics, a density of 100% must be striven for. The application of powder or sinter forging techniques can provide one solution for achieving this objective. The corresponding process flow variants are given in Figure 10. Regarding the process sequence and the equipment applied, the powder metallurgical background of the process basically conforms to shaping by sintering. According to their manufacturing principle, the powder-forging techniques have to be assigned to the group of precision-forging methods. Depending on part geometry, either forging starting from the sintering heat or forging with inductive reheating (rapid heating up) is applied. When forging the final shape, an almost 100% density is simultaneously generated in the highly stressed part sections. The part properties resulting from manufacturing by powder forging are equivalent to, or frequently even more advantageous than, those of usual castings. This becomes evident especially in the dynamic characteristics.
Figure 9 Workpiece Properties vs. the Percentage of Voids and Entrapments. (From Lorenz 1996)
Figure 10 Powder and Sinter Forging. (From Lorenz 1996)
Figure 11 shows the stages of powder-forging processes for forming rotationally symmetrical parts. The advantages of powder-forged parts can be summarized as follows:
Material savings (flashless process)
Low mass fluctuations between the individual parts
High accuracy (IT8 to IT11)
High surface quality
Low distortion during heat treatment due to the isotropic structure
High static and dynamic characteristic values
Some typical powder forging parts are shown in Figure 12.
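The IT grades quoted throughout this chapter correspond to concrete tolerance widths via the fundamental tolerance relation of ISO 286. The helper below is our own sketch (values shown before the standard's final rounding) to give an idea of the magnitudes involved:

```python
import math

def it_tolerance_um(d_low_mm: float, d_high_mm: float, grade: int) -> float:
    """Approximate ISO 286 fundamental tolerance in micrometres for a
    nominal-size range (mm) and IT grade 5..12, before standard rounding."""
    D = math.sqrt(d_low_mm * d_high_mm)          # geometric mean of the size range
    i = 0.45 * D ** (1 / 3) + 0.001 * D          # tolerance factor i, in micrometres
    factors = {5: 7, 6: 10, 7: 16, 8: 25, 9: 40, 10: 64, 11: 100, 12: 160}
    return factors[grade] * i

# For a nominal size in the 30-50 mm range:
print(round(it_tolerance_um(30, 50, 8)))    # 39 (µm, IT8)
print(round(it_tolerance_um(30, 50, 11)))   # 156 (µm, IT11; tables round to 160)
```

For a 30-50 mm nominal size, IT8 thus means a tolerance of about 0.04 mm, while IT11 already allows roughly 0.16 mm.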
3.3. Cold-formed Near-Net-Shape Parts: Examples of Applications
3.3.1. Extrusion
Cylindrical solid and hollow pieces including an axial-direction slot of defined width and depth at the face are examples of typical mechanical engineering and car components. Parts like these can be found in diesel engines and pumps (valve tappets), in braking devices (as pushing elements), and for magnetic cores.
Figure 11 Powder Forging: Process Sequence. (From firm prospectus, courtesy KREBSÖGE, Radevormwald, Germany, 1994)
The valve tappet for large diesel motors made of case-hardening steel (16MnCr5), illustrated in Figure 13, is usually produced by cutting from the solid material, whereby the material is utilized at a rate of 40%. A near-net-shape cold-extrusion process was developed to overcome the high part-manufacturing times and low utilization of material and to cope with large part quantities. Starting from soft-annealed, polished round steel, the valve tappet was cold extruded following the process chain given in Figure 14. The initial shapes were produced by cold shearing and setting, but also by sawing in the case of greater diameters. Surface treatment (phosphatizing and lubricating) of the initial forms is required for coping with the tribological conditions during forming. Near-net-shape extrusion can be carried out both in two forming steps (forward and backward extrusion) and in a one-step procedure by combined forward/backward extrusion. The number of forming steps depends on the workpiece geometry, particularly the dimensional ratios b/d and D/d. Modified knee presses up to a nominal force of 6,300 kN are employed to extrude the valve tappets. The output capacity is 25 to 40 parts per minute. As a result of the near-net-shape process, the material is utilized to a higher extent, about 80%. The slot width and hole diameter are delivered at an accuracy ready for installation, whereas the outer diameter is pressed at IT quality 9 to 10, ready for grinding. The number of process steps and the part-manufacturing time were reduced.
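The effect of the improved material utilization can be illustrated with a short calculation. The 1.0 kg finished mass below is a hypothetical figure for illustration only; the 40% and 80% utilization rates are from the text:

```python
def blank_mass_kg(finished_mass_kg: float, utilization: float) -> float:
    """Raw-material input implied by a given material-utilization ratio."""
    return finished_mass_kg / utilization

cutting_input = blank_mass_kg(1.0, 0.40)    # machining from the solid: 2.5 kg
extrusion_input = blank_mass_kg(1.0, 0.80)  # near-net-shape cold extrusion: 1.25 kg
print(1 - extrusion_input / cutting_input)  # 0.5
```

Doubling the utilization rate from 40% to 80% therefore halves the raw-material input per part, independent of the actual part mass.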
Figure 12 Workpieces Made Using Powder Forging. (From firm prospectus, courtesy KREBSÖGE, Radevormwald, Germany, 1994)
Figure 13 Valve Tappet for Large Diesel Motors: Ranges of Dimensions, Stamping Die, and Finished Part. (Courtesy ZI-Kaltumformung, Oberlungwitz, Germany)
Apart from these effects, material cold work-hardening (caused by the nature of this technology) and continuous fiber orientation resulted in an increase in internal part strength.
3.3.2.
Swaging
Swaging is a chipless incremental forming technique in which tool segments press rapidly and radially, from opposite directions, on the enclosed workpiece. A distinction is made between the feed method, for producing long reduced profiles at relatively flat transition angles, and the plunge method, for locally reducing the cross-section at steep transition angles (see Figure 15).
Figure 14 Process Chain to Manufacture Valve Tappets for Large Diesel Motors by Cold Forming. (Courtesy ZI Kaltumformung GmbH, Oberlungwitz, Germany)
Figure 15 Schematic Diagram of Plunge and Feed Swaging.
A sequence as used in practice, from blank to nns finished part, consisting of six operations, is described with the aid of the process chain shown in Figure 16.
Figure 16 Process Chain Illustrated by Part Stages in Car Transmission Shaft Production. (Courtesy FELSS, Königsbach-Stein, Germany)
In the first process step, feed swaging is performed over a mandrel to realize the required inner contour. The blank is moved in the working direction by oscillating tools. The forming work is executed in the intake taper. The produced cross-section is calibrated in the following cylinder section. Feeding is carried out while the tools are out of contact. In the second stage of operation, the required internal serration is formed by means of a profiled mandrel. The semifinished material having been turned through 180° in the third process step, the opposite workpiece flank is formed over the mandrel by feed swaging. To further diminish the cross-section in the marked section, plunge swaging follows as the fourth process step. This is done by opening and closing the dies in a controlled way superimposed upon the actual tool stroke. The workpiece taper angles are steeper than in feed swaging. The required forming work is performed in both the taper and the cylinder.
In the plunge technique, the size of the forming zone is defined by the tools' length. A mandrel shaped as a hexagon is used to realize the target geometry at one workpiece face by plunge swaging (fifth step). In the sixth and final process step, an internal thread is shaped. The obtainable tolerances guaranteeing nns quality at the outer diameter are basically commensurate with the results achieved on precise machine tools by cutting. Depending on material, workpiece dimensions, and deformation ratio, the tolerances range from ±0.01 mm to ±0.1 mm. When the inner diameter is formed over the mandrel, tolerances of ±0.03 mm are obtainable. The surfaces of swaged workpieces are characterized by very low roughness and a high bearing percentage. As a rule, for plunging the roughness is usually Ra ≤ 0.1 µm, whereas in feed swaging Ra is less than 1.0 µm. During plastic deformation, fiber orientation is maintained rather than interrupted as in cutting. Thus, enhanced functionality of the final product is guaranteed. The material that can be saved and the optimization of weight are additional key issues in this forming procedure. The initial mass of bars and tubes necessary for production diminishes because the usual cutting procedure is replaced by a procedure that redistributes the material. For an abundance of workpieces, it is even possible to substitute tubes for the bar material, thus reducing the component mass. In swaging, the material has an almost constant cold work hardening over the entire cross-section. Nevertheless, even at the highest deformation ratios, a residual strain remains that is sufficient for a follow-up forming process. Swaging rotationally symmetric initial shapes offers the following advantages:
Short processing time
High deformation ratios
Manifold producible shapes
Material savings
Favorable fiber orientation
Smooth surfaces
Close tolerances
Environmental friendliness, because a lubricant film is unnecessary
Easy automation
Swaging can be applied to almost all metals, even sintered materials, if the corresponding material is sufficiently formable.
3.3.3.
Orbital Pressing
Orbital pressing is an incremental forming technique for producing demanding parts that may also have asymmetric geometries. The method is particularly suited to nns processing due to its high achievable dimensional accuracy. The principal design of orbital pressing process chains is illustrated in Figure 17.
Figure 17 Process Chain for Orbital Pressing: preform with precise volume (by sawing, shearing, turning, or forming); optional heating (up to 720°C for steel, i.e., semihot forming); surface preparation (graphite, phosphate + soap, or oil); orbital forming; finishing.
An accuracy of about IT 12 is typical for the upper-die region. In the lower die, an accuracy up to IT 8 is achievable. Parts are reproducible at an accuracy of ±0.05 mm over their total length. Roughness values of Ra ≤ 0.3 µm can be realized at functional surfaces. The accuracies as to size and shape are mainly determined by the preform's volumetric accuracy, whereas the obtainable roughness values depend on die surface and friction. The advantages of orbital pressing compared to nonincremental forming techniques such as extrusion are a reduced forming force (roughly by a factor of 10) and higher achievable deformation ratios. As a result of the reduced forming forces, the requirements for die dimensioning are lowered and tool costs diminish. Orbital pressing can be economically employed for the production of small to medium part quantities. For lower dies, tool-life quantities of 6,000 to 40,000 parts are obtainable. The lifetime is about four times higher for upper dies. Concerning the usual machine forces, orbital presses are usually available at 2,000, 4,000, and 6,300 kN. For special applications, manufacturing facilities at higher pressing forces are also available. Generally, workpieces of up to 5 kg maximum weight, 250 mm diameter, and 220 mm maximum blank height are formed of steel types with low carbon content and of nonferrous metals such as copper and aluminum alloys. The example in Figure 18 shows an extremely highly stressed component of a large diesel motor (part weight 1.1 kg; flange diameter 115 mm; flange height 16 mm; total height 55 mm). With the use of the orbital pressing technique, the process chain for this workpiece, originally consisting of turning, preforging, sandblasting, and cold coining of the profile, could be reduced to only two steps. Furthermore, the cavity could be formed at a greater depth and at higher accuracy, thus yielding significantly higher load capacity, reliability, and component life. In this example, the part costs were reduced by about 60% due to the application of orbital forging.
3.4. Semihot-formed Near-Net-Shape Components: Examples of Applications
3.4.1. Semihot Extrusion
In the automotive industry, functional parts in particular are subject to high quality requirements. The successful use of semihot extrusion is demonstrated, for example, by the manufacture of triple-recess hollow shafts used in homokinetic joints in the drive system. The complex inner contours of the shaft have to be produced to high accuracy and surface quality. Producing these parts by cutting is not economically viable due to the high losses of material and the high investment volume required, so the inner contour is shaped completely (net shape) by forming. In forming technology, the required inner-contour tolerance of ±0.03 mm can only be achieved by calibrating, at room temperature, a preform of already high accuracy. However, the material Cf 53, employed because of the necessary induction hardening of the inner contour, is characterized by poor cold formability. For that reason, shaping has to be performed at increased forming temperature, but still with a high demand for accuracy. Shaping can be realized by means of a multistep semihot extrusion procedure carried out within a temperature interval between 820°C and 760°C in only one heating. Semihot extrusion is performed on an automatic transfer press at the required accuracy, at optimum cost, and with scaling avoided. The component is shaped by semihot extrusion starting out from a cylindrical rod section without thermal or chemical pretreatment. The corresponding steps of operation are reducing the peg, upsetting the head, centering the head, and pressing and compacting the cup (see Figures 19 and 20). The advantages most relevant for the calculation of profitability in comparing semihot
Figure 18 Basic Body of a Rotocup for Large Diesel Motors. (Courtesy ZI-Kaltumformung, Oberlungwitz, Germany)
Figure 19 Original Production Stages of a Triple-Recess Hollow Shaft. (Courtesy Schuler Pressen, Göppingen, Germany)
extrusion of shafts with hot extrusion result from the lower material and energy costs as well as the lower postprocessing and investment expenditures.
3.4.2.
Semihot Forging
Forging within the semihot forming temperature interval is usually performed as precision forging in a closed die. With upper and lower dies closed, a punch penetrates into the closed die and pushes the material into the still-free cavity. Bevel gears formed completely, including gear teeth ready for installation (net-shape process), are a typical example of application. Such straight bevel gears (outer diameter 60 mm or 45 mm, module about 3.5 mm) are employed in the differential gears of car driving systems. For tooth thickness, manufacturing tolerances of ±0.05 mm have to be maintained. In forming technology, the required tolerance can only be obtained by calibrating, at room temperature, teeth preformed at high accuracy. The preform's tooth-flank thickness is not to exceed the final dimension by more than about 0.1 mm. The shaping requirements resulting from accuracy, shape complexity, and the solid, alloyed case-hardening steel can only be met by semihot forming. Processing starts with a rod section that is preshaped without the teeth in the first stage (see Figure 21). The upset part has a truncated-cone shape. The actual closed-die forging of the teeth is carried out as a semihot operation in the second stage. The bottom (thickness about 8 mm) is pierced in the last, semihot stage. The cost-efficiency effects of the semihot forming process compared to precision forging carried out at hot temperatures are summarized in Table 4. Today, differential bevel gears of accuracy classes 8 to 10 are still produced by cutting, despite their large quantity. In the longer term, low-cost forming technologies represent the better alternative, even though they require relatively high development costs.
3.5. Hot-formed Near-Net-Shape Components: Examples of Applications
3.5.1. Precision Forging
Which hot-forming technique to select for a given workpiece from a low-cost manufacturing viewpoint depends mainly on the part quantity and the extent to which the forming part is similar to the finished shape. In precision forging, where the forming part can be better approximated to the finished part, the total cost, including tool and material costs as well as costs for the forming and cutting processes, can be decreased despite increasing blank costs. The accuracy values obtained are above the die-forging standards (e.g., in Germany, DIN 7526, forging quality E). As a goal, one process step in finishing by cutting is to be saved (meaning that either grinding or finishing alone is then necessary). In forging, some dimensions of importance for the near-net shape have closer tolerances. The steering swivel shown in Figure 22 is an example of nns manufacturing using precision forging. The fixing points determining the spatial location are fitted to the dimensions of the finished part by additional hot calibrating. Forged parts (hot forming) that make the subsequent cutting operation unnecessary, or whose accuracy need only be enhanced at certain shape elements of the part (IT9 to IT6) by follow-up cold forming, can also be produced by precision forging. Thus, formed components of even cold-pressed parts' accuracy can be made this way. This is especially important if cold forming cannot be realized due to too-high forming forces and tool stresses or too-low formability of the part material. From an economic viewpoint, precision forging is also more advantageous if the surfaces to be generated could otherwise only be produced by relatively expensive cutting techniques.
Figure 20 Semihot Forming Process Stages of a Triple-Recess Hollow Shaft. (From Körner and Knödler 1992)
Figure 21 Production Stages of Axle-Drive Bevel Pinions. (From Körner and Knödler 1992)
3.5.2.
Hot Extrusion
Field spiders are usually made by die forging with burr and subsequent cold calibrating. High pressing forces are required due to the formation of burr. The forged shape approximates the final shape only to a minor extent. For that reason, high forces also have to act in cold calibration. A nns manufacturing variant for field spiders consists of combining hot extrusion and cold forming.
TABLE 4 Cost Efficiency: Hot- and Semihot-Forged Axle-Drive Bevel Pinions in Comparison

                     Hot-Forged Bevel Gear                         Cost Share    Semihot-Forged Bevel Gear                  Cost Share
Material             0.6 kg                                        16%           0.5 kg                                     13%
Manufacturing steps  1 Sawing, chamfering                          8%            1 Sawing                                   6%
                     2 Heating (with inert gas), forming           33%           2 Heating (without inert gas), forming     27%
                     3 B-type graphite annealing (with inert gas)  7%            3 Controlled cooling                       1%
                     4 Cold calibrating                            10%           4 Cold calibrating                         10%
                     5 Turning                                     17%           5 Turning                                  11%
Tool wear                                                          9%                                                      9%
Production costs                                                   100%                                                    77%

From Hirschvogel 1997.
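As a plausibility check, the cost shares in Table 4 should sum to the quoted production-cost totals. A sketch (the dictionary keys are our own shorthand for the table rows):

```python
# Cost shares in percent of the hot-forged reference (= 100%), from Table 4.
hot = {"material": 16, "sawing, chamfering": 8, "heating + forming": 33,
       "graphite annealing": 7, "cold calibrating": 10, "turning": 17,
       "tool wear": 9}
semihot = {"material": 13, "sawing": 6, "heating + forming": 27,
           "controlled cooling": 1, "cold calibrating": 10, "turning": 11,
           "tool wear": 9}

print(sum(hot.values()))      # 100
print(sum(semihot.values()))  # 77
```

Both columns are consistent: the semihot route comes to 77% of the hot-forged production costs, i.e., a saving of about 23%.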
Figure 22 Precision Forging of a Steering Swivel (hot forged, simultaneously deburred, pierced, and hot calibrated; weight 8.8 kg; material 41Cr4). (From Adlof 1995)
The process chain is characterized by the following procedural stages:
1. Shearing a rod section of hot-rolled round steel (unalloyed steel with low carbon content) with a mass tolerance of ±1.5%
2. Heating in an induction heating device (780°C)
3. Hot extrusion on a 6,300-kN knee press, performing the following forming stages (see also Figure 23): upsetting, cross-extrusion, setting of the pressed arms, clipping
4. Cold forming (see Figure 24): backward extrusion of the hub, forming of erected arms, piercing of the hub, calibrating
An automated part-handling system and a special tool-cooling and lubricating system are installed to keep the forming temperatures and the tribological conditions exactly according to the requirements for nns processes. The advantages of hot extrusion/cold calibrating vs. die forging with burr/cold calibrating of field spiders can be summarized as follows:
Figure 23 Hot Extrusion of Field Spiders. (From Körner and Knödler 1992)
Figure 24 Cold Forming to Enhance Accuracy by Calibrating (applied to field spiders). (From Körner and Knödler 1992)
Hot extrusion: pressing forces demand a 6,300-kN press; low heating temperature (780°C); descaling unnecessary; cold calibrating at 3,300 kN.
Die forging with burr: pressing forces demand a 20,000-kN press; high heating temperature (1100°C); descaling required; cold calibrating at 10,000 kN.
In calibrating the hot-extruded part, less forming is required because of the better approximation to the final shape. This excludes the danger of cracks in the field spider.
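The press-force figures in the comparison above imply roughly a threefold reduction in required machine size; the simple ratios of the quoted values:

```python
# Press forces in kN from the comparison above.
hot_extrusion = {"forming": 6_300, "cold calibrating": 3_300}
die_forging   = {"forming": 20_000, "cold calibrating": 10_000}

for step in hot_extrusion:
    ratio = die_forging[step] / hot_extrusion[step]
    print(step, round(ratio, 1))   # forming 3.2, cold calibrating 3.0
```

Since press cost and floor space scale strongly with nominal force, this factor of roughly 3 is a major driver of the profitability of the nns variant.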
3.5.3.
Axial Die Rolling
In axial die rolling, the advantages of die forging and hot rolling are linked. Disk-like parts with or without an internal hole can be manufactured. On the component, surfaces (e.g., clamping surfaces) can be shaped in nns quality, thus enabling later part finishing by cutting in only one clamping. Accuracy values of IT9 to IT11 are obtained because the axial die rolling machines are very stiff and the forming forces relatively low. Furthermore, the desired component shape can be approximated much better because no draft angles are necessary and small radii or sharp-edged contours can be formed. Within the workpiece, a favorable fiber orientation, resulting in reduced distortion after heat treatment, is achieved by burrless forming. The advantages of the nns hot-forming technology using axial die rolling vs. conventional die forging can be shown by a comparison of the manufacturing costs of bevel gears for a car gear unit (see Figure 25). Cost reductions up to 22% as well as mass savings up to 35% are achieved by the nns technology.
3.6. Special Technologies: Manufacturing Examples
3.6.1. Thixoforging
Thixoforming processes make use of the favorable thixotropic material behavior in a semi-liquid/semi-solid state between the solidus and liquidus temperatures. Within this range, material cohesion, which makes the metal still able to be handled as a solid body and inserted into a die mold, is kept at a liquid fraction of 40–60%. This suspension is liquefied under shear stresses during pressing into a die mold. Due to the suspension's high flowability, complex geometries can be filled at very low forces. This represents the main benefit of the technique, because part shapes of intricate geometry (thin-walled components) can be produced at low forming forces. The process stages of thixocasting and thixoforging differ only in their last stage of operation (see Figure 26). With an appropriately produced feedstock material, the strands are cut into billets, heated up quickly, and formed inside a forging or casting tool. A component made of an aluminum wrought alloy by thixoforging is illustrated in Figure 27. The forging has a very complex geometry, but it was reproducible very accurately. The thick-walled component regions, as well as parts with essential differences in wall thickness, are characterized by a homogeneous structure free of voids.
Figure 25 Axial Die Rolling of Car Bevel Gears. (From Eumuco, Leverkusen, Germany)
Industrial experience shows that components with filigree and complex part geometries, unfavorable mass distribution, undercuts, and cross-holes can be produced in one stage of operation in a near-net-shape manner by means of thixoprocesses. Comparison of thixoforming to die casting shows the following results:
20% improvement in cycle times
About 20% increase in tool life
Raw material costs still up to 20% higher
The thixoprocesses are regarded as an interesting option for both the forging industries and foundries. With them, near-net-shape technology is added to the conventional procedures.
Figure 26 Thixoforging and Thixocasting: Process Stages. (Courtesy EFU, Simmerat
h, Germany)
Figure 27 Steering Knuckle Made by Thixoforging. (Courtesy EFU, Simmerath, Germa
ny)
3.6.2. Near-Net-Shape Processes for Prototyping
The ability to create, quickly and at low cost, prototypes whose properties conform fully with the series properties has become increasingly important because the engineering lead time dictated by the market has drastically diminished. Apart from the classical methods of generating prototypes, such as cutting and casting, generic manufacturing techniques are increasingly being used. In cutting or casting, tools, dies, or fixtures are required for fabricating the prototype, whereas prototyping by means of generic techniques is based on joining incremental solid elements. Thus, complex components can immediately be made from computerized data, such as from a CAD file, without comprehensive rework being necessary. In contrast to cutting technologies, where material is removed, in generic manufacturing techniques (rapid prototyping), the material is supplied layer by layer or is transformed from a liquid or powder into a solid state (state transition) by means of a laser beam. The most essential rapid prototyping techniques (also called solid freeform manufacturing or desktop manufacturing) are:

Stereolithography (STL)
Fused deposition modeling (FDM)
Laminated object manufacturing (LOM)
Selective laser sintering (SLS)

These techniques are able to process both metals and nonmetals. As a result of intensive research, disadvantages of these techniques, such as long fabrication times, insufficient accuracy, and high cost, have been widely eliminated. Today, maximum part dimension is limited to less than 1000 mm. Near-net-shape processes for prototyping and small-batch production by means of rapid prototyping techniques can be characterized by the process chains listed in Figure 28. In principle, prototypes, or patterns that in turn enable the manufacture of prototypes and small batches in combination with casting processes, can be produced directly by means of rapid prototyping techniques. Rapid prototyping can also be applied to manufacturing primary shaping and forming tools to be used for the fabrication of prototypes.
4. NEAR-NET-SHAPE PRODUCTION: DEVELOPMENT TRENDS
Near-net-shape production will continue to expand in the directions of application described in this chapter. An important question for decision making is how far the objectives of industrial production and the common global interests in reduction of costs, shorter process chains and production cycles, and environment- and resource-protecting manufacturing can be made compatible with each other and whether these objectives are feasible in an optimal manner. For near-net-shape process planning, attention has to be paid to the state of the art of both chipless shaping and cutting techniques, because a reasonable interface between chipless and cutting manufacturing is highly significant. Thus, for instance, in the case of an nns strategy consisting of chipless shaping, heat treatment, and grinding, grinding could in the future be substituted to a greater extent by
Figure 28 Near-Net-Shape Process Chain for Producing Prototypes and Small Series by Means of Generic Manufacturing Techniques (RP Techniques). The chains run from a 3D CAD model through CAD data processing, building process preparation, RP building, and postprocessing, yielding either a part prototype directly, a primary shaping or forming tool used for primary shaping or forming, or a master used for casting toolmaking and a casting process; each route ends in a prototype or small series.
machining of hard materials (turning, milling). As a result, reduced manufacturing times and lower energy consumption could be achieved, and the requirements for chipless shaping could possibly be diminished. In cases where the final contours can be economically produced to the required accuracy using chipless shaping, the change to net-shape manufacturing is a useful development goal.
Overview

Handbook of Industrial Engineering: Technology and Operations Management. Edited by Gavriel Salvendy.

Regulations designed to protect and enhance the environment play a central role in the design and management of industrial processes and should be incorporated into the initial process design and not as an afterthought. This section provides a brief overview of some major environmental laws and organizations.
2.2. Environmental Protection Act
Prior to the Environmental Protection Act, environmental regulations were divided along media lines (air, water, earth). In 1970, President Nixon submitted to Congress a proposal to consolidate many of the environmental duties previously administered by agencies including the Federal Water Quality Administration, the National Air Pollution Control Administration, the Bureau of Solid Waste Management, the Bureau of Water Hygiene, and the Bureau of Radiological Health; certain functions with respect to pesticides carried out by the Food and Drug Administration; certain functions of the Council on Environmental Quality; certain functions of the Atomic Energy Commission and the Federal Radiation Council; and certain functions of the Agricultural Research Service (Nixon 1970a). President Nixon recognized that some pollutants exist in all forms of media and that successful administration of pollution-control measures required the cooperation of many of the federal bureaus and councils. A more effective management method would be to recognize pollutants, observe their transport and transformation through each medium, observe how they interact with other pollutants, note the total presence of the pollutant and its effect on living and nonliving entities, and determine the most efficient mitigation process. This multimedia approach required the creation of a new agency to assume the duties of many existing agencies, thus eliminating potential miscommunication and interdepartmental biases that could hinder environmental protection. Thus, the President recommended the establishment of an integrated federal agency that ultimately came to be called the Environmental Protection Agency (EPA). The roles and functions of the EPA were to develop and enforce national standards for the protection of the environment; research the effects of pollutants, their concentrations in the environment, and ways of controlling them; provide funding for research and technical assistance to institutions for pollutant research; and propose new legislation to the President for protection of the environment (Nixon 1970b). In the words of William D. Ruckelshaus, EPA's first administrator:
EPA is an independent agency. It has no obligation to promote agriculture or commerce; only the critical obligation to protect and enhance the environment. It does not have a narrow charter to deal with only one aspect of a deteriorating environment; rather it has a broad responsibility for research, standard-setting, monitoring and enforcement with regard to five environmental hazards: air and water pollution, solid waste disposal, radiation, and pesticides. EPA represents a coordinated approach to each of these problems, guaranteeing that as we deal with one difficulty we do not aggravate others. (Ruckelshaus 1970)
The EPA has instituted numerous programs and made significant changes in the way businesses operate in the United States. A brief summary of the EPA's milestones (Table 1) shows the many ways the course of business and the environment have been altered since the agency's inception in 1970.
2.3. Clean Air Acts
2.3.1. Air Pollution Control Concept in the United States
Air pollution control in the United States is said to be a "command and control" regulatory approach to achieving clean air. That is, regulations are promulgated at the federal level, and via state implementation plans (SIPs) at the state level, and air pollution sources are required ("commanded") to comply. These regulations have been set up to operate on the air we breathe as well as on the sources that produce the air pollution. The regulations established to control the air we breathe are called National Ambient Air Quality Standards (NAAQSs). Their engineering units are concentration based, that is, micrograms-pollutant per cubic meter of air (µg/m³) or parts per million by volume (ppmv). These NAAQSs also have a time-averaging period associated with them, such as a 24-hour or an annual averaging period. Additionally, the NAAQSs have primary standards and secondary standards associated with them. The primary standards are for the protection of human health, and the secondary standards are for the protection of things. For example, the primary 24-hour average standard for sulfur dioxide is 365 µg/m³ and the secondary standard is a 3-hour average standard of 1300 µg/m³. The 365 µg/m³ standard protects humans from a high-level, short-term dose of sulfur dioxide, while the 1300 µg/m³ standard could protect a melon crop from a high-level, short-term dose of sulfur dioxide.
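The primary/secondary distinction above can be sketched as a simple screening calculation. The following is an illustrative sketch only, not a regulatory compliance tool; the function names and the rolling-average logic are assumptions made for this example.

```python
# Screening check of monitored SO2 concentrations against the 24-hour
# primary (365 ug/m3) and 3-hour secondary (1300 ug/m3) NAAQS quoted above.

SO2_PRIMARY_24H = 365.0    # ug/m3, 24-hour average (protects human health)
SO2_SECONDARY_3H = 1300.0  # ug/m3, 3-hour average (protects welfare, e.g. crops)

def worst_rolling_average(hourly, window):
    """Return the highest running mean over `window` consecutive hours."""
    if len(hourly) < window:
        raise ValueError("not enough hourly readings for this averaging period")
    return max(sum(hourly[i:i + window]) / window
               for i in range(len(hourly) - window + 1))

def check_so2(hourly_ug_m3):
    """Flag exceedances of the 24-hour primary and 3-hour secondary standards."""
    worst_24h = worst_rolling_average(hourly_ug_m3, 24)
    worst_3h = worst_rolling_average(hourly_ug_m3, 3)
    return {
        "worst_24h": worst_24h,
        "exceeds_primary": worst_24h > SO2_PRIMARY_24H,
        "worst_3h": worst_3h,
        "exceeds_secondary": worst_3h > SO2_SECONDARY_3H,
    }

# Example: a day that holds steady at 100 ug/m3 meets both standards.
result = check_so2([100.0] * 24)
```

The same structure extends to any pollutant in Table 2 by swapping in its concentration limit and averaging period.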
Table 2 contains the NAAQSs for the six EPA criteria pollutants. The other thrust of the air pollution regulations applies to the sources of the pollutants. These regulations are emission standards called the New Source Performance Standards (NSPSs). The sources include stationary sources such as power plants and industrial manufacturing operations as well as mobile sources such as automobiles, trucks, and aircraft. These regulations are mass-flow-rate based, that is, grams-pollutant/hr or lb-pollutant/10^6 Btu. Pollutant-specific emission limits were
ENVIRONMENTAL ENGINEERING: REGULATION AND COMPLIANCE

TABLE 1 Timeline of Environmental Regulation, 1970-1995
Agency established December 2, 1970. Clean Air Act Amendments set national health-based standards.
Ban on use of lead-containing interior paints in residences built or renovated by the federal government.
Ban on the use of DDT. Commitment to a national network of sewage-treatment plants to provide fishable and swimmable waterways.
United States and Canada sign the International Great Lakes Water Quality Agreement. Phase-out of lead-containing gasoline begins. First industrial wastewater discharge permit issued.
New Safe Drinking Water Act establishes health-based standards for water treatment.
Standards limiting industrial water pollution established. Fuel economy standards allow consumers to include fuel efficiency in their purchasing considerations. EPA emission standards require catalytic converters to be installed on automobiles.
Resource Conservation and Recovery Act established to track hazardous waste from "cradle to grave." Toxic Substances Control Act developed to reduce public exposure to pollutants that pose an unreasonable threat of injury; included in this act is a ban on polychlorinated biphenyls (PCBs).
Air quality in pristine natural areas is addressed by Clean Air Act Amendments.
Chlorofluorocarbons (CFCs) banned for use as a propellant in most aerosol cans; CFCs are implicated as ozone-depleting chemicals.
Dioxin is indicated as a carcinogen. Two herbicides containing dioxins banned.
Under the new Superfund law, a nationwide program for the remediation of hazardous waste sites is established.
Amendments to the Resource Conservation and Recovery Act (RCRA) enacted to prevent contamination from leaking underground storage tanks (USTs) and landfills and to require treatment of hazardous wastes prior to land disposal.
EPA joins international movement calling for a worldwide ban on the use of ozone-depleting chemicals (ODCs).
A major toxic chemical release in Bhopal, India, stimulates the passage of the Community Right to Know Law, requiring that individuals in possession of chemicals maintain records of location, quantity, use, and any releases of chemicals. The EPA is charged with maintaining a public database of these records and begins assisting communities in the development of Emergency Response Plans.
The United States and 23 other countries sign the Montreal Protocol to phase out the production of chlorofluorocarbons (CFCs).
The new Toxics Release Inventory provides the public with the location and character of toxic releases in the United States.
EPA assesses a $15 million fine to a single company for PCB contamination at 89 sites.
New Clean Air Act Amendments require states to demonstrate continuing progress toward meeting national air quality standards.
EPA bans dumping of sewage sludge into oceans and coastal waters.
EPA's Common Sense Initiative announces a shift from pollutant-by-pollutant regulation to industry-by-industry regulation. Grants initiated to revitalize inner-city brownfields.
Clinton administration nearly doubles the list of toxic chemicals that must be reported under the Community Right to Know laws.
Two thirds of metropolitan areas that did not meet air quality standards in 1990 now do meet them, including San Francisco and Detroit. EPA requires municipal incinerators to reduce toxic emissions by 90%.
From EPA OCEPA 1995.
assigned to both existing sources and proposed new sources under the 1970 Clean Air Act. The existing-source emission limits were less stringent than the new-source limits because new sources had the opportunity to incorporate the latest technology into the process or manufacturing design. To determine what emission standards apply to your new proposed operation, refer to the Code of Federal Regulations (CFR) Part 60, which is also Title I under the 1990 Clean Air Act Amend-
TABLE 2 National Ambient Air Quality Standards

Criteria Pollutant        Averaging Period   Primary NAAQS (µg/m³)   Secondary NAAQS (µg/m³)
PM-10                     Annual             50                      50
                          24 hour            150                     150
Sulfur dioxide (SO2)      Annual             80                      –
                          24 hour            365                     –
                          3 hour             –                       1,300
Nitrogen dioxide (NO2)    Annual             100                     100
Ozone                     1 hour             235                     235
Carbon monoxide (CO)      8 hour             10,000                  –
Lead                      Quarterly          1.5                     1.5
ments. The NSPSs are shown for a large number of industrial classifications. Look for your type of industry and you will find the applicable new-source emission standards.
2.3.2. The 1970 Clean Air Act
Control of outdoor air pollution in the United States certainly grew its regulatory roots prior to the 1970 Clean Air Act. However, for the practicing industrial engineer, the passage of the 1970 Clean Air Act marks the real beginning of contemporary outdoor, or ambient, air pollution control. Therefore, this section will focus on those aspects of the Clean Air Act of 1970 and the subsequent amendments of 1977 and 1990 that will be most meaningful to the industrial engineer. The 1970 Clean Air Act was the first piece of air pollution legislation in this country that had any real teeth. The establishment of an autonomous federal agency that ultimately came to be known as the EPA was a result of the 1970 Act. Prior to this time, matters relating to outdoor air pollution had their home in the United States Public Health Service. Furthermore, it is historically interesting to note that the EPA went through a number of name changes in the months following promulgation of the 1970 Clean Air Act before the current name was chosen.
2.3.3. The 1977 Clean Air Act Amendments
The 1970 Clean Air Act had a deadline of July 1, 1975, by which all counties in the United States were to comply with concentration-based standards (NAAQSs) for specified air pollutants. However, when July 1, 1975, arrived, very few counties in industrialized sections of the country met the standards. As a consequence, much debate took place in Congress, and the result was the promulgation of the 1977 Clean Air Act Amendments. In some quarters, this amendment was viewed as the strongest piece of land-use legislation the United States had ever seen. The legislation required each state to evaluate the air sheds in each of its counties and designate the counties as either attainment or non-attainment counties for specified air pollutants, according to whether the counties attained the National Ambient Air Quality Standards (NAAQSs) for those pollutants. If the state had insufficient data, the county was designated unclassified. A company in a non-attainment county had to take certain steps to obtain an air permit, which ultimately added a significant cost to a manufactured product or the end result of a process. Under this same amendment were provisions for protecting and enhancing the nation's air resources. These provisions, called the Prevention of Significant Deterioration (PSD) standards, prevented an industrial plant from locating in a pristine air shed and polluting that air shed up to the stated ambient air standard. They classified each state's counties as to their economic status and air-polluting potential and established ceilings and increments more stringent than the ambient air standards promulgated under the 1970 Clean Air Act. No longer could a company relocate to a clean-air region for the purpose of polluting up to the ambient air standard. In essence, the playing field was now level.
2.3.4. The 1990 Clean Air Act Amendments
Due to a number of shortcomings in the existing air pollution regulations at the end of the 1980s, the Clean Air Act was again amended in 1990. The 1990 Clean Air Act Amendments contain a number of provisions, or titles, as they are referred to. These titles, some of which carry over existing regulations from the 1970 and 1977 amendments, are listed below, along with a general statement of what they cover:

Title I: National Ambient Air Quality Standards (NAAQSs), Clean Air Act sections 101-193; 40 CFR Parts 50-53, 55, 58, 65, 81, and 93. Establishes concentration-based ambient air standards. New Source Performance Standards (NSPSs), Clean Air Act section 111; 40 CFR Part 60. Establishes emission limitations for specific categories of new or modified sources.
Title II: Mobile Sources Program, Clean Air Act sections 202-250; 40 CFR Parts 80 and 85-88. Covers tailpipe emission standards for aircraft, autos, and trucks, including fuel and fuel additives, clean fuel vehicles, and hazardous air pollutant (HAP) research for mobile sources.
Title III: National Emission Standards for Hazardous Air Pollutants (NESHAPs), Clean Air Act section 112; 40 CFR Parts 61, 63, and 68. Includes an accidental release program, the list of HAPs and sources, residual risk standards, and maximum achievable control technology (MACT) standards.
Title IV: Acid Rain Program, Clean Air Act sections 401-416; 40 CFR Parts 72-78. Acid deposition control via sulfur and nitrogen oxide controls on coal- and oil-burning electric utility boilers.
Title V: Operating Permit Program, Clean Air Act sections 501-507; 40 CFR Parts 70-71. Requires operating permits on all sources covered under the Clean Air Act.
Title VI: Stratospheric Ozone Protection Program, Clean Air Act sections 601-618; 40 CFR Part 82. Contains a list of ozone-depleting substances; bans certain freons and requires freon recycling.
Title VII: Enforcement Provisions, Clean Air Act sections 113, 114, 303, 304, 306, and 307. Compliance certification, enhanced monitoring, record keeping and reporting, $25,000/day fines, civil and criminal penalties, entry/inspector provisions, citizen lawsuits and awards up to $10,000, and public access to records.
2.4. Worker Right to Know
The Hazard Communication Standard of the Occupational Safety and Health Act requires that all employees be informed of the hazards associated with the chemicals they are exposed to or could be accidentally exposed to. In addition to chemical hazards, OSHA requires that workers be trained to recognize many other types of hazards, including explosion and fire, oxygen deficiency (in confined spaces, for example), ionizing radiation, biological hazards, safety hazards, electrical hazards, heat stress, cold exposure, and noise. Compliance with Worker Right to Know laws generally requires a written plan that explains how the Hazard Communication Standard will be executed. The only operations where a written Hazard Communication Plan is not required are laboratories and handling facilities where workers contact only sealed chemical containers. These facilities must still provide hazard training to employees, retain the original labeling on the shipping containers, and make material safety data sheets (MSDSs) available to all employees. In general, no employees should be working with any chemical or equipment with which they are not familiar. OSHA statistics indicate that failure to comply with the Hazard Communication Standard is its most-cited violation (OSHA 1999b). The Hazard Communication Standard requires each facility to conduct a hazard assessment for each chemical in the workplace; maintain an inventory of chemicals in the workplace; retain an MSDS for each chemical in the workplace; properly label each chemical according to a uniform labeling policy; train each employee to understand the MSDSs, product labels, and the Hazard Communication Standard; and develop a written program that explains how the Hazard Communication Standard is to be implemented at the facility.
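The per-chemical duties listed above lend themselves to a simple tracking structure. The following is a hypothetical sketch; the field names and record layout are illustrative assumptions, not taken from OSHA.

```python
# Track the Hazard Communication Standard duties for each workplace chemical
# and report which duties remain unsatisfied. All names here are illustrative.

HAZCOM_DUTIES = ("hazard_assessment", "on_inventory", "msds_on_file",
                 "labeled", "training_done")

def missing_duties(chemical_record):
    """Return the HazCom duties not yet satisfied for one chemical."""
    return [d for d in HAZCOM_DUTIES if not chemical_record.get(d, False)]

# Example record for one solvent: everything done except employee training.
acetone = {"hazard_assessment": True, "on_inventory": True,
           "msds_on_file": True, "labeled": True, "training_done": False}
gaps = missing_duties(acetone)  # ["training_done"]
```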
2.5. Resource Conservation and Recovery Act
The Resource Conservation and Recovery Act (RCRA) of 1976, an amendment to the Solid Waste Disposal Act of 1965, and the Hazardous and Solid Waste Amendments of 1984, which expanded RCRA, were enacted to protect human health and the environment, reduce waste and energy use, and reduce or eliminate hazardous waste as rapidly as possible. Three programs, addressing hazardous waste, solid waste, and underground storage tanks, are included in RCRA. Subtitle C requires the tracking of hazardous waste from "cradle to grave," which refers to the requirement that hazardous waste be documented by a paper trail from the generator to final disposal,
whether it be incineration, treatment, landfill, or some combination of processes. Subtitle D establishes criteria for state solid waste management plans. Funding and technical assistance are also provided for recycling and source-reduction implementation and research. Subtitle I presents rules for the control and reduction of pollution from underground storage tanks (USTs). A RCRA hazardous waste is any substance that meets physical characteristics such as ignitability, corrosivity, and reactivity, or it may be one of about 500 specific listed hazardous wastes. Hazardous wastes may be in any physical form: liquid, semisolid, or solid. Generators and transporters of hazardous waste must have federally assigned identification numbers and abide by the regulations pertinent to their wastes. Parties treating and disposing of hazardous wastes must meet stringent operating guidelines and be permitted for the treatment and disposal technologies employed. RCRA hazardous waste regulations apply to any commercial, federal, state, or local entity that creates, handles, or transports hazardous waste. A RCRA solid waste is any sludge, garbage, or waste product from a water treatment plant, wastewater treatment plant, or air pollution control facility. It also includes any discarded material from industrial, mining, commercial, agricultural, and community activities in contained gaseous, liquid, sludge, or solid form. RCRA solid waste regulations pertain to owners and operators of municipal solid waste landfills (EPA OSW 1999a,b).
2.6. Hazardous Materials Transportation Act
The Hazardous Materials Transportation Act of 1975 (HMTA) and the Hazardous Materials Transportation Uniform Safety Act of 1990 were promulgated to protect the public from risks associated with the movement of potentially dangerous materials on roads, in the air, and on waterways. They do not pertain to the movement of materials within a facility. Anyone who transports or causes to be transported a hazardous material is subject to these regulations, as is anyone associated with the production or modification of containers for hazardous materials. Enforcement of the HMTA is delegated to the Federal Highway Administration, the Federal Railroad Administration, the Federal Aviation Administration, and the Research and Special Programs Administration (for enforcement of packaging rules). The regulations of the HMTA are divided into four general areas: procedures and policies, labeling and hazard communication, packaging requirements, and operational rules. Proper labeling and hazard communication require the use of standard hazard codes, labeling, shipping papers, and placarding. Hazardous materials must be packaged in containers compatible with the material being shipped and of sufficient strength to prevent leaks and spills during normal transport (DOE OEPA 1996).
2.7. Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) and Superfund Amendments and Reauthorization Act (SARA)
The Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) was enacted in 1980 to provide a federal Superfund for the cleanup of abandoned hazardous waste sites. Funding for the Superfund was provided through fines levied or lawsuits won against potentially responsible parties (PRPs). PRPs are those parties having operated at or been affiliated with the hazardous waste site. Affiliation is not limited to having physically been on the site operating a process intrinsic to the hazardous waste creation; it can include parties that provided transportation to the site or customers of the operation at the hazardous waste site that transferred raw materials to the site for processing (EPA OPA 1999a). The Superfund Amendments and Reauthorization Act (SARA) of 1986 continued cleanup authority under CERCLA and added enforcement authority to CERCLA (EPA OPA 1999b). Within SARA was the Emergency Planning and Community Right-to-Know Act (EPCRA), also known as SARA Title III. EPCRA established the framework fo

ndard being reduced over the years. The current NAAQS standard for fine particulate matter is PM-10, which means particles less than or equal to 10 µm. But a PM-2.5 NAAQS standard has been proposed by the EPA and awaits promulgation at the time of writing.
3.2. Permits
3.2.1. Air Permits
One of the provisions of the 1970 Clean Air Act initiated the requirement for air pollution sources to obtain a permit for construction of the source and a permit to operate it. The construction permit application must be completed prior to the initiation of construction of any air pollution source. Failure to do so could result in a $25,000-per-day fine. In some states, initiation of construction was interpreted as issuance of a purchase order for a piece of equipment; in others, as groundbreaking for the new construction. Therefore, to ensure compliance with the air permit requirement, this author suggests that completion of the air permit be given first priority in any project involving air emissions into the atmosphere. The best practice is to have the state-approved permit in hand before beginning construction. Most states have their permit forms on the Internet, and a hard copy can be downloaded. Alternatively, the permit forms can be filled out electronically and submitted.
Recently, many states have offered the option of allowing an air pollution source to begin construction prior to obtaining the approved construction permit. However, the required paperwork is not trivial, and it may still be easier to fill out the permit form unless extenuating circumstances demand that construction begin before the approved permit is in hand. The caveat is that the state could disapprove the construction (which the company has already begun) if, after reviewing the permit application, it disagrees with the engineering approach taken to control the air pollution.
3.2.2. Water Permits
Any person discharging a pollutant from a point source must have a National Poll
ution Discharge Elimination System (NPDES) permit. This applies to persons disch
arging both to a public treatment works or directly to a receiving water body. T
hese permits will limit the type of pollutants that can be discharged and state
the type of monitoring and reporting requirements and other provisions to preven
t damage to the receiving body or treatment facility. In most states, the state department of environmental quality is responsible for issuing permits. In states that have not received approval to issue NPDES permits, you should contact the regional EPA office. Wastewater permits regulate the discharge of many pollutants, which can be broken down into the general categories of conventional, toxic, and nonconventional. Conventional pollutants are those contained in typical sanitary waste such as human waste, sink disposal waste, and bathwater. Conventional wastes include fecal coliform and oil and grease. Fecal coliform is present in the digestive tracts of mammals, and its presence is commonly used as a surrogate to detect the presence of pathogenic organisms. Oils and greases such as waxes and hydrocarbons can produce sludges that are difficult and thus costly to treat. Toxic pollutants are typically subdivided into organics and metals. Organic toxins include herbicides, pesticides, polychlorinated biphenyls, and dioxins. Nonconventional pollutants include nutrients such as phosphorus and nitrogen, both of which can contribute to algal blooms in receiving waters.

The EPA maintains many databases of environmental regulatory data. Among these is the Permit Compliance System (PCS) database (http://www.epa.gov/enviro/html/pcs/pcs_query_java.html). This database provides many of the details governing a facility's wastewater discharges. Specific limitations typically depend on the classification and flow of the stream to which a facility is discharging. Surface waters are classified as to their intended use (recreation, water supply, fishing, etc.), and then the relevant conditions to support those uses must be maintained. Discharge limitations are then set so that pollution will not exceed these criteria. Total suspended solids, pH, temperature, flow, and oils and grease are typical measures that must be reported. In the case of secondary sewage treatment, the minimum standards for BOD5, suspended solids, and pH over a 30-day averaging period are 30 mg/L, 30 mg/L, and a pH range of 6 to 9, respectively. In practice, these limits will be more stringent, and in many cases there will be separate conditions for summer and winter as well as case-by-case limitations.
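As an illustration of how monitoring data might be screened against such limits, the sketch below compares 30-day averages to the secondary-treatment minimums quoted above. The limit values come from the text; the function name and the sample data are hypothetical, and a real permit would impose stricter, site-specific conditions.

```python
# Hypothetical sketch: screening 30-day average monitoring data against the
# secondary-treatment standards quoted in the text (BOD5 30 mg/L, suspended
# solids 30 mg/L, pH within 6-9).  Names and sample values are illustrative.

SECONDARY_LIMITS = {"bod5_mg_per_L": 30.0, "tss_mg_per_L": 30.0}
PH_RANGE = (6.0, 9.0)

def check_discharge(averages):
    """Return the list of parameters violated in one 30-day averaging period."""
    violations = []
    for param, limit in SECONDARY_LIMITS.items():
        if averages[param] > limit:
            violations.append(param)
    if not PH_RANGE[0] <= averages["ph"] <= PH_RANGE[1]:
        violations.append("ph")
    return violations

sample = {"bod5_mg_per_L": 24.5, "tss_mg_per_L": 31.2, "ph": 7.1}
print(check_discharge(sample))  # ['tss_mg_per_L']
```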
4. ESTIMATING PLANT-WIDE EMISSIONS

4.1. Overview
A part of the overall air pollution requirements is an annual emissions inventory to be submitted to the state in which the source resides. This inventory identifies and quantifies all significant atmospheric discharges from the respective industrial plant. Most states now have this inventory in electronic form and require electronic submission. The Indiana Department of Environmental Management uses a commercial package called i-STEPS, an automated tool for storing, reporting, and managing air emissions data. i-STEPS facilitates data compilation for pollution management and reporting emissions data to government agencies.*
4.2. Estimating Methods

4.2.1. Mass Balance
From an engineering viewpoint, the most direct way to estimate pollution emissions from an industrial plant is by mass balance. The concept is mass in = mass out; that is, everything the purchasing department buys and is delivered to the plant must somehow leave the plant, whether within the manufactured product; as solid waste to a landfill; as air emissions through a stack or vent; or as liquid waste either to an on-site treatment plant or to the sewer and the municipal wastewater treatment plant.
* i-STEPS Environmental Software is available from Pacific Environmental Services, Inc., 5001 South Miami Boulevard, Suite 300, P.O. Box 12077, Research Triangle Park, NC 27709-2077, www.i-steps.com.
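The mass in = mass out idea can be sketched as a simple residual calculation: whatever mass cannot be accounted for in the product or in solid and liquid waste streams is attributed to air emissions. The function and all figures below are hypothetical illustrations, not part of any regulatory tool.

```python
# Minimal mass-balance sketch: everything purchased and delivered to the plant
# must leave it in the product or as solid, air, or liquid waste.  The stream
# names and quantities here are hypothetical.

def unaccounted_mass(mass_in, product, solid_waste, liquid_waste):
    """Air emissions estimated as the residual of the mass balance (same units)."""
    return mass_in - (product + solid_waste + liquid_waste)

# Example: 1,000 kg of solvent purchased; none leaves in the product,
# 120 kg is drummed as solid waste, 640 kg goes out as liquid waste.
air_emissions = unaccounted_mass(1000.0, product=0.0,
                                 solid_waste=120.0, liquid_waste=640.0)
print(air_emissions)  # 240.0 kg attributed to air emissions
```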
Uncontrolled Emission Factors for Fish Canning and Byproduct Manufacture(a)
EMISSION FACTOR RATING: C

                                           Particulate       Trimethylamine [(CH3)3N]   Hydrogen Sulfide (H2S)
  Process                                  kg/Mg    lb/ton   kg/Mg       lb/ton         kg/Mg       lb/ton
  Cookers, canning (SCC 3-02-012-04)       Neg      Neg      (c)         (c)            (c)         (c)
  Cookers, scrap:
    Fresh fish (SCC 3-02-012-01)           Neg      Neg      0.15(c)     0.3(c)         0.005(c)    0.01(c)
    Stale fish (SCC 3-02-012-02)           Neg      Neg      1.75(c)     3.5(c)         0.10(c)     0.2(c)
  Steam tube dryer (SCC 3-02-012-05)       2.5      5        (b)         (b)            (b)         (b)
  Direct-fired dryer (SCC 3-02-012-06)     4        8        (b)         (b)            (b)         (b)

(a) From Prokop (1992). Factors are in terms of raw fish processed. SCC = Source Classification Code. Neg = negligible.
(b) Emissions suspected, but data are not available for quantification.
(c) Summer (1963).
598
TECHNOLOGY
Emissions from fish cookers can be estimated without costly measurements. The expected accuracy of the estimate is indicated at the top of the table by an emission factor rating of C. This means that on a scale of A to E, with A being the relative best estimate, the C rating is mediocre. It is important to refer to the references associated with each emission factor so that a value judgment can be made when using the factor.
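The arithmetic behind an emission-factor estimate is simply factor times activity. The sketch below uses factors from the fish-cooker table above (rating C); the dictionary keys, function name, and throughput figure are illustrative assumptions.

```python
# Sketch of an AP-42-style emission-factor calculation: emissions = factor x
# activity.  The factors below are taken from the fish-processing table above
# (emission factor rating C); the throughput figure is hypothetical.

FACTORS_KG_PER_MG = {            # kg of pollutant per Mg of raw fish processed
    "steam_tube_dryer_particulate": 2.5,
    "direct_fired_dryer_particulate": 4.0,
    "scrap_cooker_fresh_fish_tma": 0.15,
}

def estimate_emissions(process, throughput_mg):
    """Uncontrolled emissions (kg) for a given raw-fish throughput (Mg)."""
    return FACTORS_KG_PER_MG[process] * throughput_mg

# 40 Mg of raw fish through the steam tube dryer:
print(estimate_emissions("steam_tube_dryer_particulate", 40.0))  # 100.0 kg
```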
5. TOTAL-ENCLOSURE CONCEPT FOR FUGITIVE AIR EMISSIONS

Most if not all industrial operations result in the generation of air pollution emissions to the atmosphere. The emissions may be partially captured by a ventilation hood and be taken to an oven, incinerator, baghouse, or other air pollution control device. The air pollutants not captured by the ventilation hood escape into the workplace and are referred to as fugitive emissions. These fugitive emissions eventually reach the outdoor atmosphere through roof ventilation fans, open windows, doors, or exfiltration through the walls of the building structure itself. Ultimately, all the air pollutants generated by the industrial process reach the outdoor atmosphere. For this reason, the EPA and state regulatory agencies require an industrial process that generates air pollutants to be mass-balance tested prior to approving an air pollution operating permit. In other words, either the fugitive emissions must be quantified and figured into the overall destruction and removal efficiency (DRE) of the air pollution controls, or it must be demonstrated that the ventilation hood(s) form a 100% permanent total enclosure of the pollutants. With 100% permanent total enclosure demonstrated, it is known that all air pollutants generated by the process are captured by the hood(s) or enclosure and taken to the as-designed air pollution control device, whether it be an integral oven, external incinerator, baghouse filter, electrostatic precipitator, or some other tail-gas cleanup device.
5.1. Criteria for 100% Permanent Total Enclosure of Air Pollutants

The United States Environmental Protection Agency has developed a criterion for determining 100% total enclosure for an industrial operation, EPA Method 204 (EPA M204 2000). It requires that a series of criteria be met in order for a ventilation-hooded industrial process to qualify as a 100% permanent total enclosure. These criteria are:
1. The total area of all natural draft openings (NDOs) must be less than 5% of the total surface area of the enclosure.
2. The pollutant source must be at least four equivalent diameters away from any natural draft opening (NDO). The equivalent diameter is defined as 2(LW)/(L + W), where L is the length of the slot and W is the width.
3. The static pressure just inside each of the NDOs must be no less than a negative pressure of 0.007 in. of water.
4. The direction of flow must be into the enclosure, as demonstrated by a device such as a smoke tube.
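The four criteria above can be screened numerically. The sketch below encodes the thresholds stated in the text (5% NDO area, four equivalent diameters, 0.007 in. of water negative pressure); the function signature and the single-NDO simplification are assumptions for illustration, and the flow-direction criterion is taken as a measured boolean (e.g., from a smoke-tube test).

```python
# Screening sketch for the EPA Method 204 criteria listed above.  The numeric
# thresholds come from the text; the data layout (one rectangular NDO) is a
# simplifying assumption for illustration only.

def equivalent_diameter(length, width):
    """Equivalent diameter of a rectangular slot: 2(LW) / (L + W)."""
    return 2.0 * length * width / (length + width)

def is_total_enclosure(ndo_area, enclosure_area, source_to_ndo_distance,
                       ndo_length, ndo_width, static_pressure_in_wc,
                       flow_into_enclosure):
    d_eq = equivalent_diameter(ndo_length, ndo_width)
    return (ndo_area < 0.05 * enclosure_area                 # criterion 1
            and source_to_ndo_distance >= 4.0 * d_eq         # criterion 2
            and static_pressure_in_wc <= -0.007              # criterion 3
            and flow_into_enclosure)                         # criterion 4

# 2 m^2 of NDOs on a 100 m^2 enclosure, source 10 m from a 2 m x 1 m slot,
# -0.010 in. w.c. inside the opening, smoke tube shows inward flow:
print(is_total_enclosure(2.0, 100.0, 10.0, 2.0, 1.0, -0.010, True))  # True
```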
6. GREEN ENGINEERING

While many of the earlier legislative efforts were oriented toward end-of-pipe technologies, where pollution was captured and disposed of after being generated, the focus is now shifting to legislation that favors recycling, reuse, and reduction of waste generation. The problem with end-of-pipe technologies is that frequently the waste product, while no longer in the air, water, or earth, must still be disposed of in some manner. If a product is designed from its inception with the goal of reducing or eliminating waste not only during the production phase but also over its total life cycle, from product delivery to salvage, resources can be greatly conserved. This philosophy results in the design concept of cradle to reincarnation rather than cradle to grave, because no well-designed product is ever disposed of; it is recycled (Graedel et al. 1995). Many phrases have been adopted into the expanding lexicon of green engineering. Total cost assessment (TCA), design for the environment (DFE), design for recycling (DFR), and life-cycle assessment are some of the many phrases that summarize efforts to look beyond the traditional design scope when designing and manufacturing a product.
6.1. Product Design

There is a threshold for the economic feasibility of recycling. If only a small portion of a product is recyclable, or if great effort is required to extract the recyclable materials, then it may not be economically feasible to recycle a product. If, however, the product is designed to be largely recyclable and in a manner that facilitates easy removal, then reuse and recycling will be economically reasonable. Consider the personal computer. The rapid rate at which computing power has expanded has resulted in a proportionate turnover in computers in the workplace. However, no such improvement in computer externals has accompanied that in computer internals, so last year's computer case could in theory hold this year's computer processor, motherboard, memory, and so on.
ning with Plastics: Considering Part Recyclability and the Use of Recycled Materials," AT&T Technical Journal, November-December, pp. 54-59.
DOE OEPA (1996), OEPA Environmental Law Summary: Hazardous Materials Transportation Act, Department of Energy, Office of Environmental Policy Analysis, http://tis-nt.eh.doe.gov/oepa/law_sum/HMTA.HTM.
Dickinson, D. A., Draper, C. W., Saminathan, M., Sohn, J. E., and Williams, G. (1995), "Green Product Manufacturing," AT&T Technical Journal, November-December, pp. 26-34.
EPA AP-42 (1995), AP-42 Compilation of Air Pollutant Emission Factors, 5th Ed., U.S. Environmental Protection Agency, Research Triangle Park, NC (U.S. GPO (202) 512-1800, stock no. 055-000-00500-1, $56.00; individual sections available through the CHIEF AP-42 website).
EPA M204 (2000), EPA Method 204, 40 Code of Federal Regulations, Part 51, Appendix M.
EPA OCEPA (1995), What Has the Agency Accomplished?, EPA Office of Communication, Education, and Public Affairs, November, http://www.epa.gov/history/faqs/milestones.htm.
EPA OPA (1999a), Comprehensive Environmental Response, Compensation, and Liability Act, 42 U.S.C. 9601 et seq. (1980), United States Environmental Protection Agency, Office of Public Affairs, http://www.epa.gov/reg5oopa/defs/html/cercla.htm.
EPA OPA (1999b), Superfund Amendments and Reauthorization Act, 42 U.S.C. 9601 et seq. (1986), United States Environmental Protection Agency, Office of Public Affairs, http://www.epa.gov/reg5oopa/defs/html/sara.htm.
EPA OPA (1999c), Emergency Planning and Community Right-to-Know Act, 42 U.S.C. 11001 et seq. (1986), United States Environmental Protection Agency, Office of Public Affairs, http://www.epa.gov/reg5oopa/defs/html/epcra.htm.
EPA OPPT (1999), What Is the Toxics Release Inventory, United States Environmental Protection Agency, Office of Pollution Prevention and Toxics, http://www.epa.gov/opptintr/tri/whatis.htm.
EPA OSW (1999), Frequently Asked Questions about Waste, United States Office of Solid Waste, October 12, 1999, http://www.epa.gov/epaoswer/osw/basifact.htm#RCRA.
EPA OSW (1998), RCRA Orientation Manual, United States Office of Solid Waste, http://www.epa.gov/epaoswer/general/orientat/.
EPA OW (1998), Clean Water Act: A Brief History, United States Environmental Protection Agency, Office of Water, http://www.epa.gov/owow/cwa/history.htm.
Graedel, T. E., Comrie, P. R., and Sekutowski, J. C. (1995), "Green Product Design," AT&T Technical Journal, November-December, pp. 17-24.
Nixon, R. M. (1970a), Reorganization Plan No. 3 of 1970, U.S. Code, Congressional and Administrative News, 91st Congress, 2nd Session, Vol. 3.
Nixon, R. M. (1970b), Public Papers of the Presidents of the United States: Richard Nixon, 1970, GPO, Washington, DC, pp. 578-586.
OSHA (1999a), Occupational Safety and Health Standards, 29 CFR 1910, U.S. Code of Federal Regulations.
OSHA (1999b), Frequently Cited OSHA Standards, Occupational Safety and Health Administration, U.S. Department of Labor, http://www.osha.gov/oshstats/std1.html.
Ruckelshaus, W. D. (1970), EPA press release, December 16.
White, A., and Savage, D. (1999), Total Cost Assessment: Evaluating the True Profitability of Pollution Prevention and Other Environmental Investments, Tellus Institute, http://www.tellus.org.
living in the knowledge revolution. As Sahlman (1999) notes, the new economy markedly drives out inefficiency, forces intelligent business process reengineering, and gives knowledgeable customers more than they want. This new economy, based primarily on knowledge and strong entrepreneurship, is focused on productivity and is profoundly changing the role of distribution. Distribution and logistics must be more efficient, cheaper, and more responsive to the consumer. This trend of new, competitive, and open channels between businesses is geographically dispersed, involving highly technical and rational parties in allocating effort and resources to the most qualified suppliers (even if they are part of another company).

One of the key aspects leading the way in this new knowledge-based world economy is science. There is a well-documented link between science and economic growth (Adams 1990), with a very important intermediate step: technology. Science enables a country to grow stronger economically and become the ideal base for entrepreneurs to start new ventures that will ultimately raise productivity to unprecedented levels. This science-growth relationship could lead to the erroneous conclusion that this new economy model will prevail only in those geographic areas favoring high-level academic and applied R&D and, even more, only in those institutions performing leading research. However, as Stephan (1996) points out, there is a spillover effect that transfers the knowledge generated by the research, and this knowledge eventually reaches underdeveloped areas (in the form of new plants, shops, etc.). This observation is confirmed by Rodriguez-Clare (1996), who examined an early form of collaborative manufacturing: multinational companies and their link to economic development. According to the Rodriguez-Clare model, the multinational company can create a positive or negative linkage effect upon any local economy. A positive linkage effect, for example, is created by forcing local companies to attain higher standards in productivity and quality. An example is the electronics industry in Singapore (Lim and Fong 1982). A negative linkage effect is created by forcing local companies to lower their operational standards. An example is the Lockheed Aircraft plant in Marietta, Georgia (Jacobs 1985).
We are therefore facing a new world of business: business of increasing returns for knowledge-based industries (Arthur 1996). The behavior of increasing-returns products is contrary to the classical economic equilibrium, in which the larger the return of a product or service, the more companies will be encouraged to enter the business or start producing the product or service, diminishing the return. Increasing-returns products or services, on the other hand, present positive feedback behavior, creating instability in the market, business, or industry. Increasing returns put companies on the leading edge further ahead of the companies trailing behind in R&D of new products and technologies. A classic example of this new type of business is the DOS operating system developed by Microsoft, which had a lock-in with the distribution of the IBM PC as the most popular computer platform. This lock-in made it possible for Microsoft to spread its costs over a large number of users and obtain unforeseen margins.

The world of new business is one of pure adaptation and limits the use of traditional optimization methods, for which the rules are not even defined. Reality presents us with a highly complex scenario: manufacturing companies unable to perform R&D seem doomed to disappear. One of the few alternatives left to manufacturing companies is to go downstream (Wise and Baumgartner 1999). This forces companies to rethink their strategy on downstream services (customer support) and view them as a profitable activity instead of a trick to generate sales. Under this new strategy, companies must look at the value chain through the customer's eyes to detect opportunities downstream. This affects how performance is measured in the business. Product margin is becoming more restricted to the manufacturing operation, disregarding services related to the functioning and maintenance of the product throughout its life. A feature that is increasing over time in actual markets is for businesses to give products at a very low price or even for free and wait for compensation in service to the customer or during the maintenance stage of the product's life cycle.
1995; and Kim and Nof 1997, among others) to those based on concepts taken from disciplines other than manufacturing (Khanna et al. 1998; Ceroni 1999; and Ceroni and Nof 1999, who extend the parallel computing problem to manufacturing modeling).
4. FRAMEWORK FOR COLLABORATIVE MANUFACTURING
The distributed environment presents new challenges for the design, management, and operational functions in organizations. Integrated approaches for designing and managing modern companies have become mandatory in the modern enterprise. Historically, management has relied on a well-established hierarchy, but the need for collaboration in modern organizations overshadows the hierarchy and imposes networks of interaction among tasks, departments, companies, and so on. As a result of this interaction, three issues arise that make the integration problem critical: variability, culture, and conflicts. Variability represents all possible results and procedures for performing the tasks in the distributed organizations. Variability is inherently present in the processes, but distribution enhances its effects. Cultural aspects such as language, traditions, and working habits impose additional requirements for the integration process of distributed organizations. Lastly, conflicts may represent an important obstacle to the integration process. Conflicts here can be considered as the tendency to organize based on local optimizations in a dual local/global environment. Collaborative relationships, such as user-supplier, are likely to present conflicts when considered within a distributed environment. Communication of essential data and decisions plays a crucial role in allowing organizations to operate cooperatively. Communication must take place on a timely basis in order to be an effective integration facilitator and allow organizations to minimize their coordination efforts and costs. The organizational distributed environment has the following characteristics (Hirsch et al. 1995):
- Cooperation of different (independent) enterprises
- Shifting of project responsibilities during the product life cycle
- Different conditions, heterogeneity, autonomy, and independence of the participants' hardware and software environments

With these characteristics, the following requirements for the integration of distributed organizations can be established as the guidelines for the integration process:

- Support of geographically distributed systems and applications in a multisite production environment and, in special cases, support of site-oriented temporal manufacturing
- Consideration of heterogeneity of systems ontology, software, and hardware platforms and networks
- Integration of autonomous systems within different enterprises (or enterprise domains) with unique responsibilities at different sites
- Provision of mechanisms for business process management to coordinate the information flow within the entire integrated environment

Among further efforts to construct a framework for collaborative manufacturing is Nof's taxonomy of integration (Figure 1 and Table 2), which classifies collaboration in four types: mandatory, optional, concurrent, and resource sharing. Each of these collaboration types is found along a human-machine integration level and an interaction level (interface, group decision support system, or computer-supported collaborative work).
5. FACILITATING AND IMPLEMENTING COLLABORATION IN MANUFACTURING

During the design stages of the product, codesign (Eberts and Nof 1995) refers to integrated systems implemented using both hardware and software components. Computer-supported collaborative work (CSCW) allows the integration and collaboration of specialists in an environment where work and codesigns in manufacturing are essential. The collaboration is accomplished by integrating CAD and database applications, providing alphanumeric and graphical representations for the system's users. Codesign protocols were established for concurrency control, error recovery, transaction management, and information exchange (Figure 2). The CSCW tool supports the following design steps:

- Conceptual discussion of the design project
- High-level conceptual design
- Testing and evaluation of models
- Documentation
COLLABORATIVE MANUFACTURING
605
[Figure 1: Integration Taxonomy (from Nof 1994). The taxonomy crosses an integration dimension (human-machine H-M, human-human H-H, machine-machine M-M), a collaboration dimension (mandatory, optional, concurrent, resource sharing), and an interaction dimension (interface, group decision support system, computer-supported collaborative work).]
When deciding on its operation, every manufacturing enterprise accounts for the following transaction costs (Busalacchi 1999):
1. Searching for a supplier or consumer
2. Finding out about the nature of the product
3. Negotiating the terms for the product
4. Making a decision on suppliers and vendors
5. Policing the product to ensure quality, quantity, etc.
6. Enforcing compliance with the agreement

Traditionally, enterprises grew until they could afford to internalize the transaction costs of those products they were interested in. However, the transaction costs now have been greatly affected by
TABLE 2  Example of Collaboration Types for Integration Problems

  #  Integration Problem                                            Collaboration Type                   Example                                      Integration
  1  Processing of task without splitting, sequential processing    Mandatory, sequential                Concept design followed by physical design   H-H
  2  Processing of task with splitting and parallel processing      Mandatory, parallel                  Multi-robot assembly                         M-M
  3  Processing of task without splitting; very specific operation  Optional, similar processors         Single human planner (particular)            H-M
  4  Processing of task without splitting; general task             Optional, different processor types  Single human planner (out of a team)         H-M
  5  Processing of task can have splitting                          Concurrent                           Engineering team design                      H-H
  6  Resource allocation                                            Competitive                          Job-machine assignment                       M-M
  7  Machine substitution                                           Cooperative                          Database backups                             M-M

From Nof 1994.
[Figure 2: Codesign Computer-Supported Collaborative Work (CSCW) System. A GDSS/CSCW core providing concurrency control, error recovery, transaction management, and information exchange links the production engineer, economic analyst, design engineer, manufacturing engineer, and others through interaction functions such as group windows, synchronous and asynchronous interaction, information entries, graphic windows, negotiation, and resolution/decision.]
technologies such as electronic data interchange (EDI) and the Internet, which are shifting the way of doing business, moving the transaction costs to:
1. Coordination between potential suppliers or consumers
2. Rapid access to information about products
3. Means for rapid negotiation of terms between suppliers and consumers
4. Access to evaluative criteria for suppliers and consumers
5. Mechanisms for ensuring the quality and quantity of products
6. Mechanisms for enforcing compliance with the agreement
6. MOVING FROM FACILITATING TO ENABLING COLLABORATION: E-WORK IN THE MANUFACTURING ENVIRONMENT

We define e-work as collaborative, computer-supported activities and communication-supported operations in highly distributed organizations of humans and/or robots or autonomous systems, and we investigate fundamental design principles for their effectiveness (Nof 2000a,b). The premise is that without effective e-work, the potential of emerging and promising electronic work activities, such as virtual manufacturing and e-commerce, cannot be fully realized. Two major ingredients for future effectiveness are autonomous agents and active protocols. Their role is to enable efficient information exchanges at the application level and administer tasks to ensure smooth, efficient interaction, collaboration, and communication to augment the natural human abilities. In an analogy to massively parallel computing and network computing, the teamwork integration evaluator (TIE) has been developed (Nof and Huang 1998). TIE is a parallel simulator of distributed, networked teams of operators (humans, robots, agents). Three versions of TIE have been implemented with the message-passing interface (on Intel's Paragon, on a network of workstations, and currently on Silicon Graphics Origin 2000):
1. TIE/design (Figure 3), to model integration of distributed designers or engineering systems (Khanna et al. 1998)
2. TIE/agent (Figure 4), to analyze the viability of distributed, agent-based manufacturing enterprises (Huang and Nof 1999)
3. TIE/protocol (Figure 5), to model and evaluate the performance of different task administration active protocols, such as in integrated assembly-and-test networks (Williams and Nof 1995)
7. CASE EXAMPLES

Global markets are increasingly demanding that organizations collaborate and coordinate efforts for coping with distributed customers, operations, and suppliers. An important aspect of the collaboration process of distributed, often remote organizations is the coordination cost. The coordination equipment and operating costs limit the benefit attainable from collaboration. In certain cases, this cost can render the interaction among distributed organizations nonprofitable. Previous research investigated a distributed manufacturing case, operating under a job-shop model with two distributed collaborating centers, one for sales and one for production. A new model incorporating the communication cost of coordination (Ceroni et al. 1999) yields the net reward of the total system, determining the profitability of the coordination. Two alternative coordination modes are examined: (1) distributed coordination by the two centers and (2) centralized coordination by a third party. The results indicate that distributed and centralized coordination modes are comparable up to a certain limit; over this limit, distributed coordination is always preferred.

[Figure 3: Integration of Distributed Designers with TIE/Design. Simulation model inputs (team parameters: responsiveness, quality rating, flexibility; task library; organization workflows; collaboration/communication protocols) feed given components and a simulator platform running on an nCube parallel supercomputer.]
7.1. Coordination Cost in Collaboration

In a modern CIM environment, collaboration among distributed organizations has gained importance as companies try to cope with distributed customers, operations, and suppliers (Papastavrou and Nof 1992; Wei and Zhongjun 1992). The distributed environment constrains companies from attaining operational efficiency (Nof 1994). Furthermore, coordination becomes critical as operations face real-time requirements (Kelling et al. 1995). The role of coordination is demonstrated by analyzing the coordination problem of sales and production centers under a job-shop operation (Matsui 1982, 1988). Optimality of the centers' cooperative operation and suboptimality of their noncooperative operation have been demonstrated for known demand, neglecting the coordination cost (Matsui et al. 1996). We introduce the coordination cost when the demand rate is unknown and present one model of the coordination with communication cost. The communication cost is modeled by a message-passing protocol with fixed data exchange, with cost depending on the number of senders and receptors.
[Figure 5: Modeling and Evaluation of Task Administration Protocols with TIE/Protocol. Simulation model inputs (system parameters; components: task/resource manipulator functions, messaging/communication control functions and mechanisms; MPI library) run on an SGI Origin 2000 parallel computer; outputs include system performance and system viability (overall, task managers, and contractor), time performance, and states and transitions.]
7.1.1. Job-Shop Model

The job-shop model consists of two distributed centers (Figure 6). Job orders arrive at the sales center and are selected by their marginal profit (Matsui 1985). The production center processes the job orders, minimizing its operating cost (Tijms 1977).
7.1.2. Coordination Cost

Two basic coordination configurations are analyzed: (1) a distributed coordination model in which an optimization module at either of the two centers coordinates the optimization process (Figure 7) and (2) a centralized coordination model in which a module apart from both centers optimizes all operational parameters (Figure 8). The distributed model requires the centers to exchange data in parallel with the optimization module. The centralized model provides an independent optimization module. Coordination cost is determined by evaluating (1) the communication overhead per data transmission and (2) the transmission frequency over the optimization period. This method follows the concepts for integration of parallel servers developed in Ceroni and Nof (1999). Communication overhead is evaluated based on the message-passing protocol for transmitting data from a sender to one or more receptors (Lin and Prassana 1995). The parameters of this model are the exchange rate of messages from/to the communication channel (td), transmission startup time (ts), data packing/unpacking time from/to the channel (tl), and number of senders/receptors (p).
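To make the role of these parameters concrete, the sketch below composes them into a total coordination overhead. The functional form (per-message cost ts + tl + td per data unit, incurred for each of the p senders/receptors on every transmission) is an assumption for illustration only; the source does not give the formula, and only the parameter names td, ts, tl, and p come from the text.

```python
# Illustrative message-passing cost sketch using the parameters named in the
# text: channel exchange rate td, startup time ts, packing/unpacking time tl,
# and p senders/receptors.  The composition below is an assumed functional
# form, not the formula from Ceroni et al. (1999).

def message_cost(ts, tl, td, data_units):
    """Assumed cost of one message of data_units through the channel."""
    return ts + tl + td * data_units

def coordination_overhead(ts, tl, td, data_units, p, transmissions):
    """Total overhead for `transmissions` exchanges among p senders/receptors."""
    return transmissions * p * message_cost(ts, tl, td, data_units)

# e.g., 400 negotiation iterations between two centers (hypothetical values):
overhead = coordination_overhead(ts=0.001, tl=0.0005, td=0.0001,
                                 data_units=10, p=2, transmissions=400)
print(overhead)
```

Under this form the overhead grows linearly in both the number of participants and the number of negotiation iterations, which is consistent with the later observation that the iteration count matters in choosing a coordination mode.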
7.1.3. Results

The coordination of distributed sales and production centers is modeled for investigating the benefits of different coordination modes. Results show that the coordination cost and the number of negotiation iterations should be considered in the decision on how to operate the system. The numerical results indicate:
[Figure 6: Distributed Job-Shop Production System. Order arrivals O(S, X) reach the sales center (S; c1, c2); accepted orders go to the production center (FCFS; 1, 2) for delivery, with backlog, while rejected orders are subcontracted.]
[Figure 7: Distributed Configuration for the Coordination Model. The sales center (S; c1, c2), maximizing F1(c1, c2, 1, 2), and the production center (FCFS; 1, 2), minimizing F2(c1, c2, 1, 2), each use a computer system connected by bidirectional one-to-one communication processes, coordinating toward F1 - F2 -> max.]
1. Same break-even point for the distributed and centralized modes at 400 iterations and 2 jobs per time unit.
2. The lowest break-even point for the centralized mode at 90 iterations and 5 jobs per time unit.
3. Consistently better profitability for the centralized mode. This effect is explained by lower communication requirements and competitive hardware investment in the centralized mode.
4. The distributed mode with consistently better profitability than the centralized mode at a higher hardware cost. This shows that distributed coordination should be preferred at a hardware cost less than half of that required by the centralized coordination mode.

From this analysis, the limiting factor in selecting the coordination mode is given by the hardware cost, with the distributed and centralized modes becoming comparable for a lower hardware investment in the centralized case. Coordination of distributed parties interacting for attaining a common goal is also demonstrated to be significant by Ceroni and Nof (1999) with the inclusion of parallel servers in the system. This model of collaborative manufacturing is discussed next.
7.2. Collaboration in Distributed Manufacturing

Manaco S.A. is a Bolivian subsidiary of Bata International, an Italian-based shoemaker company with subsidiaries in most South American countries as well as Canada and Spain. The company has several plants in Bolivia, and for this illustration the plants located in the cities of La Paz and Cochabamba (about 250 miles apart) are considered. The design process at Manaco is performed by developing prototypes of products for testing in the local market. The prototypes are developed at the La Paz plant, and then the production is released to the Cochabamba plant. This case study analyzes the integration of the prototype production and the production-planning operations being performed at distributed locations by applying the distributed parallel integration evaluation model (Ceroni 1999).
Assumptions were made for generating an integration model for applying the parallelism optimization.

7.2.1. Integrated Optimization

The integration process in both operations is simplified by assuming relationships between tasks pertaining to different operations. A relationship denotes an association of the tasks based on the similarities observed in the utilization of information, resources, personnel, or the pursuit of similar objectives. Integration then is performed by considering the following relationships: 1A-1B, 2A-2B, 5A-3B, 6A-4B.
Figure 10 Integrated Model of the Distributed Manufacturing Tasks.
plied to both cases, with the congestion delays computed according to the expression 0.02e0.05*. The results obtained for each operation are presented in Figures 13 and 14.
TABLE 5  Simulation Results for Operation B

Task   Average Communication Delay   Standard Deviation   Average Duration
1B     0.000779                      0.000073             0.971070
2B     0.000442                      0.000013             0.793036
3B     0.000450                      0.000013             0.790688
4B     0.000433                      0.000017             0.877372
Figure 11 Changes in the Total Production Time per Unit and Congestion Time for Different Values of the Degree of Parallelism at the Integrated Operation.
[Figure 12: precedence graph of the integrated operation at the subtask level (Start, subtasks 1A1 through 4B3, End).]
Figure 13 Total Production Time and Congestion Time for Different Values of the Degree of Parallelism at Operation A.
Figure 14 Total Production Time and Congestion Time for Different Values of the Degree of Parallelism at Operation B.
TABLE 8  Summary of the Results for the Local and Integrated Scenarios

                                  Local      Operation  Operation  Integrated
                                  Scenario   A          B          Scenario
Number of subtasks                52         16         36         26
Total production time (shifts)    1.9846     1.3908     0.5938     1.8071
Direct-production time (shifts)   1.4239     1.0350     0.3889     1.1666
Interaction time (shifts)         0.5607     0.3558     0.2049     0.6404
scenario. Therefore, the number of subtasks was set at 15 for operation A and 11 for operation B. This makes a total of 26 subtasks in both local operations, which equals the number of subtasks for the integrated scenario. The values for the performance parameters show a total production time of 2.7769 shifts, a direct-production time of 2.3750 shifts, and an interaction time of 0.4019 shifts. These values represent an increment of 54% in the total production time, an increment of 104% in the direct-production time, and a decrement of 37% in the interaction time with respect to the integrated solution. The results obtained from this case study reinforce the hypothesis that exploiting potential benefits is feasible when optimizing the parallelism of integrated distributed operations. Key issues in the parallel integration evaluation model are communication and congestion modeling. The modeling of the time required by tasks for data transmission relates to the problem of coordination of cooperating servers. The congestion modeling, on the other hand, relates to the delays resulting from the task granularity (the number of activities being executed concurrently).
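The 54% / 104% / 37% comparison reported above can be reproduced with a short script. The shift totals are taken from the case study text; the function and variable names are our own (a sketch, not code from Ceroni 1999):

```python
# Sketch: reproduce the local-vs.-integrated comparison from the case study.
# The shift totals come from the text; the names are ours, not Ceroni's (1999).

def pct_change(local, integrated):
    """Percent change of the local scenario relative to the integrated one."""
    return 100.0 * (local - integrated) / integrated

local      = {"total": 2.7769, "direct": 2.3750, "interaction": 0.4019}
integrated = {"total": 1.8071, "direct": 1.1666, "interaction": 0.6404}

for name in local:
    print(name, round(pct_change(local[name], integrated[name])))
# prints: total 54, direct 104, interaction -37
```

Note that the three percentages in the text follow directly from the two scenarios' time figures, confirming the internal consistency of the reported results.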
7.3.
Variable Production Networks
The trend for companies to focus on core competencies has forced enterprises to collaborate closely with their suppliers as well as with their customers to improve business performance (Lutz et al. 1999). The next step in the supply chain concept is production or supply networks (Figure 15), which are characterized by intensive communication between the partners. The aim of the system is to allocate among the collaborating partners the excess in production demand that could not be met by any one of them alone. This capability provides the entire network with the necessary flexibility to respond quickly to peaks in demand for the products. A tool developed at the Institute of Production Systems at Hanover University, the AS / net, employs basic methods of production logistics to provide procedures for the efficient use of capacity redundancies in a production network. The tool satisfies the following requirements derived from the capacity subcontracting process:
Monitoring of resource availability and order status throughout the network
Monitoring should be internal and between partners
Figure 15 Capacity- and Technology-Driven Subcontracting in Production Networks. (Adapted from Lutz et al. 1999)
Support of different network partner perspectives (supplier and producer) for data encapsulation
Detection of logistics bottlenecks and capacity problems
A key aspect of the system is the identification of orders the partner will not be able to produce. This is accomplished by detecting the bottlenecks through the concept of degree of demand (a comparison of the capacity needed and available in the future, expressed as the ratio between the planned input and the capacity). All the systems with potential to generate bottlenecks are identified by the AS / net system and ranked by their degree of demand. The subcontracting of the orders can be performed by alternative criteria such as history, production costs, and throughput time. The system relies on the confidence between partners and on the availability of communication channels among them. The legal aspects, duties, responsibilities, and liability attached to the exchanged information are the main obstacles to implementing production networks and call for carefully planned agreements among the partners.
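The bottleneck-detection idea can be sketched in a few lines. This is an illustration of the degree-of-demand ranking, not the actual AS / net implementation (whose internals are not given here); the work systems and hour figures are invented:

```python
# Sketch of degree-of-demand bottleneck ranking (illustrative; not AS/net code).
# degree of demand = planned input / available capacity; > 1.0 flags a bottleneck.

def degree_of_demand(planned_input, capacity):
    return planned_input / capacity

# Hypothetical work systems: (name, planned input in hours, capacity in hours)
systems = [
    ("milling", 180.0, 160.0),
    ("turning", 90.0, 120.0),
    ("assembly", 210.0, 150.0),
]

ranked = sorted(((name, degree_of_demand(p, c)) for name, p, c in systems),
                key=lambda pair: pair[1], reverse=True)
bottlenecks = [name for name, dod in ranked if dod > 1.0]

print(ranked)       # [('assembly', 1.4), ('milling', 1.125), ('turning', 0.75)]
print(bottlenecks)  # ['assembly', 'milling']
```

Orders routed to the top-ranked systems would then be the candidates for subcontracting to partners with spare capacity.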
8.
EMERGING TRENDS AND CONCLUSIONS
The strongest emerging trend that we can observe in the collaborative manufacturing arena is partnership. In the past, the concept of the giant, self-sufficient corporation with a presence on several continents prevailed. Emerging now, and expected to increase in the years to come, are agile enterprises willing to take advantage of and participate in partnerships. The times of winning everything or losing everything are behind us. What matters now is staying in business as competitively as possible. Collaborative efforts are moving beyond the manufacturing function, downstream, where significant opportunities exist. Manufacturing is integrating and aligning itself with the rest of the value chain, leaving behind concepts such as product marginal profit that controlled operations for so long. The future will be driven by adaptive channels (Narus and Anderson 1996) for acquiring supplies, producing, delivering, and providing after-sale service. Companies will continue to drive inefficiencies out and become agile systems, forcing the companies around them to move in the same direction.
REFERENCES

Adams, J. (1990), "Fundamental Stocks of Knowledge and Productivity Growth," Journal of Political Economy, Vol. 98, No. 4, pp. 673–702.
Arthur, W. B. (1996), "Increasing Returns and the New World of Business," Harvard Business Review, Vol. 74, July–August, pp. 100–109.
Busalacchi, . A. (1999), "The Collaborative, High Speed, Adaptive, Supply-Chain Model for Lightweight Procurement," in Proceedings of the 15th International Conference on Production Research (Limerick, Ireland), pp. 585–588.
Ceroni, J. A. (1996), "A Framework for Trade-Off Analysis of Parallelism," MSIE Thesis, School of Industrial Engineering, Purdue University, West Lafayette, IN.
Ceroni, J. A. (1999), "Models for Integration with Parallelism of Distributed Organizations," Ph.D. Dissertation, Purdue University, West Lafayette, IN.
Ceroni, J. A., and Nof, S. Y. (1997), "Planning Effective Parallelism in Production Operations," Research Memo 97-10, School of Industrial Engineering, Purdue University, West Lafayette, IN.
Ceroni, J. A., and Nof, S. Y. (1999), "Planning Integrated Enterprise Production Operations with Parallelism," in Proceedings of the 15th International Conference on Production Research (Limerick, Ireland), pp. 457–460.
Ceroni, J. A., Matsui, M., and Nof, S. Y. (1999), "Communication Based Coordination Modeling in Distributed Manufacturing Systems," International Journal of Production Economics, Vols. 60–61, pp. 29–34.
Chen, X., Chen, Q. X., and Zhang, . (1999), "A Strategy of Internet-Based Agile Manufacturing Chain," in Proceedings of the 15th International Conference on Production Research (Limerick, Ireland), pp. 1539–1542.
DeVor, R., Graves, R., and Mills,
Kelling, C., Henz, J., and Hommel, G. (1995), "Design of a Communication Scheme for a Distributed Controller Architecture Using Stochastic Petri Nets," in Proceedings of the 3rd Workshop on Parallel and Distributed Real-Time Systems (Santa Barbara, CA, April 25), pp. 147–154.
Khanna, N., and Nof, S. Y. (1994), "TIE: Teamwork Integration Evaluation Simulator: A Preliminary User Manual for TIE 1.1," Research Memo 94-21, School of Industrial Engineering, Purdue University, West Lafayette, IN.
Khanna, N., Fortes, J. A. B., and Nof, S. Y. (1998), "A Formalism to Structure and Parallelize the Integration of Cooperative Engineering Design Tasks," IIE Transactions, Vol. 30, pp. 1–15.
Kim, C. O., and Nof, S. Y. (1997), "Coordination and Integration Models for CIM Information," in Knowledge Based Systems, S. G. Tzafestas, Ed., World Scientific, Singapore, pp. 587–602.
Lim, L., and Fong, P. (1982), "Vertical Linkages and Multinational Enterprises in Developing Countries," World Development, Vol. 10, No. 7, pp. 585–595.
Lin, C., and Prasanna, V. (1995), "Analysis of Cost of Performing Communications Using Various Communication Mechanisms," in Proceedings of the 5th Symposium on the Frontiers of Massively Parallel Computation (McLean, VA), pp. 290–297.
Lutz, S., Helms, K., and Wiendahl, H.-P. (1999), "Subcontracting in Variable Production Networks," in Proceedings of the 15th International Conference on Production Research (Limerick, Ireland), pp. 597–600.
Matsui, M. (1982), "Job-Shop Model: A M/(G,G)/1(N) Production System with Order Selection," International Journal of Production Research, Vol. 20, No. 2, pp. 201–210.
Matsui, M. (1985), "Optimal Order-Selection Policies for a Job-Shop Production System," International Journal of Production Research, Vol. 23, No. 1, pp. 21–31.
Matsui, M. (1988), "On a Joint Policy of Order-Selection and Switch-Over," Journal of Japan Industrial Management Association, Vol. 39, No. 2, pp. 87–88 (in Japanese).
Matsui, M., ang, G., Miya, ., and Kihara, N. (1996), "Optimal Control of a Job-Shop Production System with Order-Selection and Switch-Over" (in preparation).
Narus, J. A., and Anderson, J. C. (1996), "Rethinking Distribution: Adaptive Channels," Harvard Business Review, Vol. 74, July–August, pp. 112–120.
Nof, S. Y. (1994), "Integration and Collaboration Models," in Information and Collaboration Models of Integration, S. Y. Nof, Ed., Kluwer Academic Publishers, Dordrecht.
Nof, S. Y., and Huang, C. Y. (1998), "The Production Robotics and Integration Software for Manufacturing (PRISM): An Overview," Research Memorandum No. 98-3, School of Industrial Engineering, Purdue University, West Lafayette, IN.
Nof, S. Y. (2000a), "Modeling e-Work and Conflict Resolution in Facility Design," in Proceedings of the 5th International Conference on Computer Simulation and AI (Mexico City, February).
Nof, S. Y. (2000b), "Models of e-Work," in Proceedings of the IFAC / MIM-2000 Symposium (Patras, Greece, July).
Papastavrou, J., and Nof, S. Y. (1992), "Decision Integration Fundamentals in Distributed Manufacturing Topologies," IIE Transactions, Vol. 24, No. 3, pp. 27–42.
Phillips, C. L. (1998), "Intelligent Support for Engineering Collaboration," Ph.D. Dissertation, Purdue University, West Lafayette, IN.
Rodriguez-Clare, A. (1996), "Multinationals, Linkages, and Economic Development," American Economic Review, Vol. 86, No. 4, pp. 852–873.
Sahlman, W. A. (1999), "The New Economy Is Stronger Than You Think," Harvard Business Review, Vol. 77, November–December, pp. 99–106.
Stephan, P. E. (1996), "The Economics of Science," Journal of Economic Literature, Vol. 34, pp. 1199–1235.
Tijms, H. C. (1977), "On a Switch-Over Policy for Controlling the Workload in a System with Two Constant Service Rates and Fixed Switch-Over Costs," Zeitschrift für Operations Research, Vol. 21, pp. 19–32.
Wei, L., Xiaoming, X., and Zhongjun, Z. (1992), "Distributed Cooperative Scheduling for a Job-Shop," in Proceedings of the 1992 American Control Conference (Green Valley, AZ), Vol. 1, pp. 830–831.
Williams, N. ., and Nof, S. Y. (1995), "TestLAN: An Approach to Integrated Testing Systems Design," Research Memorandum No. 95-7, School of Industrial Engineering, Purdue University, West Lafayette, IN.
Witzerman, J. P., and Nof, S. Y. (1995), "Integration of Cellular Control Modeling with a Graphic Simulator / Emulator Workstation," International Journal of Production Research, Vol. 33, pp. 3193–3206.
Yusuf, Y. Y., and Sarhadi, M. (1999), "A Framework for Evaluating Manufacturing Enterprise Agility," in Proceedings of the 15th International Conference on Production Research (Limerick, Ireland), pp. 1555–1558.
II.C
Service Systems
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
2.
Figure 1 The Service Encounter. (Partly based on the SERVUCTION system model developed by Langeard et al. 1981)
interested in the service quality and customer satisfaction derived from the service encounter. If the evaluation of service quality and customer satisfaction is positive, the customer may decide to remain loyal to the service organization (Bateson 1985; Czepiel et al. 1985). The customer-contact service employee perspective is mainly concerned with the primary rewards of the service encounter, such as pay, promotion, job satisfaction, and recognition from the employee's colleagues and supervisor. These primary rewards are mainly contingent on the employee's performance in the service encounter. However, it should be noted that customer-contact personnel generally really care about the customer and are willing to exert the greatest possible effort to satisfy the customer's needs (Schneider 1980; Schneider and Bowen 1985). The marketing field has concentrated mainly on the service customer. The customer-contact service employee has been relatively neglected in marketing academia (cf. Hartline and Ferrell 1996). One notable exception is the conceptual model of service quality (Parasuraman et al. 1985), where perceived service quality is determined by four organizational gaps. This model was later extended by adding organizational control and communication processes (Zeithaml et al. 1988). In the next section, we will explore the definition of service quality, which is used as the basis for the conceptual model of service quality.
4.
DEFINING SERVICE QUALITY
Quality is an elusive and fuzzy concept and as a result is extremely hard to define (Garvin 1984a, b; Parasuraman et al. 1985; Reeves and Bednar 1994; Steenkamp 1989). This may partly be caused by the different perspectives taken by scholars from different disciplines in defining quality. Multiple approaches to defining quality have been identified by various authors (Garvin 1984a, b; Reeves and Bednar 1994). Garvin (1984b) distinguishes five major approaches to defining quality. The transcendent approach defines quality as innate excellence (Garvin 1984b, p. 25): proponents of this approach contend that quality cannot be precisely defined but rather is absolute and universally recognizable. This approach finds its origins in philosophy, particularly in metaphysics (Reeves and Bednar 1994; Steenkamp 1989). However, its practical applicability is rather limited because quality cannot be defined precisely using this perspective (Garvin 1984a, b; Steenkamp 1989). The product-based approach posits that quality differences amount to differences in the quantity of a particular desired attribute of the product (Steenkamp 1989). Garvin (1984b, p. 26) provides the following illustration of this approach: "[H]igh-quality ice cream has a high butterfat content, just as fine rugs have a large number of knots per square inch." The assumption underlying this approach suggests two corollaries (Garvin 1984b). First, higher-quality products can only be obtained at higher cost because quality is reflected in the quantity of a particular desired attribute. Second, quality can be evaluated against an objective standard, namely quantity.
The user-based approach to defining quality is based on the notion that quality "lies in the eye of the beholder" (Garvin 1984b, p. 27). In essence, this approach contends that different consumers have different needs. High quality is attained by designing and manufacturing products that meet the specific needs of consumers. As a result, this approach reflects a highly idiosyncratic and subjective view of quality. Juran (1974, p. 2-2), a proponent of this approach, defines quality as "fitness for use." This approach is rooted in the demand side of the market (cf. Dorfman and Steiner 1954). Two issues should be addressed with regard to this approach (Garvin 1984b). The first issue concerns the aggregation of individual preferences at the higher level of the market. The second issue deals with the fact that this approach essentially equates quality with (a maximization of) satisfaction. In other words, as Garvin (1984b, p. 27) puts it: "A consumer may enjoy a particular brand because of its unusual taste or features, yet may still regard some other brand as being of higher quality." As opposed to the user-based approach, the manufacturing-based approach to quality originates from the supply side of the market, the manufacturer. This approach is based on the premise that meeting specifications connotes high quality (Crosby 1979). The essence of this approach boils down to this: "quality is conformance to requirements" (Crosby 1979, p. 15). This approach to defining quality is quite elementary, being based on an objective standard or specification (Reeves and Bednar 1994). A critical comment on this approach is articulated by Garvin (1984b), who finds that although a product may conform to certain specifications or standards, the content and validity of those specifications and standards are not questioned. The perspective taken in this approach is predominantly inward. As a result, firms may be unaware of shifts in customer preferences and competitors' (re)actions (Reeves and Bednar 1994). The ultimate consequence of this approach is that quality improvement will lead to cost reduction (Crosby 1979; Garvin 1984b), which is achieved by lowering internal failure costs (e.g., scrap, rework, and spoilage) and external failure costs (e.g., warranty costs, complaint adjustments, service calls, and loss of goodwill and future sales) through prevention and inspection. The value-based approach presumes that quality can be defined in terms of costs and prices. Garvin (1984b, p. 28) uses the following example to clarify this perspective: "[A] $500 running shoe, no matter how well constructed, could not be a quality product, for it would find few buyers." Reeves and Bednar (1994) emphasize that this definition of quality forces firms to concentrate on internal efficiency (internal conformance to specifications) and external effectiveness (the extent to which external customer expectations are met). However, this approach mixes two distinct, though related, concepts: quality and value (Reeves and Bednar 1994). Because of its hybrid nature, this concept lacks definitional clarity and might result in incompatible designs when implemented in practice. Reeves and Bednar (1994) propose one additional approach to quality: quality is meeting and / or exceeding customers' expectations. This approach is based on the definition of perceived service quality by Parasuraman et al. (1988, p. 17): "Perceived service quality is therefore viewed as the degree and direction of discrepancy between consumers' perceptions and expectations." This definition was initially conceived in the services marketing literature and as such takes an extremely user-based perspective (Grönroos 1990; Parasuraman et al. 1985). Grönroos (1990, p. 37) in this respect emphasizes: "It should always be remembered that what counts is quality as it is perceived by the customers" (emphasis in original). Relying on only a single approach to defining quality might seriously impede the successful introduction of high-quality products; a synthesis of the above approaches is clearly needed. Garvin (1984b) proposes a temporal synthesis, in which emphasis shifts from the user-based approach to the product-based approach and finally to the manufacturing-based approach as products move from design to manufacturing and to the market. The user-based approach is the starting point because market information must be obtained using marketing research to determine the features that connote high quality. Next, these features must be translated into product attributes (the product-based approach). Finally, the manufacturing approach will need to ensure that products are manufactured according to the specifications laid down in the design of the product. This notion is readily recognizable in the conceptual model of service quality conceived by Parasuraman et al. (1985).
5.
The SERVQUAL instrument for measuring service quality has evolved into a kind of gospel for academics and practitioners in the field of service quality. With the 10 dimensions in Table 1 as a starting point, 97 items were generated (Parasuraman et al. 1988). Each item consisted of two components: one component reflected perceived service, or perceptions, and the other component reflected expected service, or expectations. Both components were measured on a seven-point Likert scale with only the ends of the scale anchored, by "Strongly disagree" (1) and "Strongly agree" (7). The items were presented in two consecutive parts. The first part contained the expectation components for the items, while the second part contained the perception components for the items. In order to prevent distortion of the responses by acquiescence bias or yea-saying or nay-saying tendencies, about half of the items were negatively worded and the other half positively worded (reverse statement polarization). Two stages of data collection and scale purification were subsequently carried out. The first stage of data collection and scale purification, using coefficient alpha, item-to-total correlations, and principal components analysis, resulted in a reduction of the number of factors to seven. Five of the original factors were retained in this configuration (see Table 1): (1) tangibles, (2) reliability, (3) responsiveness, (4) understanding / knowing the customer, and (5) access. The remaining five dimensions (communication, credibility, security, competence, and courtesy) were collapsed into two dimensions. The number of factors was further reduced in the second stage of data collection and scale purification. The results of principal components analysis suggested an overlap between the dimensions understanding / knowing the customer and access and the dimensions communication, credibility, security, competence, and courtesy. Consequently, the overlapping dimensions were combined to form two separate dimensions: (1) assurance and (2) empathy. Parasuraman et al. (1991) present a replication and extension of their 1988 study. In particular, they propose a number of modifications to the original SERVQUAL instrument (Parasuraman et al. 1988). The first modification is concerned with the expectations section of SERVQUAL. Confronted with extremely high scores on the expectations components of the individual statements, Parasuraman et al. (1991) decided to revise the expectations part of the instrument. Whereas the original scale
TABLE 1  Dimensions of Service Quality
1. Reliability involves consistency of performance and dependability.
2. Responsiveness concerns the willingness or readiness of employees to provide service.
3. Competence means possession of the required skills and knowledge to perform the service.
4. Access involves approachability and ease of contact.
5. Courtesy involves politeness, respect, consideration, and friendliness of contact personnel.
6. Communication means keeping customers informed in language they understand and listening to them.
7. Credibility involves trustworthiness, believability, and honesty.
8. Security is freedom from danger, risk, or doubt.
9. Understanding / knowing the customer involves making the effort to understand the customer's needs.
10. Tangibles include the physical evidence of the service.
Adapted from Parasuraman et al. 1985, p. 47.
reflected normative or ideal expectations, the revised instrument reflected predictive expectations relative to an excellent firm in the industry. For example, with regard to statement no. 5, the expectation item (E5) of the original instrument is formulated as follows: "When these firms promise to do something by a certain time, they should do so" (Parasuraman et al. 1988, p. 38). In the revised SERVQUAL instrument, the wording of expectation item no. 5 (E5) has been changed to "When excellent telephone companies promise to do something by a certain time, they will do so" (Parasuraman et al. 1991, p. 446). A second modification related to the use of negatively worded items for the responsiveness and empathy dimensions in the original instrument. For the modified instrument, all negatively worded items were replaced by positively worded items. Moreover, in their 1991 study, Parasuraman and his colleagues suggest adding an importance measure to the instrument in order to be able to calculate "a composite, weighted estimate of overall service quality" (Parasuraman et al. 1991, p. 424). Parasuraman et al. (1991) propose that importance should be measured by allocating 100 points to the individual dimensions of service quality in accordance with their perceived importance.
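The scoring scheme described above (per-item perception-minus-expectation gaps, averaged per dimension, then weighted by importance points summing to 100) can be sketched as follows. The dimension ratings and weights are invented; this illustrates the arithmetic, not the published instrument itself:

```python
# Sketch of the weighted SERVQUAL composite described in the text.
# Gap = perception (P) - expectation (E) per item; ratings are invented.

def servqual_score(perceptions, expectations, importance):
    """Composite weighted SERVQUAL score from per-dimension item ratings."""
    assert sum(importance.values()) == 100   # importance points must total 100
    score = 0.0
    for dim, p_items in perceptions.items():
        gaps = [p - e for p, e in zip(p_items, expectations[dim])]
        dim_gap = sum(gaps) / len(gaps)      # mean P - E gap for the dimension
        score += (importance[dim] / 100.0) * dim_gap
    return score

perceptions  = {"tangibles": [6, 5], "reliability": [4, 5], "responsiveness": [5, 6],
                "assurance": [6, 6], "empathy": [5, 4]}
expectations = {"tangibles": [6, 6], "reliability": [7, 7], "responsiveness": [6, 6],
                "assurance": [6, 7], "empathy": [6, 6]}
importance   = {"tangibles": 10, "reliability": 30, "responsiveness": 25,
                "assurance": 20, "empathy": 15}

print(round(servqual_score(perceptions, expectations, importance), 3))  # -1.25
```

A negative composite, as here, indicates that perceptions fall short of expectations overall, with the shortfall weighted toward the dimensions rated most important.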
7.
...quality are directed towards the interaction between customer and service provider and therefore focus on the service process (Lapierre 1996). De Ruyter and Wetzels (1998) find in an experimental study that service process and service outcome interact. Although service process is an important determinant of evaluative judgments (e.g., customer satisfaction), it may not wholly compensate for an unfavorable service outcome. An additional problem with the conceptual model of service quality is its implicit assumption that each service encounter consists of only a single stage (Lemmink et al. 1998; De Ruyter et al. 1997). In particular, if a service encounter consisted of multiple stages, the dimensions of the conceptual model of service quality would require an extension at the level of the individual stage. Rust et al. (1995) take this train of thought to an even more extreme position: they structure service quality around the business process. Apart from introducing stages into the model, such an approach also ensures managerial relevance. An issue that has received ample attention in academia in this respect is the differentiation between service quality and customer satisfaction (Cronin and Taylor 1992, 1994; Iacobucci et al. 1994, 1996; Oliver 1993). The confusion surrounding these two constructs in services research can be accounted for by the fact that both are based on the same canonical model (Iacobucci et al. 1996). Service quality and customer satisfaction models share the following characteristics (Iacobucci et al. 1996):
...which they introduce three dimensions: (1) emotional, (2) practical, and (3) logical (e.g., Lemmink et al. 1998; De Ruyter et al. 1997). Moreover, several empirical concerns have also been raised with regard to the conceptual model of service quality, particularly with regard to the SERVQUAL measurement instrument. The dimensionality of the SERVQUAL instrument is a well-researched issue (Asubonteng et al. 1996; Buttle 1996; Paulin and Perrien 1996). In their initial study, Parasuraman et al. (1988) report relatively high values of coefficient alpha for the individual dimensions of SERVQUAL. Moreover, using exploratory factor analysis to assess the convergent and discriminant validity, they find that each item loads high only on the hypothesized factor for the four participating companies. These favorable results, however, seem not to have been replicated in their 1991 study (Parasuraman et al. 1991). Although the reliabilities in terms of coefficient alpha were still relatively high, their factor-analytic results seemed somewhat problematic. In particular, the tangibles dimension loaded on two dimensions (one representing equipment and facilities and one representing personnel and communication materials), thus casting considerable doubt on the unidimensionality of this dimension. Furthermore, responsiveness and assurance (and to some degree reliability) loaded on the same factor, while in general interfactor correlations were somewhat higher than in their 1988 study. These divergent results might be caused by an artifact: restraining the factor solution to five factors. Therefore, Parasuraman et al. (1991) proposed that a six-factor solution might lead to a more plausible result. Although responsiveness and assurance seemed to be slightly more distinct, tangibles still loaded on two factors, whereas the interfactor correlations remained high. Replication studies by other authors have fared even less well than the studies by the original authors. Carman (1990) carried out a study using an adapted version of SERVQUAL in four settings: (1) a dental school patient clinic, (2) a business school placement center, (3) a tire store, and (4) an acute care hospital, and found similar factors (although not an equal number) as compared to Parasuraman et al. (1988, 1991). However, the item-to-factor stability appeared to be less than in the original studies. Carman (1990) also notes that the applicability in some of the settings requires quite substantial adaptations in terms of dimensions and items. Babakus and Boller (1992) report on a study in which they applied the original SERVQUAL to a gas utility company. Using both exploratory and confirmatory (first-order and second-order) factor analysis, Babakus and Boller (1992) were unable to replicate the hypothesized five-dimensional structure of the original SERVQUAL instrument. Finally, Cronin and Taylor (1992, 1994) used confirmatory factor analysis and found that a five-factor structure did not provide an adequate fit to the data. Subsequently, they carried out exploratory factor analysis and a unidimensional factor structure was confirmed. Authors using a more meta-analytic approach have reported similar results (Asubonteng et al. 1996; Buttle 1996; Parasuraman et al. 1991; Paulin and Perrien 1996). In general, they report relatively high reliabilities in terms of coefficient alpha for the individual dimensions. However, results differ considerably when looking at different service-quality dimensions. Paulin and Perrien (1996) find that for the studies included in their overview, coefficient alpha varies from 0.50 to 0.87 for the empathy dimension and from 0.52 to 0.82 for the tangibles dimension. However, the number of factors extracted and the factor loading patterns are inconsistent across studies. Furthermore, interfactor correlations among the responsiveness, assurance, and reliability dimensions are quite high (Buttle 1996; Parasuraman et al. 1991). Finally, Paulin and Perrien (1996) suggest that the limited replicability of the SERVQUAL instrument may be caused by contextuality (cf. Cronbach 1986). They find that studies applying SERVQUAL differ in terms of units of study, study observations, and type of study. Empirical research, however, has found that the inclusion of an importance weight as suggested by Parasuraman et al. (1991) may only introduce redundancy (Cronin and Taylor 1992, 1994). Therefore, the use of importance weights is not recommended; it increases questionnaire length and does not add explanatory power. The SERVQUAL instrument employs a difference score (perception minus expectation) to operationalize perceived service quality. However, the use of difference scores is subject to serious psychometric problems (Peter et al. 1993). To begin with, difference scores per se are less reliable than their component parts (perceptions and expectations in the case of SERVQUAL). Because reliability places an upper limit on validity, this will undoubtedly lead to validity problems.
eter et al. (1993) indicate that low reliability might lead to attenuation of co
rrelation between measures. Consequently, the lack of correlations between measu
res might be mistaken as evidence of discriminant validity. urthermore, differe
nce scores are closely related to their component scores. inally, the variance
of the difference score might potentially be restricted. eter et al. (1993) poi
nt out that this violates the assumption of homogeneity of variances in ordinary
least-squares regression and related statistical techniques. inally, an altern
ative might be the use of a nondifference score, which allows for the direct com
parison of perceptions to expectations (Brown et al. 1993; eter et al. 1993). E
mpirical evidence indicates that the majority of the respondents locate their se
rvice quality score at the right-hand side of the scale (Brown et al. 1993; ara
suraman et al. 1988; 1991; eterson and Wilson 1992). his distribution is refer
red to as negatively skewed. A skewed distribution contains several serious impl
ications for statistical analysis. o begin with, the mean might not be a suitab
le measure of central tendency. In a negatively skewed distribution, the mean is
typically to the left of the median and the mode and thus excludes considerable
information about the variable under study (eterson and Wilson 1992). Skewness
also attenuates the correlation between variables. Consequently, the true relat
ionship between variables in terms of a correlation coef cient may be understated
(eterson and Wilson 1992). inally, parametric tests (e.g., t-test, -test) ass
ume that the population is normally or at least symmetrically distributed. A les
s skewed alternative for measuring service quality is the nondifference score fo
r service quality. Brown et al. (1993) report that the nondifference score for s
ervice quality is approximately normally distributed. Moreover, several authors
have suggested that the number of scale points might have considerably contribut
ed to the skewness of satisfaction measures (eterson and Wilson 1992). Increasi
ng the number of scale points may increase the sensitivity of the scale and cons
equently reduce skewness.
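The consequences of negative skew described above can be made concrete with a small simulation. The block below is an illustrative sketch only: the rating weights are invented, not drawn from the cited studies.

```python
import random
import statistics

# Illustrative simulation: 7-point ratings piled up at the right-hand side
# of the scale, as reported for service-quality measures (weights invented).
random.seed(42)
ratings = random.choices(range(1, 8), weights=[1, 2, 4, 8, 15, 30, 40], k=10_000)

mean = statistics.mean(ratings)
median = statistics.median(ratings)

# Moment coefficient of skewness: E[(X - mu)^3] / sigma^3
sigma = statistics.pstdev(ratings)
skewness = sum((x - mean) ** 3 for x in ratings) / (len(ratings) * sigma ** 3)

print(f"mean={mean:.2f}, median={median}, skewness={skewness:.2f}")
# With ratings concentrated near 7, the mean lies to the left of (below)
# the median, and the skewness coefficient is negative.
```

Reporting only the mean of such a distribution understates the typical rating, which is why the median, or a less skewed nondifference score, can be preferable.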
REFERENCES
Anderson, E. W., and Fornell, C. (1994), A Customer Satisfaction Research Prospectus, in Service Quality: New Directions in Theory and Practice, R. T. Rust and R. L. Oliver, Eds., Sage, Thousand Oaks, CA, pp. 1–19.
Asubonteng, P., McCleary, K. J., and Swan, J. E. (1996), SERVQUAL Revisited: A Critical Review of Service Quality, Journal of Services Marketing, Vol. 10, No. 6, pp. 62–81.
Babakus, E., and Boller, G. W. (1992), An Empirical Assessment of the SERVQUAL Scale, Journal of Business Research, Vol. 24, pp. 253–268.
Bateson, J. E. G. (1985), Perceived Control and the Service Encounter, in The Service Encounter: Managing Employee / Customer Interaction in the Service Businesses, J. A. Czepiel, M. R. Solomon, and C. F. Surprenant, Eds., Lexington Books, Lexington, MA, pp. 67–82.
Bell, D. (1973), The Coming of Post-Industrial Society: A Venture in Social Forecasting, Basic Books, New York.
Berry, L. L. (1980), Services Marketing Is Different, Business, Vol. 30, May–June, pp. 24–28.
Bitner, M. J. (1990), Evaluating Service Encounters: The Effects of Physical Surroundings and Employee Responses, Journal of Marketing, Vol. 54, January, pp. 71–84.
Brown, T. J., Churchill, G. A., and Peter, J. P. (1993), Improving the Measurement of Service Quality, Journal of Retailing, Vol. 69, Spring, pp. 127–139.
Buttle, F. (1996), SERVQUAL: Review, Critique, Research Agenda, European Journal of Marketing, Vol. 30, No. 1, pp. 8–32.
Buzzell, R. D., and Gale, B. T. (1987), The PIMS Principles: Linking Strategy to Performance, Free Press, New York.
Carman, J. M. (1990), Consumer Perceptions of Service Quality: An Assessment of the SERVQUAL Dimensions, Journal of Retailing, Vol. 66, No. 1, pp. 33–55.
Chase, R. B. (1978), Where Does the Customer Fit in a Service Operation?, Harvard Business Review, Vol. 56, November–December, pp. 137–142.
Cronbach, L. J. (1986), Social Inquiry by and for Earthlings, in Metatheory in Social Science: Pluralisms and Subjectivities, D. W. Fiske and R. A. Schweder, Eds., University of Chicago Press, Chicago.
Cronin, J. J., Jr., and Taylor, S. A. (1992), Measuring Service Quality: A Reexamination and Extension, Journal of Marketing, Vol. 56, July, pp. 55–68.
Cronin, J. J., Jr., and Taylor, S. A. (1994), SERVPERF versus SERVQUAL: Reconciling Performance-Based and Perceptions-Minus-Expectations Measurement of Service Quality, Journal of Marketing, Vol. 58, January, pp. 132–139.
Crosby, P. B. (1979), Quality Is Free: The Art of Making Quality Certain, New American Library, New York.
Czepiel, J. A., Solomon, M. R., Surprenant, C. F., and Gutman, E. G. (1985), Service Encounters: An Overview, in The Service Encounter: Managing Employee / Customer Interaction in the Service Businesses, J. A. Czepiel, M. R. Solomon, and C. F. Surprenant, Eds., Lexington Books, Lexington, MA, pp. 3–15.
De Ruyter, K., and Wetzels, M. G. M. (1998), On the Complex Nature of Patient Evaluations of General Practice Service, Journal of Economic Psychology, Vol. 19, pp. 565–590.
De Ruyter, J. C., Wetzels, M. G. M., Lemmink, J., and Mattsson, J. (1997), The Dynamics of the Service Delivery Process: A Value-Based Approach, International Journal of Research in Marketing, Vol. 14, No. 3, pp. 231–243.
Dorfman, R., and Steiner, P. O. (1954), Optimal Advertising and Optimal Quality, American Economic Review, Vol. 44, December, pp. 822–836.
Fitzsimmons, J. A., and Fitzsimmons, M. J. (1998), Service Management: Operations, Strategy, and Information Technology, Irwin / McGraw-Hill, Boston.
Garvin, D. A. (1984a), Product Quality: An Important Strategic Weapon, Business Horizons, Vol. 27, March–April, pp. 40–43.
Garvin, D. A. (1984b), What Does Product Quality Really Mean?, Sloan Management Review, Vol. 26, Fall, pp. 25–43.
Ginsberg, E., and Vojta, G. (1981), The Service Sector of the U.S. Economy, Scientific American, March, pp. 31–39.
Grönroos, C. (1978), A Service-Oriented Approach to Marketing of Services, European Journal of Marketing, Vol. 12, No. 8, pp. 588–601.
632
TECHNOLOGY
Grönroos, C. (1990), Service Management and Marketing: Managing the Moments of Truth in Service Competition, Lexington Books, Lexington, MA.
Hartline, M. D., and Ferrell, O. C. (1996), The Management of Customer-Contact Service Employees, Journal of Marketing, Vol. 60, October, pp. 52–70.
Henkoff, R. (1994), Service Is Everybody's Business, Fortune, June 24, pp. 48–60.
Heskett, J. L. (1987), Lessons in the Service Sector, Harvard Business Review, Vol. 65, March–April, pp. 118–126.
Iacobucci, D., Grayson, K. A., and Ostrom, A. L. (1994), The Calculus of Service Quality and Customer Satisfaction: Theoretical and Empirical Differentiation, in Advances in Services Marketing and Management: Research and Practice, T. A. Swartz, D. E. Bowen, and S. W. Brown, Eds., Vol. 3, JAI Press, Greenwich, pp. 1–67.
Iacobucci, D., Ostrom, A. L., Braig, B. M., and Bezjian-Avery, A. (1996), A Canonical Model of Consumer Evaluations and Theoretical Bases of Expectations, in Advances in Services Marketing and Management: Research and Practice, T. A. Swartz, D. E. Bowen, and S. W. Brown, Eds., Vol. 5, JAI Press, Greenwich, pp. 1–44.
Juran, J. M. (1974), Basic Concepts, in Quality Control Handbook, J. M. Juran, F. M. Gryna, Jr., and R. S. Bingham, Jr., Eds., McGraw-Hill, New York, pp. 2-1–2-24.
Langeard, E., Bateson, J., Lovelock, C., and Eiglier, P. (1981), Marketing of Services: New Insights from Customers and Managers, Report No. 81-104, Marketing Science Institute, Cambridge.
Lapierre, J. (1996), Service Quality: The Construct, Its Dimensionality and Its Measurement, in Advances in Services Marketing and Management: Research and Practice, T. A. Swartz, D. E. Bowen, and S. W. Brown, Eds., Vol. 5, JAI Press, Greenwich, pp. 45–70.
Lemmink, J., de Ruyter, J. C., and Wetzels, M. G. M. (1998), The Role of Value in the Service Delivery Process of Hospitality Services, Journal of Economic Psychology, Vol. 19, pp. 159–179.
Mills, P. K. (1986), Managing Service Industries: Organizational Practices in a Post-Industrial Economy, Ballinger, Cambridge.
Oliver, R. L. (1993), A Conceptual Model of Service Quality and Service Satisfaction: Compatible Goals, Different Concepts, in Advances in Services Marketing and Management: Research and Practice, T. A. Swartz, D. E. Bowen, and S. W. Brown, Eds., Vol. 3, JAI Press, Greenwich, pp. 65–85.
Parasuraman, A. (1995), Measuring and Monitoring Service Quality, in Understanding Services Management: Integrating Marketing, Organisational Behaviour and Human Resource Management, W. J. Glynn and J. G. Barnes, Eds., John Wiley & Sons, Chichester, pp. 143–177.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1985), A Conceptual Model of Service Quality and Its Implications for Further Research, Journal of Marketing, Vol. 49, Fall, pp. 41–50.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1988), SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality, Journal of Retailing, Vol. 64, Spring, pp. 12–40.
Parasuraman, A., Berry, L. L., and Zeithaml, V. A. (1991), Refinement and Reassessment of the SERVQUAL Scale, Journal of Retailing, Vol. 67, Winter, pp. 420–450.
Paulin, M., and Perrien, J. (1996), Measurement of Service Quality: The Effect of Contextuality, in Managing Service Quality, P. Kunst and J. Lemmink, Eds., Vol. 2, pp. 79–96.
Peter, J. P., Churchill, G. A., Jr., and Brown, T. J. (1993), Caution in the Use of Difference Scores in Consumer Research, Journal of Consumer Research, Vol. 19, March, pp. 655–662.
Peterson, R. A., and Wilson, W. R. (1992), Measuring Customer Satisfaction: Fact and Artifact, Journal of the Academy of Marketing Science, Vol. 20, No. 1, pp. 61–71.
Reeves, C. A., and Bednar, D. A. (1994), Defining Quality: Alternatives and Implications, Academy of Management Review, Vol. 19, No. 3, pp. 419–445.
Reichheld, F. F., and Sasser, W. E., Jr. (1990), Zero Defections: Quality Comes to Services, Harvard Business Review, Vol. 68, September–October, pp. 105–111.
Rust, R. T., and Oliver, R. L. (1994), Service Quality: Insights and Managerial Implications from the Frontier, in Service Quality: New Directions in Theory and Practice, R. T. Rust and R. L. Oliver, Eds., Sage, Thousand Oaks, CA, pp. 1–20.
Rust, R. T., Zahorik, A. J., and Keiningham, T. L. (1995), Return on Quality (ROQ): Making Service Quality Financially Accountable, Journal of Marketing, Vol. 59, April, pp. 58–70.
Rust, R. T., Zahorik, A. J., and Keiningham, T. L. (1996), Service Marketing, HarperCollins, New York.
The design of the customer interface and customer interaction
The design of the service processes
The selection and training of personnel
Optimized support for front-office staff (i.e., those employees in direct contact with the customers)
An interdisciplinary approach is essential for solving development tasks of this nature. Service development must therefore integrate know-how from a variety of scientific disciplines, notably engineering sciences, business studies, industrial engineering, and design of sociotechnical systems. Topics receiving particular attention within the field of service engineering are:
Definitions, classifications, and standardization of services: Until now there has been only a diffuse understanding of the topic of service. In particular, international definitions, classifications, and standards, which are necessary in order to develop, bundle, and trade services in the future in the same way as material products, are currently not available. Research questions to be solved include which structural elements of services can be identified, how they should be considered in the development, and how they can be made operational, as well as which methods and instruments could be used for the description and communication of internal and external services.
Development of service products: The development and design of services requires reference models, methods, and tools. Research topics include reference models for different cases (e.g., development of new services, development of hybrid products, bundling of services, reengineering or redesigning of services) and different service types, analysis of the transferability of existing methods (e.g., from classic product development and software engineering) to the development of services, development of new service-specific engineering methods, and development of tools (e.g., computer-aided service engineering).
Coengineering of products and services: Offers of successful, high-quality products are often accompanied by service activities. Particularly for companies willing to make the move towards an integrated product / service package, methods for combined product / service development are presently not available. Concepts for simultaneous engineering of products and services are necessary, for instance.
R&D management of services: The development of services must be integrated into the organizational structure of companies. In practice, there are very few conclusive concepts. The consequences of this include a lack of allocation of resources and responsibilities, abstract development results, and insufficient registration of development costs. Research topics include which parts of a company must be integrated in the service development process; which organizational concepts (e.g., R&D departments for service development, virtual structures) are suitable; how information flow, communication, and knowledge can be managed; how the development process can be controlled; and how a company can continuously bring competitive services to market.
2.
2.1.
valid difference between services and goods: With services, the customer provides significant inputs into the production process. With manufacturing, groups of customers may contribute ideas to the design of the product; however, individual customers' only part in the actual process is to select and consume the output (Sampson 1999). Although the relevance of each of these basic properties varies considerably between different types of services, they indicate some basic challenges that must be met in managing almost any service:
Definition and description of an intangible product: Because product properties are mostly immaterial, describing and, even more important, demonstrating them to the customer is much more difficult than with material goods. In most cases, the customer does not have an opportunity to look at or to test the service prior to purchase, as can be done with a car, for example. Therefore, buying a service creates a certain extent of risk for the customer. The challenge for a marketing department consists at this point of communicating the service's characteristics to the customer and reducing this feeling of risk as much as possible. For a development or an
The standard provides a rather general definition that contains various essential aspects. For purposes of service management, the definition needs to be refined. This can be done by analyzing the most common definitions of the services concept, the most widely accepted of which was originally established by Donabedian (1980). It states that a service bears three distinct dimensions:
1. A structure dimension (the structure or potential determines the ability and willingness to deliver the service in question)
2. A process dimension (the service is performed on or with the external factors integrated in the processes)
3. An outcome dimension (the outcome of the service has certain material and immaterial consequences for the external factors)
Combining the aspects that are considered in the standard with Donabedian's dimensions of service yields Figure 1.
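Donabedian's three dimensions can be mirrored directly in a data structure. The sketch below is illustrative; the class and attribute names are invented and not part of any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDimensions:
    """Illustrative container for Donabedian's three service dimensions."""
    # Structure: the potential (resources, skills) that determines the
    # ability and willingness to deliver the service.
    structure: list[str] = field(default_factory=list)
    # Process: the activities performed on or with the external factors.
    process: list[str] = field(default_factory=list)
    # Outcome: material and immaterial consequences for the external factors.
    outcome: list[str] = field(default_factory=list)

# Invented example: a simple car-inspection service.
car_check = ServiceDimensions(
    structure=["certified inspector", "test equipment"],
    process=["examine car", "document results"],
    outcome=["inspection certificate"],
)
print(car_check.outcome)
```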
2.3.
Service Typologies
Most approaches to defining services aim at giving a general explanation of service characteristics that applies to all types of services. While this approach is valuable from a theoretical point of view, due to the broad variety of different services it hardly provides very detailed guidelines for the design and management of a specific service. Classification schemes and typologies that classify services and provide distinctive classes calling for similar management tools are needed. Classifying services with respect to the industry that the company belongs to hardly provides further insight, since a particular service may be offered by companies from very different industries. Industries in the service sector are merging, generating competition among companies that were acting in separate marketplaces before (e.g., in the media, telecommunications, or banking industries). On the other hand, services within one industry may require completely different management approaches. Thus, services have to be classified and clustered according to criteria that relate to the particular service product rather than to the industry. Such typologies can give deeper insight into the special characteristics of a service as well as implications concerning its management. Several attempts have been made to design such typologies. Some of them are based on empirical studies. The study presented in Eversheim et al. (1993) evaluated questionnaires from 249 German companies representing different service industries. By the application of a clustering algorithm, seven types of services were finally identified, based on 10 criteria. A recent study on service engineering (Fähnrich and Meiren 1999) used 282 answered questionnaires for the determination of service types. It derived four separate types of services that differ significantly in two ways:
1. The intensity of interaction between service provider and customer
2. The number of different variants of the service that the customer may obtain
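Read as a classification rule, the two criteria span a 2 × 2 typology. The following sketch is illustrative only; the cutoff for "many" variants is invented and is not taken from the study.

```python
def classify_service(interaction_intensity: str, variants: int,
                     variant_cutoff: int = 10) -> str:
    """Place a service in one of the four types spanned by the two criteria.

    interaction_intensity: "low" or "high" (judged qualitatively);
    variant_cutoff: illustrative threshold for "many" variants.
    """
    interaction = "high" if interaction_intensity == "high" else "low"
    variety = "many" if variants >= variant_cutoff else "few"
    return f"{interaction} interaction / {variety} variants"

# A highly standardized service: low customer interaction, few variants.
print(classify_service("low", variants=3))
```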
[Figure 1: Elements of a service. The service provider's structure (human resources, internal services) meets the customer at the customer / supplier interface; external / customer resources (people, customer property, material goods, information, rights; explicit or implicit) are assigned to and integrated into the service delivery process, whose material and immaterial outcome satisfies the customer's needs.]
Strategies for service development and service operations can be assigned to the four types of services. Interestingly, these results are very similar to a typology of service operations introduced by Schmenner (1995), identifying four different types of service processes: service factory, service shop, mass service, and professional services. Figure 2 gives an overview of the typology, relating it to Schmenner's types.
3.
MANAGEMENT OF SERVICE QUALITY
3.1.
Service Quality Models
ISO 8402 defines quality as the totality of characteristics of an entity that bear on its ability to satisfy stated or implied needs. The standard explicitly applies to services, too, but it leaves space for interpretation. Therefore, some different notions of quality have emerged, depending on the scientific discipline or practical objective under consideration. Those notions can be grouped into five major approaches (Garvin 1984) and can be applied to goods as well as to services:
The transcendent approach reflects a common sense of quality, defining it as something that is both absolute and universally recognizable, a mark of uncompromising standards and high achievement (Garvin 1984). Therefore, quality cannot be defined exactly but will be recognized if one experiences it.
The product-based approach links the quality of an item to well-defined, measurable properties or attributes. Therefore, it can be assessed and compared objectively by comparing the values of those attributes.
The user-based approach focuses exclusively on the customer's expectations and therefore defines quality as the degree to which a product or service satisfies the expectations and needs of an individual or a group.
The manufacturing-based approach derives quality from the engineering and manufacturing processes that deliver the product. Quality equals the degree to which a product meets its specifications.
Figure 2 Four Basic Types of Service Products. (From Fähnrich and Meiren 1999)
The value-based approach relates the performance of a product or service to its price or the cost associated with its production. According to this notion, those products that offer a certain performance at a reasonable price are products with high quality.
While the relevance of the transcendent approach is only theoretical or philosophical, the other approaches do influence quality management in most companies. Normally, they exist simultaneously in one organization: while the marketing department applies a user-based approach to quality, the manufacturing and engineering departments think of quality in a product- or manufacturing-based manner. The coexistence of the two views carries some risk because it might lead to diverging efforts in assuring product quality. On the other hand, today's competitive environment requires a combination of the different definitions (Garvin 1984) because each of them addresses an important phase in the development, production, and sale of a product. First, expectations and needs of the targeted customers have to be analyzed through market research, which requires a customer-based approach to quality. From those requirements, product features have to be derived such that the product meets the customers' expectations (product-based approach). Then the production process must be set up and carried out in a way that ensures that the resulting product in fact bears the desired characteristics. This implies that the production process should be controlled with respect to quality goals and therefore a manufacturing-based approach. Finally, the product is sold to its customers, who assess it against their expectations and express their satisfaction or dissatisfaction. The product specifications will eventually be modified, which again calls for a customer-based approach.
Because most definitions of service quality are derived from marketing problems, they reflect more or less the customer-based approach or value-based approach. There are several reasons for this one-sided view of service quality. In general, services do not bear attributes that can be measured physically. Furthermore, the customer is involved in the service-delivery process, so process standards are difficult to define. Additionally, the understanding of quality has moved from the product- and process-based view towards the customer- and value-based view in almost all industries, equating
quality and customer satisfaction. Therefore, the customer- and value-based approach has been used predominantly in recent research on service quality.
The most fundamental model for service quality, one that influences many other approaches, is derived from Donabedian's dimensions, namely structure, process, and outcome (Donabedian 1980). Quality of potentials includes material resources and people, for example, the capability of customer agents. Process quality includes subjective experiences of customers (e.g., the friendliness of employees) during the service-delivery process, as well as criteria that can be measured exactly (e.g., the time needed for answering a phone call). In comparison to manufacturing, process quality has an even greater impact for services. A service-delivery process that is designed and performed extremely well ensures, as in manufacturing, the quality of its final outcome. Additionally, to the customer, the process is part of the service because the customer may observe it or even participate in it. The quality of the outcome of a service is the third component of service quality. It reflects the degree to which the service solves the customer's problems and therefore satisfies his or her needs and expectations.
Many models for service quality are derived from the concept of customer satisfaction. According to this notion, service quality results from the difference between the customers' expectations and their experiences with the actual performance of the service provider. If the expectations are met or even surpassed, the quality perceived by customers is good or very good; otherwise the customers will remain dissatisfied and rate the service quality as poor. Based on this general assumption, several approaches to service quality aim at explaining the reasons for customer satisfaction (and therefore for service quality). This provides a basis for measuring the outcome quality of a service. Unfortunately, there are hardly any concepts that link the measurement of customer satisfaction or service quality to methods that influence it during service design or delivery. A first step towards this connection is the gap model, developed by Parasuraman et al. (1985) based on results from empirical research. The gap model identifies five organizational gaps within the process of service design and delivery that cause deficits in quality, leading to dissatisfied customers. The gaps occur in the following phases of the process:
Gap 1 results from the fact that the management of the service provider fails to understand the customers' expectations correctly.
Gap 2 denotes the discrepancy between management's conceptions of customers' expectations and the service specifications that are derived from them.
Gap 3 is caused by a discrepancy between service specifications and the performance that is delivered by the service provider.
Gap 4 consists of the discrepancy between the delivered performance and the performance that is communicated to the customer.
Gap 5 is the sum of gaps 1 through 4. It describes the discrepancy between the service the customer expected and the service that he or she actually experienced.
According to Parasuraman et al., customer satisfaction (i.e., gap 5) can be expressed in terms of five dimensions: tangibles, reliability, responsiveness, assurance, and empathy (Parasuraman et al. 1988). These dimensions are the basis for the SERVQUAL method, a well-known method for measuring the quality of services through assessing customer satisfaction. The following section addresses different
rors very early on. Furthermore, they can be used as training material for employees and customers. Figure 3 shows an example of a service blueprint.
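The gap-5 logic of the model above, perceived quality as perception minus expectation aggregated over the five SERVQUAL dimensions, can be sketched as a simple scoring routine. The ratings below are invented for illustration.

```python
# SERVQUAL-style gap-5 scores: perception minus expectation per dimension.
DIMENSIONS = ["tangibles", "reliability", "responsiveness", "assurance", "empathy"]

def gap_scores(expectations: dict[str, float],
               perceptions: dict[str, float]) -> dict[str, float]:
    """Per-dimension difference scores; negative values signal a quality deficit."""
    return {d: perceptions[d] - expectations[d] for d in DIMENSIONS}

# Invented example ratings on a 7-point scale.
expect = {"tangibles": 5.0, "reliability": 6.5, "responsiveness": 6.0,
          "assurance": 6.0, "empathy": 5.5}
perceive = {"tangibles": 5.5, "reliability": 5.8, "responsiveness": 5.0,
            "assurance": 6.0, "empathy": 5.0}

scores = gap_scores(expect, perceive)
overall = sum(scores.values()) / len(scores)
print(scores)
print(f"overall gap-5 score: {overall:.2f}")
```

A negative overall score means expectations were not met; note, however, the psychometric caveats about such difference scores discussed earlier in this chapter.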
3.4.
Resource-Related Quality Concepts
Approaches to service quality that apply to the structure dimension can be subsumed under the heading resource concepts. These include, most importantly, human resources concepts (especially qualification concepts), as well as the infrastructure necessary to deliver the service and service-support tools in the form of suitable information and communication technologies. The analysis of customer-employee interaction is crucial to enable appropriate recruiting and qualification of the employees who are to deliver the service. Two questions arise immediately in regard to human resource management in services.
There is an ongoing debate on whether the know-how required for running a service system should be put into people or into processes. The job enlargement approach requires significant effort in recruiting, training, and retaining the kind of employees who are able to ensure high
[Figure 3: A service blueprint for a car certification service. Activities such as receiving the order, establishing contact, checking equipment, driving to the customer, preparing and examining the car, taking digital photos, documenting results, collecting payment, handing over the report, signing and dispatching the certificate, transferring data, and printing the certificate are arranged along the line of external interaction, the line of visibility, and the line of internal interaction.]
Figure 3 Service Blueprint of a Car Certification Service. (From Meiren 1999)
quality through personal effort and skills. The production-line approach tries to ensure service quality by elaborate process design, therefore calling for lower requirements from the workforce involved.
The management of capacity is crucial in services because they cannot be produced to stock. This calls for concepts that enable a service provider to assign employees in a flexible way.
Role concepts are one way to deploy human resources during the service-delivery phase (Frings and Weisbecker 1998; Hofmann et al. 1998) that addresses both questions described above. Roles are defined groups of activities within the framework of a role concept. The roles are then linked to employees and customers. It is possible for several employees or several customers to perform one and the same role and for one employee or one customer to perform several different roles. Role concepts are a particularly useful tool for simplifying personnel planning, for instance in connection with the selection and qualification of the employees who will later be required to deliver the service. Most importantly, potential bottlenecks in the service-delivery phase can be identified extremely early on and suitable action taken to avoid them. Moreover, role concepts provide a starting point for formulating tasks in the area of work planning and enable customer behavior to be analyzed and planned prior to the service-delivery process. Finally, the use of roles does not imply any direct relationship to fixed posts or organizational units, and the concept is thus extraordinarily flexible.
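At its core, such a role concept is a many-to-many mapping between people and defined groups of activities. The sketch below (with invented roles and names) shows how an unstaffed role, a potential delivery bottleneck, can be detected before the service-delivery phase.

```python
# Roles as defined groups of activities (many-to-many with people).
roles = {
    "inspector": ["examine car", "document results"],
    "dispatcher": ["receive order", "schedule visit"],
    "cashier": ["collect payment"],
}

# One person may hold several roles; one role may be held by several people.
assignments = {
    "Anna": ["inspector"],
    "Ben": ["dispatcher"],
}

def unstaffed_roles(roles: dict[str, list[str]],
                    assignments: dict[str, list[str]]) -> set[str]:
    """Roles to which no person is assigned: potential delivery bottlenecks."""
    staffed = {r for held in assignments.values() for r in held}
    return set(roles) - staffed

print(unstaffed_roles(roles, assignments))  # -> {'cashier'}
```

Because roles are decoupled from fixed posts, filling the gap is a planning decision (assign the role to Anna, to Ben, or to a new hire) rather than an organizational change.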
4.
iented view on services. These different views have not been integrated much, mainly because very different scientific disciplines are involved. Obviously, both approaches are needed. Nowadays service processes are highly complex and call for the application of sophisticated methods from industrial engineering, while on the other hand the outcome depends heavily on the performance of employees and customers. One way to handle this complexity is to apply a system view to services, as proposed by several authors (e.g., Lovelock 1995; Kingman-Brundage 1995). Systems are made up of elements bearing attributes and the relationships between those elements. Elements may be (sub)systems themselves. If the elements of a service system are defined as those different components that have to be designed and managed, then the most elementary system model for services is made up of three basic elements:
of the service is obtainable.
5. If the variant is valid, the configuration of the service will be set up; that is, those delivery processes that are required for the particular variant are activated.
6. If a process includes participation of the customer, the customer must be informed about the expected outcome and form of the cooperation. Results produced by the customer must be fed back into the delivery process.
7. As soon as all threads of the delivery process (carried out by the service provider or the customer) have been terminated, the outcome will be delivered and an invoice sent to the customer.
8. If the results are satisfactory, the contact between customer and service provider comes to an end and the overall process is finished. The customer may order some additional service or may return later to issue a new order. If the results leave the customer dissatisfied, a service-recovery procedure may be initiated or the customer may simply terminate the business relation.
The information needed to control such a process is normally distributed between the people involved (employees and customers) and the documents that define the processes. However, the reasoning outlined above shows that it is useful to combine some elements within a product model. These considerations lead to the product model depicted in Figure 4, which uses an object-oriented notation according to the Unified Modeling Language (UML). Using the object-oriented notion of a system, the individual elements of the product model are displayed as classes that have a unique name (e.g., Service Product), bear attributes (e.g., External Product Information), and use methods for interaction with other classes.
The product model is centered around the class Service Product. This class identifies the product, carries information for internal use, and provides all information regarding valid configurations of the product. Service Product may be composed of other, less complex products, or it may be an elementary product. It relates to a class Service Concept that defines the customer value provided by the product, the customers and market it aims at, and the positioning in comparison to its competitors. The class Service Product is activated by whatever provides the customer's access to the service system. Access Module carries information on the product for external distribution and records the specification of the product variant as requested by the particular customer. Access Module eventually activates Service Product, which subsequently checks the consistency and validity of the requested product configuration and sets up the configuration by assembling individual modules of the service. Then Deliver Product is carried out by activating one or several Service Functions that make up the particular product configuration. During and after the service-delivery process, Service Product may return statistics to the management system, such as cost and performance data. Also, Product Features may be attached to a Service Product, potentially including Material Components. Each Service Product contains one or several Service Functions that represent the Outcome of the service. A Service Function carries information about the expected Quality Level (which may
[Figure 4: Product model in UML notation. Classes and attributes legible in the diagram: Service_Product; Service_Concept (Customer_Value, Target_Customer_and_Market, Competition); Customer_Interaction (Medium, Frequency, Location, Duration, Employee_Role, Customer_Role, Activate_Interaction); associations carry multiplicities 1 and 1..*.]
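The class relationships described above can be sketched in code. The following Python dataclasses are an illustrative rendering only: the class and attribute names come from the text and Figure 4, but the method bodies, type choices, and the consistency-check rule are assumptions, not the handbook's specification.

```python
# Hypothetical rendering of the UML product model; behavior is assumed.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceConcept:
    customer_value: str
    target_customer_and_market: str
    competition: str

@dataclass
class ServiceFunction:
    name: str
    expected_quality_level: str

@dataclass
class ServiceProduct:
    name: str
    concept: ServiceConcept
    functions: List[ServiceFunction]                      # 1..* Service Functions
    subproducts: List["ServiceProduct"] = field(default_factory=list)

    def check_configuration(self, requested: List[str]) -> bool:
        """Consistency check: every requested function must exist."""
        available = {f.name for f in self.functions}
        return set(requested) <= available

@dataclass
class AccessModule:
    external_product_information: str
    product: ServiceProduct
    requested_variant: List[str] = field(default_factory=list)

    def activate(self) -> bool:
        """Activate Service Product and validate the requested variant."""
        return self.product.check_configuration(self.requested_variant)
```

A valid variant (only existing Service Functions requested) activates successfully; an invalid one is rejected, mirroring steps 4-5 of the delivery process described in the text.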
TABLE 1  Various Types of Quality Assessment

Quick check
  Objectives: Getting started fast, overview of status quo
  Participants: 1 person, e.g., the quality manager
  Questionnaire: Complete questionnaire
  Shortcomings: The result shows the view of just one person

Assessment workshop
  Objectives: Awareness building for the management team, action plan
  Participants: Ca. 5-7 persons
  Questionnaire: Complete questionnaire
  Shortcomings: Only the view of management is integrated

Survey
  Objectives: Appraisal of practices or processes by all relevant employees
  Participants: All employees of the unit
  Questionnaire: Selected and customized items
  Shortcomings: Broad understanding of the method and its topics has to be disseminated

Documented in-depth analysis
  Participants: Team of specialists
  Questionnaire: Specific questionnaire
  Shortcomings: Very high effort
Figure 5  Categories and Key Areas for the Assessment of Service Systems ("Quality Management" is among the labels visible in the figure).
2. Customer satisfaction: How does the service organization define and measure customer satisfaction? 3. Business results: How does the service organization establish performance measurements using financial and nonfinancial criteria?

A questionnaire has been derived from these categories and the related key areas. It is the most important tool in the assessment method. Each key area contains several criteria related to one or more items of the questionnaire. These items indicate the maturity level (see Section 5.3) of the service organization with respect to the criterion under consideration. The questionnaire may be utilized for analyses at different depths, from an initial quick self-assessment by senior management to an in-depth discussion of the different key areas in workshops and quality circles. The results serve as a basis for planning improvement measures.
5.3.
A Maturity Model for Quality Management in Service Organizations
TABLE 2  Maturity Levels of Service Organizations

Level 1: Ad hoc management
  Principle characteristics: Service quality is attained by coincidence or by repairing mistakes.
  Key tasks: Quality and service policy, dissemination of a common understanding of quality

Level 2: Repeatable
  Principle characteristics: Given identical circumstances, a certain quality level can be reached repeatedly.
  Key tasks: Structured documentation of processes and products

Level 3: Process definition
  Principle characteristics: ISO 9000 type of quality management.
  Key tasks: Ensuring effectiveness and efficiency of processes by use of performance measures

Level 4: Quantitative management
  Principle characteristics: Feedback loops and performance measures are established.
  Key tasks: Refining of feedback loops, ensuring the participation of each employee

Level 5: Continuous improvement
  Principle characteristics: All members of the company are involved in improvement actions.
  Key tasks: Continuous review of measures and feedback loops
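As a rough sketch of how questionnaire items might be scored against the five levels of Table 2 (the data structure and the scoring rule here are assumptions for illustration; the handbook does not prescribe an algorithm):

```python
# Illustrative only: an organization reaches a level when all questionnaire
# items attached to that level, and to every level below it, are fulfilled.
MATURITY_LEVELS = {
    1: "Ad hoc management",
    2: "Repeatable",
    3: "Process definition",
    4: "Quantitative management",
    5: "Continuous improvement",
}

def maturity_level(item_scores):
    """Return (level, name) for the highest fully satisfied level.

    `item_scores` maps level -> list of booleans, one per questionnaire
    item associated with that level (a hypothetical structure).
    """
    level = 1
    for lvl in sorted(MATURITY_LEVELS):
        items = item_scores.get(lvl, [])
        if items and all(items):
            level = lvl
        else:
            break
    return level, MATURITY_LEVELS[level]
```

For example, an organization fulfilling all level-1 and level-2 items but failing a level-3 item would be rated at level 2, "Repeatable".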
[Figure: the assessment procedure. Preparation: selection of assessment type; setting up of an assessment team; training of the involved persons; kickoff workshop; definition of objectives. Next, the questionnaire is tailored: crossing out of N/A criteria; company- or industry-specific modification of the questionnaire. Then the assessment itself: collection of data and information; self-assessment; preliminary results; consensus building; determination of final score. Finally: definition of priorities; action plan; setting up of feedback loops.]
Easingwood, C. (1986), "New Product Development for Service Companies," Journal of Product Innovation Management, Vol. 3, No. 4, pp. 264-275.
Eversheim, W. (1997), Qualitätsmanagement für Dienstleister: Grundlagen, Selbstanalyse, Umsetzungshilfen, Springer, Berlin.
Eversheim, W., Jaschinski, C., and Roy, K.-P. (1993), Typologie Dienstleistungen, Forschungsinstitut für Rationalisierung, Aachen.
Fähnrich, K.-P. (1998), "Service Engineering: Perspektiven einer noch jungen Fachdisziplin," IM Information Management and Consulting, special edition on service engineering, pp. 37-39.
Fähnrich, K.-P., and Meiren, T. (1998), "Service Engineering," Offene Systeme, Vol. 7, No. 3, pp. 145-151.
Fähnrich, K.-P., and Meiren, T. (1999), Service Engineering: Ergebnisse einer empirischen Studie zum Stand der Dienstleistungsentwicklung in Deutschland, IRB, Stuttgart.
Fisk, R. P., Brown, S. W., and Bitner, M. J. (1993), "Tracking the Evolution of the Services Marketing Literature," Journal of Retailing, Vol. 69, No. 1, pp. 61-103.
Frings, S., and Weisbecker, A. (1998), "Für jeden die passende Rolle," it Management, Vol. 5, No. 7, pp. 18-25.
Garvin, D. A. (1984), "What Does Product Quality Really Mean?," Sloan Management Review, Fall, pp. 25-43.
Gogoll, A. (1996), Untersuchung der Einsatzmöglichkeiten industrieller Qualitätstechniken im Dienstleistungsbereich, IPK, Berlin.
Haischer, M. (1996), "Dienstleistungsqualität: Herausforderung im Service Management," HMD Theorie und Praxis der Wirtschaftsinformatik, No. 187, pp. 35-48.
Heskett, J. L., Sasser, W. E., and Schlesinger, L. A. (1997), The Service Profit Chain: How Leading Companies Link Profit and Growth to Loyalty, Satisfaction, and Value, Free Press, New York.
Hofmann, H., Klein, L., and Meiren, T. (1998), "Vorgehensmodelle für das Service Engineering," IM Information Management and Consulting, special edition on service engineering, pp. 20-25.
Humphrey, W. S. (1989), Managing the Software Process, Addison-Wesley, Reading, MA.
Jaschinski, C. M. (1998), Qualitätsorientiertes Redesign von Dienstleistungen, Shaker, Aachen.
Kaplan, R. S., and Norton, D. P. (1996), The Balanced Scorecard, Harvard Business School Press, Boston.
Kingman-Brundage, J. (1995), "Service Mapping: Back to Basics," in Understanding Services Management, W. J. Glynn and J. G. Barnes, Eds., Wiley, Chichester, pp. 119-142.
Lovelock, C. H. (1995), "Managing Services: The Human Factor," in Understanding Services Management, W. J. Glynn and J. G. Barnes, Eds., Wiley, Chichester, pp. 203-243.
Malorny, C. (1996), Einführen und Umsetzen von Total Quality Management, IPK, Berlin.
Meiren, T. (1999), "Service Engineering: Systematic Development of New Services," in Productivity and Quality Management, W. Werter, J. Takala, and D. J. Sumanth, Eds., MCB University Press, Bradford, pp. 329-343.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1985), "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing, Vol. 49, No. 3, pp. 41-50.
Parasuraman, A., Zeithaml, V. A., and Berry, L. L. (1988), "SERVQUAL: A Multiple-Item Scale for Measuring Consumer Perceptions of Service Quality," Journal of Retailing, Vol. 64, No. 1, pp. 12-37.
Ramaswamy, R. (1996), Design and Management of Service Processes, Addison-Wesley, Reading, MA.
Reichheld, F. F. (1996), The Loyalty Effect, Harvard Business School Press, Boston.
Rommel, G., Kempis, R.-D., and Kaas, H.-W. (1994), "Does Quality Pay?," McKinsey Quarterly, No. 1, pp. 51-63.
Sampson, S. E. (1999), The Unified Service Theory, Brigham Young University, Provo, UT.
Schmenner, R. W. (1995), Service Operations Management, Prentice Hall, Englewood Cliffs, NJ.
Shostack, G. L. (1984), "Designing Services That Deliver," Harvard Business Review, Vol. 62, No. 1, pp. 133-139.
Stanke, A., and Ganz, W. (1996), "Design hybrider Produkte," in Dienstleistungen für das 21. Jahrhundert: Eine öffentliche Diskussion, V. Volkholz and G. Schrick, Eds., RKW, Eschborn, pp. 85-92.
THE FUTURE OF CUSTOMER SERVICE AND SERVICE TECHNOLOGY
day-to-day strategic and operational decisions. But this lifetime value is the critical issue for understanding why investments in service quality and customer satisfaction are not just expense lines in some budget but investments in bottom-line profit and the future of the company.
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
[Figure: "Employees Are Assets" flow diagram; labels: Good Hiring; Training; Wide Job Definitions; High Challenge & Responsibility; Satisfied Employees; Low Turnover of Good Employees; Improved Service; Customer Satisfaction; Sales, Earnings, Profits.]
TABLE 1  Increased Revenues That Can Result over Time from Improved Customer Service

Year     Revenues at 70%   Revenues at 80%   Revenues at 90%   Revenues at 100%
         Retention Rate    Retention Rate    Retention Rate    Retention Rate
1        $1,000,000        $1,000,000        $1,000,000        $1,000,000
2           770,000           880,000           990,000         1,100,000
3           593,000           774,000           980,000         1,210,000
4           466,000           681,000           970,000         1,331,000
5           352,000           600,000           961,000         1,464,100
6           270,000           528,000           951,000         1,610,510
7           208,000           464,000           941,000         1,771,561
8           160,000           409,000           932,000         1,948,717
9           124,000           360,000           923,000         2,143,589
10           95,000           316,000           914,000         2,357,948
Totals   $4,038,000        $6,012,000        $9,562,000        $15,937,425

Reprinted from Customer Service Operations: The Complete Guide by Warren Blanding. Copyright 1991 AMACOM, a division of the American Management Association International. Reprinted by permission of AMACOM, a division of American Management Association International, New York, NY. All rights reserved. http://www.amacombooks.org. Figures are based on 10% account growth annually.
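The table's columns follow a simple compounding pattern: each year the retained fraction of customers stays while accounts grow 10%, so revenue in year n is approximately $1,000,000 x (retention x 1.10)^(n-1), rounded to the nearest $1,000 (a couple of cells in the 70% column deviate slightly from this formula, but the 80%, 90%, and 100% columns and all column totals match it). A sketch of that arithmetic, not taken from Blanding's book:

```python
# Sketch of the pattern behind Table 1: retained customers compound with
# 10% annual account growth from a $1,000,000 year-1 base.
def retention_revenues(retention, years=10, base=1_000_000, growth=1.10):
    """Yearly revenues (rounded to the nearest $1,000) and their total."""
    factor = retention * growth
    revenues = [round(base * factor ** n, -3) for n in range(years)]
    return revenues, sum(revenues)

revs, total = retention_revenues(0.80)
# revs[1] == 880_000 and total == 6_012_000, matching the 80% column.
```

The point of the table falls out of the exponent: at 90% retention the 10% growth almost exactly offsets defection, while at 100% retention the same growth compounds into nearly four times the ten-year revenue of the 70% case.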
TABLE 2  Annual Revenue Loss from Customer Defection

If You Lose:          Spending      Spending      Spending      Spending      Spending      Spending
                      $5 Weekly     $10 Weekly    $50 Weekly    $100 Weekly   $200 Weekly   $300 Weekly
1 customer a day      $94,900       $189,800      $949,000      $1,898,000    $3,796,000    $5,694,000
2 customers a day     189,800       379,600       1,898,000     3,796,000     7,592,000     11,388,000
5 customers a day     474,500       949,000       4,745,000     9,490,000     18,980,000    28,470,000
10 customers a day    949,000       1,898,000     9,490,000     18,980,000    37,960,000    56,940,000
20 customers a day    1,898,000     3,796,000     18,980,000    37,960,000    75,920,000    113,880,000
50 customers a day    4,745,000     9,490,000     47,450,000    94,900,000    189,800,000   284,700,000
100 customers a day   9,490,000     18,980,000    94,900,000    189,800,000   379,600,000   569,400,000

Reprinted from Customer Service Operations: The Complete Guide by Warren Blanding. Copyright 1991 AMACOM, a division of the American Management Association International. Reprinted by permission of AMACOM, a division of American Management Association International, New York, NY. All rights reserved. http://www.amacombooks.org.
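Each cell of Table 2 is straightforward arithmetic: customers lost per day x 365 days x weekly spend x 52 weeks. A minimal sketch:

```python
# The arithmetic behind Table 2: a defecting customer takes a full year
# of weekly spending with them, and defections accrue every day.
def annual_defection_loss(customers_per_day, weekly_spend):
    """Annual revenue lost when that many customers defect each day."""
    yearly_customers = customers_per_day * 365
    yearly_spend_per_customer = weekly_spend * 52
    return yearly_customers * yearly_spend_per_customer

# annual_defection_loss(1, 5) -> 94_900, the first cell of Table 2.
```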
[Figure: chart with a vertical axis from 400 to 1400; the recoverable labels are Referrals, Sales & Profits, and Reduced Costs.]
Service quality and customer satisfaction are not new constructs. As early as the mid-1960s, Peter Drucker, the original management guru, prescribed the following questions for all businesses: Who is your customer? What is the essence of your business? What does your customer value? And how can your business serve the customer better than the competition? It is clear that although the 1990s were an era of talking about service quality and customer satisfaction, business has a long way to go in delivering on a customer satisfaction promise. It would certainly be depressing if all we have accomplished since the 1982 publication of Peters and Waterman's In Search of Excellence is to add a customer service / customer satisfaction statement to the mission statements of our Fortune 500 companies. Five years ago, we would have been pleased to state that customer satisfaction was being discussed in the boardrooms and had a chance to become a part of business strategy. Now this is not enough. Delighting customers must become a focal strategy. Every business, every executive, and every manager must assess how their function / work contributes to customer satisfaction.
2. IT'S THE CUSTOMER, STUPID . . . NOT SERVICE QUALITY, PRODUCT QUALITY, OR CUSTOMER SERVICE!
The issue is satisfaction. Consumers will pay for two things: solutions to problems and good feelings. When the consumer can purchase almost any product from multiple channels (TV, catalog, Internet, multiple stores), the only thing that can differentiate one business from another is the way it makes the customer feel. Businesses must decide whether they will simply be a vending machine and wait for the customer to deposit money in the slot and pick up their product / service, or whether they will add value. If businesses choose to be a vending machine, consumers will choose the cheapest, most convenient vending machine. But if businesses add value, consumers will expend effort, remain loyal, and purchase at greater margin for longer periods of time. The issue is not simply service and product quality. Businesses can make their stuff better and better, but if it is not the stuff that consumers want, then consumers will not buy. In the 1960s, the big names in slide rules were Post, Pickett, and K&E. Each made considerable profit by selling better-quality and better-functioning slide rules, year after year after year. These three companies did exactly what the experts say is important: make quality products. Yet in the late 1970s slide rule sales disappeared, and so did these companies, not because they made slide rules of lower quality but because new companies by the name of Hewlett-Packard and Texas Instruments began making electronic calculators. Not a single slide rule company exists today. None of the three dominant companies noticed the shift. All three continued to make quality slide rules for customers who were now buying electronic calculators. Making the best-functioning product does not guarantee survival. The customer passed them by. Satisfaction results not simply from quality products and services but from products and services that consumers want in the manner they want. The side of the business highway is littered with companies that continued to make their stuff better without watching where the consumer was headed. In 1957, the top 10 businesses in Chicago were Swift, Standard Oil, Armour, International Harvester, Inland Steel, Sears, Montgomery Wards, Prudential, and the First Bank of Chicago. Thirty-five years later, the list includes only Sears and First National Bank from the original list, plus Ameritech, Abbott Labs, McDonald's, Motorola, Waste Management, Baxter, CNA Financial, and Commonwealth Edison. These two lists dramatically show the evolution that business goes through. The discussion of a customer-satisfaction orientation is not limited to a few select businesses and industries. A satisfaction orientation exists in profit, not-for-profit, public and private, business-to-business, and business-to-consumer settings (see Table 3).
2.1.

TABLE 3

Products: Retail, general; Retail, special; Mail order; Do-it-yourself; Home delivery; Party sales; Door-to-door; In-home demos; Auctions; Estate / yard sales; Equipment rental; Subscriptions; Negative options; Manufacturing to inventory; Manufacturing to order; Commodities; Finished goods; Hi-tech / low-tech; Bulk / packaged; Consumer / industrial; Original equipment manufacturers / distributors; Direct / indirect; Consignment; Site delivery; On-site construction

Services: Insurance; Banking, financial; Travel; Health care; Real estate; Domestic help; Lawn, yard care; Bridal counseling; Catering; Riding, sports; Recreation; Entertainment; Beauty; Diet plans; Child care; Education; Consulting

Reprinted from Customer Service Operations: The Complete Guide by Warren Blanding. Copyright 1991 AMACOM, a division of the American Management Association International. Reprinted by permission of AMACOM, a division of American Management Association International, New York, NY. All rights reserved. http://www.amacombooks.org.
of a functional department to oversee customer relationship management and an executive responsible for the voice of the consumer. A customer-focused mission statement should make it clear to all who work in the company and do business with the company that the customer is not to be ignored and should be the focal point of all components of the business. Every department and every individual should understand how they are serving the customer or serving someone who is serving the customer. The mission can be something as simple as Lands' End's "Guaranteed. Period." or the engraving on a two-ton granite rock that marks the entrance to Stew Leonard's grocery stores in Connecticut: Stew's Rules: "Rule 1. The customer is always right. Rule 2. If you think the customer is wrong, reread rule number 1." Then there is the elegant mission statement on the walls of the Ritz-Carlton: "We are Ladies and Gentlemen Serving Ladies and Gentlemen. . . . The Ritz-Carlton experience enlivens the senses, instills well-being, and fulfills even the unexpressed wishes and needs of our guests." And the mission statement that helps explain why Wal-Mart is the top retailer in the world: "We exist to provide value to our customers. To make their lives better via lower prices and greater selection; all else is secondary." The foundation for a visionary company is the articulation of a core ideology (values and guiding principles) and essence (the organization's fundamental reason for existence beyond just making money) (Collins and Porras 1994).
3.2.
Step 2: Proactive Policies and Procedures
Once an organization has established the principles of customer orientation and service quality as part of its reason for being, it must implement that vision throughout in specific policies and procedures that define doing business as and at that company. These policies can be taught as simply as they do at Nordstrom's, where customer service training consists of the prescription "Use your good judgment always," or as complex as a 250-page policies and procedures manual covering acceptance of orders, handling major and minor accounts, order changes, new customers, phone inquiries, return policies, ship dates, orders for future delivery, and hundreds of other specific policies and procedures. In a business world that no longer has time for a five-year apprenticeship in which the policies and procedures are passed on through one-on-one learning, how things are done and their standards must be available for all. High levels of turnover mean that teaching policies and procedures is a 7-day-a-week, 24-hour-a-day requirement.
3.2.1.
One Very Important Policy: The Guarantee
One of the more important policies and procedures for creating service quality and satisfaction is the guarantee. In a world in which products and services are commodities (interchangeable products that can be purchased at a number of channels), customers want to know that the company stands behind the purchase. Companies may not be able to guarantee that they won't make mistakes or that the product will never fail, but they can guarantee that they will stand behind it. Unfortunately, many companies have policies that state what they will not do rather than reassuring the customer with what they will do. Lands' End tells us that everything is "Guaranteed. Period." Restoration Hardware tells us, "Your satisfaction is not only our guarantee, it's why we're here. To be perfectly clear, we insist you're delighted with your purchase. If for any reason a selection doesn't meet your expectations, we stand ready with a full refund or exchange." Nordstrom's has been built on its legendary policy of taking back anything (it is part of the Nordstrom's legend that they once accepted a set of tires despite never having sold tires). Same-day de
Organizing a department whose top executive reports to the CEO is likely to send
a signal to the organization that customer service / satisfaction is important.
Direct reporting to the CEO allows policies about customers to be made at the h
ighest levels and reduces the probability that these policies will get lost in t
he history and bureaucracy of the business.
4.2.
Centralization
Mystery shoppers, mystery callers, and quality monitoring are all techniques used to monitor service quality standards. Unfortunately, more attention is paid to how those standards are measured than to the establishment of standards that are really related to outcome measures that matter. Simply stated, what gets measured gets done. What gets rewarded gets repeated. Telling frontline people that they must greet the customer within 25 seconds can easily be monitored and will increase the chance that the frontline person will greet the customer quickly. If a standard is established, it should be related to satisfaction, purchase, or loyalty in some way. Standards should be related to issues of bottom-line significance. Unfortunately, many customer satisfaction and service quality standards are set because they can be easily measured and monitored (greet customers within 25 seconds when they enter the store, answer the phone by the fourth ring, respond to the e-mail within 24 hours). Or they are set because they have been historically used in the organization. Creating meaningful customer service and service quality standards plays a role in establishing the company as an outstanding customer-oriented organization. Few companies and organizations have standards, and those who have them do not tie them to strategy and mission. But a few have standards that they have found to be causal determinants of customer satisfaction and profitability. These companies are leaders in their fields.
4.4.
Managing Complaints
At any time, almost 25% of your customers are dissatisfied with your products or service. Yet fewer than 5% of these consumers ever complain. Complaints are a fertile source of consumer intelligence. Businesses should do everything to maximize the number of complaints from dissatisfied customers. Complaints define what companies are doing wrong so that systemic changes can be made if needed. Second, research is clear in showing that a dissatisfied consumer who complains and is taken care of is significantly more loyal than a consumer who does not complain. Complaints are strategic opportunities. Most consumers who complain are not irate. Systems and employees who are not responsive create irate consumers. Training and empowerment allow front-line employees to reduce anger and create loyal customers. Companies that understand the strategic value of complaints have instituted systems and access points that literally encourage consumers to complain. Internet access sites and e-mail addresses are the wave of the future, and companies will need to be prepared for the volume of contacts received in this manner. More likely, today's companies have a call center at the center of their complaint-management system. With simple training and sound procedures and policies, most consumer complaints can be resolved quickly and effectively at the lowest levels of contact. It costs five times as much to get new customers as it does to keep customers. Call centers, and in the future e-mail and Web access, provide companies with the cost-effective ability to manage complaints, turning a dissatisfied customer into a loyal one. But maybe more important, a company that recognizes a complaint as a strategic opportunity encourages complaints and is more likely to use the information to make strategic development and marketing decisions. What you do not hear can and will hurt you.
4.5.
Call Centers
The call center has emerged as a vital link between customers and businesses after the consumer has purchased the products and / or services. These centers range from small operations to massive operations employing thousands of telephone service representatives. The birth of the 800 toll-free number made access to companies cheap and easy for consumers. Subsequent advances in telecommunications technology have enabled businesses to handle volumes of calls unimaginable five years ago at affordable costs.
4.5.1.
Call Center Operations and Logistics
Inbound and outbound communications form the thrust of call center operations. The Internet is forming the basis of low-cost communication for the business-to-business and consumer-to-business enterprise. EDI (electronic data interchange) was a novelty five years ago. Now consumers (and businesses who are consumers of other businesses) would probably prefer to be able to order, check orders, check inventory, locate where the products are en route, pay, and follow up without having to touch or talk to a live person. Sophisticated natural language recognition voice recognition programs (interactive voice response technology [IVR]) are replacing the boring and ineffective first-generation IVR ("press or say 1 if you . . . press or say 2 if you . . ."). IVR can become a cost-effective means of handling 50-75% of all incoming phone conversations. Telephonic advances allow a consumer to speak in his or her natural voice about the problem he or she is experiencing and the call to be routed so that the most qualified customer service agents will get the call before it has even been picked up. More importantly, switch technology allows routing of customers based on their value to the company. High-priority customers can get through quickly, while lower-valued customers can wait in the queue.
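The value-based routing just described amounts to a priority queue keyed on customer value. A minimal sketch (illustrative only; real switch and ACD products expose proprietary interfaces, and `CallQueue` is a hypothetical name):

```python
# Value-based routing of waiting callers: higher-value customers leave
# the queue first; equal-value customers are served first come, first served.
import heapq
import itertools

class CallQueue:
    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # arrival counter breaks ties

    def arrive(self, caller_id, customer_value):
        # Negate the value so the largest value pops first from the min-heap.
        heapq.heappush(self._heap, (-customer_value, next(self._order), caller_id))

    def next_caller(self):
        """Caller to route to the next free agent, or None if the queue is empty."""
        return heapq.heappop(self._heap)[2] if self._heap else None

q = CallQueue()
q.arrive("alice", customer_value=120)
q.arrive("bob", customer_value=900)    # high-priority customer
q.arrive("carol", customer_value=120)
# q.next_caller() -> "bob", then "alice", then "carol"
```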
The customer is the most important part of a business. No customer means no sales means no reason to exist. If the customer is so important, why do we see few executive-level positions with customer satisfaction in the title? The financial side of the business is important, so we have a vice president of finance. The marketing side of the business is important, so we have a vice president of marketing. The consumer is important, but we do not have a vice president for consumer satisfaction. Where to begin: 1. Make a commitment to exceptional customer satisfaction by making certain that job descriptions have customer satisfaction accountabilities. Have a person whose only responsibility is to think about how the operations and marketing and recruitment affect the customer. Because we believe that hiring minority individuals is important, we put into place an executive who audits all aspects of this part of the company. Without a key individual whose focus is customer satisfaction, customer satisfaction may be lost in the day-to-day pressures. 2. The expression is, if you are not serving the customer, you'd better be serving someone who is. Thus, positions should include some accountability for customer satisfaction. What gets measured and rewarded gets done. If customer satisfaction is important, then monitoring, measuring, and rewarding based on customer satisfaction are critical. 3. Customer satisfaction must be lived by the top executives. Customer-oriented cultures wither when senior executives only talk the talk. Every Saturday morning, executives at Wal-Mart's Bentonville, Arkansas, headquarters gather for a meeting that is satellite linked to each of their 2600 stores. They do a number of things at this meeting (point out and discuss hot items, cost savings, the Wal-Mart cheer, etc.), but nothing is more important than the senior executive at the meeting getting up and asking all there, "Who is the most important person in Wal-Mart's life?" and then hearing all respond, "The customer." 4. Hire for attitude, train for skill. There is simply too much looking for skilled people and then hoping they can learn customer service skills. In studying the best-in-class companies, we have observed that selecting people at all levels who are eagles (show evidence of the ability to soar) and then teaching them the skill set for the job is better than getting people who have some skills and hoping they will become eagles. 5. There are any number of commercially available screening and selection tools focusing on customer satisfaction. Selecting, training, and developing a fanatical devotion to the customer in employees is the critical piece of the puzzle.
4.6.1.
more clearly how customer service fits the big picture. Stories of companies with exceptional customer service orientation and excellent training obscure the fact that training alone will never work. The organization, its people, and its processes must support the jobs of people who have direct customer service functionality. Customer service training is a continuous process. Measuring performance, holding people accountable, providing feedback, cross-training, refresher training, and customer visits all allow the growth and development of a customer service culture and mission.
4.7.
Serving the Internal Customer
One of the greater gaps found in organizations attempting to become more customer focused is the lack of attention to the internal customer. A good example of an internal customer orientation is provided by the Haworth Company, an office systems manufacturer. Their quality-improvement program revolved around the internal customer. Each work unit identified all its internal customers. They prioritized these customers, identified key work output in measurable terms, set standards, and measured their performance with their internal customers. Many companies have instituted internal help desks that employees can call to get answers and solutions to their work problems, much the same way that external customers use call centers for answers to questions and solutions to problems.
5.
channels (contact centers, storefronts, and webcenters) need to understand the value that is added at each and every point of customer contact. This requires a redefinition of the role of each person who is acting in one of these contact channels. As an example, storefront people need to be comfortable with the Web and the call center. They need to be aware of how each of these access channels operates. This will require them to have access to these other channels within the store. The company face to the customer should be such that whatever channel the customer selects, he or she will be able to conduct the business he or she wants and be recognized by his or her contact point as a person, not a transaction. No story will need to be told twice. No problem will go unsolved. No question will go unanswered. The customer will be proactively informed of any changes. The organization must realize that this is not simply a technology purchase. This is not just pasting the latest electronic gizmo or switch or software onto your call center or website. Technology only enables the organization to enhance the customer experience. Customer relationship management (CRM) is not a toolbox; it is a new way of seeing the customer and then using a set of tools to enhance the total customer experience. In a similar manner, the director, manager, or senior executive will have access to a set of cockpit-like analytical tools that will provide her or him enterprise-wide access to aggregate real-time information. So while the individual performance of a telephone service representative may not be
662
ECHNOLOG
Accessibility is clearly a business issue. Dell has recently moved past Compaq Computer as the leading PC seller in the United States, yet Dell doesn't sell a single computer in the store. Intel and Cisco will book more from direct Internet orders than all the business-to-consumer sales that have taken place up until now. We can check inventory levels, place orders, and track these orders and delivery dates from these companies anytime we want to from our desk (as well as answer or have answered any of the questions we might have).

Accessibility is a control issue. Our ability to track our FedEx and UPS packages gives us power (and satisfaction). Imagine my smile when, after the recipient of a report I sent by FedEx claimed not to have received it, meaning my bonus for doing the work at that date would not be forthcoming, I was able to access (while we were speaking) the very signature that signed for the FedEx package exactly one hour before the due time (signed by this guy's secretary).

Accessibility is not just a marketing issue but an enterprise one. The story of how Avis trains its people better and more efficiently is the story of accessibility. Avis had the goal of consistent training across all employees anywhere, anytime in the organization. The company developed a system called Spotlight, a virtual multimedia learning center that can be accessed in any of the 15,000 offices across 1,210 countries. After hearing a greeting and motivational talk by the CEO and learning the basic Avis skill set, the new employee meets a customer who takes him or her through the most common problems (representing 80% of all the escalated customer dissatisfaction issues). There are multiple lessons of accessibility here. First, anyone in the organization has immediate access to training. Classes do not have to be scheduled. Trainers do not have to be trained. The associate in Asia can access the training as well as an associate in New York City. This training is infinitely repeatable. But less obvious is the accumulation of customer information so that employees can know and learn from the top consumer problems, questions, and difficulties in an almost real-time process. While Avis is training to the specific situation, the competition is talking about general issues of customer dissatisfaction / satisfaction. Accessibility of information has led to specificity of attack.

Accessibility is an inventory issue. Accessibility among vendors, manufacturers, and retailers will lead to automatic replenishment. The complex accessibility among Dell, FedEx, and the many manufacturers who make Dell's parts results in FedEx managing just-in-time delivery of all the parts needed to build that special PC that the consumer just ordered today. At the end of the day, the system will tell us what we have on hand compared with what we want. If there is a difference, the difference will be ordered and the system will learn about proper inventory levels so that differences will be less likely in the future. In other words, the system learns from its experiences today and adjusts for tomorrow.

Accessibility is a retooling issue. Retooling a company for customer access means reengineering the technology and the people. Most importantly, whatever the bottom-line impact of accessibility is on running a better business, the bottom-line impact on consumers is key. Accessibility enhances the customer's total experience. Accessibility builds customer and employee relationships with the company that empower them to change the enterprise and the enterprise to change them. Accessibility builds brand meaning and value. Consumers are finding accessibility a differentiating brand value. Accessibility can create delight: the difference between the just satisfied and the WOWed.

In addition to accessibility, the future of service quality and customer satisfaction has to do with how information about the consumer will lead to extraordinary relationship building. Many names are emerging for this, such as database marketing, relationship marketing, and one-to-one marketing. This is not a new name for direct mail, or an order-taking system, or a substitute for a solid marketing strategy, or a solution to a bad image, or a quick fix for a bad year. This is a new paradigm that Peppers and Rogers (1997, 1999) call one-to-one marketing. The goal is to identify your best customers ("best" can mean more profitable, most frequent purchasers, highest proportions of business, loyalty) and then spend the money to get them, grow them, and keep them loyal. You need to fence your best customers off from competition. How easy would it be for another company to come in and steal these clients? How easy would it be for these best customers to form a relationship similar to what they have with you with another company?
6.
CUSTOMER SERVICE AND SERVICE QUALITY

TABLE 4 Customer Service Audit Checklist
OBJECTIVE: The following list is an aid to measure and monitor customer orientation.
PROCEDURE:
- Do you have a written customer service policy?
- Is this given a wide circulation within the company?
- Do customers receive a copy of this policy?
- Is customer service included in the marketing plan?
- What elements of customer service do you regularly monitor?
- Do you think other aspects of service should be monitored?
- Do you monitor competitive service performance?
- Do you know the true costs of providing customer service?
- Do customers have direct access to information on stock availability and delivery?
- How do you report to customers on order status?
- Is there a single point of contact for customers in your company?
- Do customers know who this individual is?
- Is any attempt made to estimate the cost of customer service failures (for example, a part delivery, late delivery, etc.)?
- Do you seek to measure the costs of providing different levels of service?
- Do you have internal service measures as well as external measures?
- How do you communicate service policies to customers?
- What is your average order cycle time? How does this compare with that of your major competitors?
- Do you monitor actual order-to-delivery lead-time performance?
- Do you have a system for accounting for customer profitability?
- Does the chief executive regularly receive a report on customer service performance?
- Do you consciously seek to hire individuals with a positive attitude towards customer service?
- Does customer service feature in the criteria for staff promotion?
- Do you use quality control concepts in managing customer service?
- Do you differentiate service levels by product?
- Do you differentiate customer service levels by customer type?
- Do you have a standard cost for an out-of-stock situation (for example, cost of lost sales, cost of back orders, etc.)?
- Do you provide customers with a customer service manual?
- Do you monitor the internal customer service climate on a regular basis?
- Does your customer service organization effectively manage the client relationship from order to delivery and beyond?
- How do you monitor and respond to complaints?
- How responsive are you to claims from customers?
- Do you allocate adequate resources to the development of customer service?
- How do you seek to maintain a customer focus?
- Does customer service regularly feature at management meetings and in training programs?
- What specific actions do you take to ensure that staff motivation regarding customer service is maintained at a high level?
- Is the company image regarding customer service adequate for the markets in which it operates?
Adapted from Leppard and Molyneux 1994. Used with permission of International Thomson Publishing.
customer access. But first, senior executives must agree that providing customers with a consistent, thoughtful, and value-added total customer experience at any and all touch points is vital to their retention and loyalty and future acquisition. This will allow their organizations to be moving towards a yet-to-be-defined level of enhanced total enterprise access for employees and customers . . . which will enhance the employee and customer experience . . . which will create loyal and long-lasting employee and consumer relationships with your company . . . which means happy customers, happy employees, happy senior executives, happy shareholders, happy bankers, and of course happy consumers.
REFERENCES
Anton, J. (1996), Customer Relationship Management, Prentice Hall, Upper Saddle River, NJ.
Blanding, W. (1991), Customer Service Operations: The Complete Guide, AMACOM, New York.
Collins, J., and Porras, J. (1994), Built to Last: Successful Habits of Visionary Companies, HarperBusiness, New York.
Leppard, J., and Molyneux, L. (1994), Auditing Your Customer Service, Routledge, London.
Peppers, D., and Rogers, M. (1997), The One to One Future: Building Relationships One Customer at a Time, Currency Doubleday, New York.
Peppers, D., and Rogers, M. (1999), Enterprise One to One: Tools for Competing in the Interactive Age, Doubleday, New York.
Peters, T., and Waterman, R. (1988), In Search of Excellence, Warner, New York.
Reichheld, F., and Sasser, W. (1990), "Zero Defections: Quality Comes to Services," Harvard Business Review, Vol. 68, September-October, pp. 105-111.
Rust, R., Zahorik, A., and Keiningham, T. (1994), Return on Quality: Measuring the Financial Impact of Your Company's Quest for Quality, Irwin, Homewood, IL.
Rust, R., Zeithaml, V., and Lemon, K. (2000), Driving Customer Equity: How Customer Lifetime Value Is Reshaping Corporate Strategy, Free Press, New York.
3.4. The Effect of E-commerce on Pricing
 3.4.1. The Role of Information
 3.4.2. Pressures on E-commerce Prices
 3.4.3. The Feasibility of Sustainable Low Internet Prices
4. THE ROLE OF COSTS IN PRICING
 4.1. Cost Concepts
 4.2. Profitability Analysis
 4.3. Profit Analysis for Multiple Products
5.1. Pricing New Products
 5.1.1. Skimming Pricing
 5.1.2. Penetration Pricing
5.2. Pricing During Growth
5.3. Pricing During Maturity
5.4. Pricing a Declining Product
5.5. Price Bundling
 5.5.1. Rationale for Price Bundling
 5.5.2. Principles of Price Bundling
Yield Management
8. LEGAL ISSUES IN PRICING
 8.1.
 8.2.
 8.3. Parallel Pricing and Price Signaling
 8.4. Predatory Pricing
 8.5. Illegal Price Discrimination
9. FOUR BASIC RULES FOR PRICING
 9.1. Know Your Objectives
 9.2. Know Your Demand
 9.3. Know Your Competition and Your Market
 9.4. Know Your Costs
10. SUMMARY
REFERENCES

Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
1.
INTRODUCTION TO PRICING MANAGEMENT
A husband and wife were interested in purchasing a new full-sized automobile. They found just what they were looking for, but its price was more than they could afford: $28,000. They then found a smaller model for $5,000 less that seemed to offer the features they wanted. However, they decided to look at some used cars and found a one-year-old full-size model with only 5,000 miles on it. The used car included almost a full warranty and was priced at $20,000. They could not overlook the $8,000 difference between their first choice and the similar one-year-old car and decided to purchase the used car.

This simple example illustrates an important aspect of pricing that often is not recognized: buyers respond to price differences rather than to specific prices. While this basic point about how prices influence buyers' decisions may seem complex and not intuitive, it is this very point that drives how businesses are learning to set prices. You may reply that this couple was simply responding to the lower price of the used car. And you would be correct, up to a point. These buyers were responding to a relatively lower price, and it was the difference of $8,000 that eventually led them to buy a used car.

Now consider the automobile maker who has to set the price of new cars. This decision maker needs to consider how the price will compare (1) with prices for similar cars by other car makers; (2) with other models in the seller's line of cars; and (3) with used car prices. The car maker also must consider whether the car dealers will be able to make a sufficient profit from selling the car to be motivated to promote the car in the local markets. Finally, if the number of new cars sold at the price set is insufficient to reach the profitability goals of the maker, price reductions in the form of cash rebates or special financing arrangements for buyers might have to be used. Besides these pricing decisions, the car maker must decide on the discount in the price to give to fleet buyers like car rental companies. Within one year, these new rental cars will be sold at special sales to dealers and to individuals like the buyer above.

Pricing a product or service is one of the most important decisions made by management. Pricing is the only marketing strategy variable that directly generates income. All other variables in the marketing mix (advertising and promotion, product development, selling effort, distribution) involve expenditures. The purpose of this chapter is to introduce this strategically important marketing decision variable, define it, and discuss some of the important ways that buyers may respond to prices. We will also discuss the major factors that must be considered when setting prices, as well as problems managers face when setting and managing prices.
1.1.
buyers receive in the way of goods and services relative to what they give up in the way of money or goods and services. In other words, price is the ratio of what is received relative to what is given up. Thus, when the price of a pair of shoes is quoted as $85, the interpretation is that the seller receives $85 from the buyer and the buyer receives one pair of shoes. Similarly, the quotation of two shirts for $55 indicates the seller receives $55 and the buyer receives two shirts. Over time, a lengthy list of terms that are used instead of the term price has evolved. For example, we pay a postage rate to the Postal Service. Fees are paid to doctors and dentists. We pay premiums for insurance coverage, rent for apartments, tuition for education, and fares for taxis, buses, and airlines. Also, we pay tolls to cross a bridge and admission to go to a sporting event, concert, movie, or museum. Banks may have user fees for credit charges, minimum required balances for a checking account service, rents for safety deposit boxes, and fees or interest charges for automatic teller machine (ATM) use or cash advances. Moreover, in international marketing, tariffs and duties are paid to import goods into another country.

The problem that this variety of terms creates is that we often fail to recognize that the setting of a rent, interest rate, premium, fee, admission charge, or toll is a pricing decision exactly like that for the price of a product purchased in a store. Moreover, most organizations that must set these fees, rates, and so on must also make pricing decisions similar to those made by the car maker discussed above.
1.2.
Proactive Pricing
The need for correct pricing decisions has become even more important as global competition has become more intense. Technological progress has widened the alternative uses of buyers' money and time and has led to more substitute products and services. Organizations that have been successful in making profitable pricing decisions have been able to raise prices successfully or reduce prices without competitive retaliation. Through careful analysis of pertinent information and deliberate acquisition of relevant information, they have become successful pricing strategists and tacticians (Cressman 1997). There are two essential prerequisites to becoming a successful proactive pricer. First, it is necessary to understand how pricing works. Because of the complexities of pricing in terms of its impact on suppliers, salespeople, distributors, competitors, and customers, companies that focus primarily on their internal costs often make serious pricing errors. Second, it is essential for any pricer to understand the pricing environment. It is important to know how customers perceive prices and price changes. Most buyers do not have complete information about alternative choices, and most buyers are not capable of perfectly processing the available information to arrive at the optimum choice. Often, price is used as an indicator not only of how much money the buyer must give up, but also of product or service quality. Moreover, differences between the prices of alternative choices also affect buyers' perceptions. Thus, the price setter must know who makes the purchase decision for the products being priced and how these buyers perceive price information.
1.3.

PRICING OBJECTIVES

Profit Objectives
Pricing objectives need to be measured precisely. Performance can then be compared
2.3.
Competitive Objectives
At times, firms base their pricing objectives on competitive strategies. Sometimes the goal is to achieve price stability and engage in nonprice competition, while at other times they price aggressively. When marketing a mature product and when the firm is the market leader, it may seek to stabilize prices. Price stability often leads to nonprice competition in which a firm's strategy is advanced by other components of the marketing mix: the product itself, the distribution system, or the promotional efforts. In some markets, a firm may choose to price aggressively, that is, price below competition, to take advantage of market changes, for example, when products are in early stages of the life cycle, when markets are still growing, and when there are opportunities to establish or gain a large market share. As with a market share or volume objective, this aggressiveness must be considered within the context of a longer-term perspective.
3.
DEMAND CONSIDERATIONS
One of the most important cornerstones of price determination is demand. In particular, the volume of a product that buyers are willing to buy at a specific price is that product's demand. In this section we will review some important analytical concepts for practical pricing decisions.
3.1.
Influence of Price on Buyer Behavior
In economic theory, price influences buyer choice because price serves as an indicator of product or service cost. Assuming the buyer has perfect information concerning prices and want satisfaction of comparable product alternatives, he or she can determine a product / service mix that maximizes satisfaction within a given budget constraint. However, lacking complete and accurate information about the satisfaction associated with the alternative choices, the buyer assesses them on the basis of known information. Generally, one piece of information available to the buyer is a product's price. Other pieces of information about anticipated purchases are not always known, and buyers cannot be sure how reliable and complete this other information is. And because this other information is not always available, buyers may be uncertain about their ability to predict how much they will be satisfied if they purchase the product. For example, if you buy a new car, you do not know what the relative incidence of car repairs will be for the new car until after some months or years of use. As a result of this imperfect information, buyers may use price both as an indicator of product cost and as an indicator of quality (want-satisfaction attributes).
3.2.
Useful Economic Concepts
This brief outline of how price influences demand does not tell us about the extent to which price and demand are related for each product / service choice, nor does it help us to compare, for example, engineering services per dollar to accounting services per dollar. The concept of elasticity provides a quantitative way of making comparisons across product and service choices.
3.2.1.
Demand Elasticity
Price elasticity of demand measures how the quantity demanded for a product or service changes due to a change in the price of that product or service. Specifically, price elasticity of demand is defined as the percentage change in quantity demanded relative to the percentage change in price. Normally, it is assumed that quantity demanded falls as price increases; hence, price elasticity of demand is a negative value ranging between 0 and -infinity. Because demand elasticity is relative, various goods and services show a range of price sensitivity. Elastic demand exists when a given percentage change in price results in a greater percentage change in the quantity demanded. That is, price elasticity ranges between -1.0 and -infinity. When demand is inelastic, a given percentage change in price results in a smaller percentage change in the quantity demanded. In markets characterized by inelastic demand, price elasticity ranges between 0 and -1.0. Another important point about price elasticity: it does change and is different over time for different types of products, and it differs whether price is increasing or decreasing.

A second measure of demand sensitivity is cross-price elasticity of demand, which measures the responsiveness of demand for a product or service relative to a change in price of another product or service. Cross-price elasticity of demand is the degree to which the quantity of one product demanded will increase or decrease in response to changes in the price of another product. If this relation is negative, then, in general, the two products are complementary; if the relation is positive, then, in general, the two products are substitutes. Products that can be readily substituted for each other are said to have high cross-price elasticity of demand. This point applies not only to brands within one product class but also to different product classes. For example, as the price of ice cream goes up, consumers may switch to cakes for dessert, thereby increasing the sales of cake mixes.
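The two elasticity measures can be illustrated with a short calculation. The demand and price figures below are hypothetical, invented for illustration; only the definitions (percentage change in quantity over percentage change in price) come from the text.

```python
# Illustrative elasticity calculations. All quantities and prices are
# hypothetical; only the definitions follow the text above.

def price_elasticity(q0: float, q1: float, p0: float, p1: float) -> float:
    """Percentage change in quantity demanded divided by the
    percentage change in price, both measured from the base values."""
    return ((q1 - q0) / q0) / ((p1 - p0) / p0)

# Own-price elasticity: a 10% price rise cuts quantity demanded by 25%.
e = price_elasticity(q0=1000, q1=750, p0=2.00, p1=2.20)
print(round(e, 2))  # -2.5, i.e., between -1.0 and -infinity: demand is elastic

# Cross-price elasticity: ice cream up 10%, cake-mix sales up 5%.
# A positive value marks substitutes; a negative value marks complements.
cross = price_elasticity(q0=1000, q1=1050, p0=2.00, p1=2.20)
print(round(cross, 2))  # 0.5: positive, so the two goods are substitutes
```

A single helper serves both measures because cross-price elasticity has the same form, with the quantity of one good paired with the price of the other.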
There is a relationship between sellers' revenues and the elasticity of demand for their products and services. To establish this relationship we need to define the concepts of total revenue, average revenue, and marginal revenue. Total revenue is the total amount spent by buyers for the product (TR = P x Q). Average revenue is the total outlay by buyers divided by the number of units sold, or the price of the product (AR = TR / Q = P). Marginal revenue refers to the change in total revenue resulting from a change in sales volume. The normal, downward-sloping demand curve reveals that to sell an additional unit of output, price must fall. The change in total revenue (marginal revenue) is the result of two forces: (1) the revenue derived from the additional unit sold, which is equal to the new price; and (2) the loss in revenue that results from marking down all prior saleable units to the new price. If force (1) is greater than force (2), total revenue will increase, and total revenue will increase only if marginal revenue is positive. Marginal revenue is positive if demand is price elastic and price is decreased, or if demand is price inelastic and price is increased.
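The link between elasticity and marginal revenue can be sketched numerically. The linear demand curve P = 10 - 0.01Q below is hypothetical, not from the chapter; the sketch simply shows that marginal revenue is positive where demand is elastic and negative where it is inelastic.

```python
# Hypothetical linear demand curve used only to illustrate the
# revenue concepts; none of these numbers come from the chapter.

def price(q: float) -> float:
    return 10 - 0.01 * q          # downward-sloping demand: P falls as Q rises

def total_revenue(q: float) -> float:
    return price(q) * q           # TR = P x Q

def marginal_revenue(q: float, dq: float = 1.0) -> float:
    # Change in total revenue from selling dq additional units at the
    # lower price needed to sell them.
    return (total_revenue(q + dq) - total_revenue(q)) / dq

def elasticity(q: float, dq: float = 1.0) -> float:
    return (dq / q) / ((price(q + dq) - price(q)) / price(q))

# Elastic region: |elasticity| > 1, so selling more at a lower price
# raises total revenue (marginal revenue is positive).
print(round(elasticity(300), 2), round(marginal_revenue(300), 2))

# Inelastic region: |elasticity| < 1, so marginal revenue is negative.
print(round(elasticity(800), 2), round(marginal_revenue(800), 2))
```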
3.2.3.
Consumers' Surplus
At any particular price, there are usually some consumers willing to pay more than that price in order to acquire the product. Essentially, this willingness to pay more means that the price charged for the product may be lower than some buyers' perceived value for the product. The difference between the maximum amount consumers are willing to pay for a product or service and the lesser amount they actually pay is called consumers' surplus. In essence, it is the money value of the willingness of consumers to pay in excess of what the price requires them to pay. This difference represents what the consumers gain from the exchange and is the money amount of value-in-use (what is gained) minus value-in-exchange (what is given up). Value-in-use always exceeds value-in-exchange simply because the most anyone would pay must be greater than what they actually pay; otherwise they would not enter into the trade.
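The definition can be made concrete with a small hypothetical market: each buyer has a maximum willingness to pay (value-in-use), everyone who buys pays the same market price (value-in-exchange), and the surplus is the sum of the differences.

```python
# Hypothetical willingness-to-pay figures: each entry is the most one
# buyer would pay for the product, in dollars.
willingness_to_pay = [120, 110, 95, 90, 85, 70]
market_price = 85

# Only buyers whose valuation meets or exceeds the price enter the trade.
buyers = [w for w in willingness_to_pay if w >= market_price]

# Consumers' surplus: value-in-use minus value-in-exchange, summed.
surplus = sum(w - market_price for w in buyers)

print(len(buyers))  # 5 of the 6 buyers purchase at $85
print(surplus)      # (120-85)+(110-85)+(95-85)+(90-85)+(85-85) = 75
```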
3.3.
Understanding How Buyers Respond to Prices
As suggested above, a successful pricer sets price consistent with customers' perceived value (Leszinski and Marn 1997). To understand how customers form value perceptions, it is important to recognize the relative role of price in this process. Because of the difficulty of evaluating the quality of products before and even after the product has been acquired, how customers form their perceptions of the product becomes an important consideration when setting prices. During this perceptual process, buyers make heavy use of information cues, or clues. Some of these cues are price cues and influence buyers' judgments of whether the price differences are significant. For example, buyers may use the prices of products or services as indicators of actual product quality.
3.3.1.
Would you buy a package of 25 aspirin that costs only 50 cents? Would you be happy to find this bargain, or would you be suspicious that this product is inferior to other brands priced at 12 for $1.29? In fact, many consumers would be cautious about paying such a low relative price. Thus, the manufacturers of Bayer and Excedrin know that some people tend to use price as an indicator of quality to help them assess the relative value of products and services. Since buyers generally are not able to assess product quality perfectly (i.e., the ability of the product to satisfy them), it is their perceived quality that becomes important. Under certain conditions, the perceived quality in a product is positively related to price. Perceptions of value are directly related to buyers' preferences or choices; that is, the larger a buyer's perception of value, the more likely the buyer would express a willingness to buy or a preference for the product. Perceived value represents a trade-off between buyers' perceptions of quality and sacrifice and is positive when perceptions of quality are greater than the perceptions of sacrifice. Figure 1 illustrates this role of price on buyers' perceptions of product quality, sacrifice, and value. Buyers may also use other cues, such as brand name and store name, as indicators of product quality.
3.3.2.
Price Thresholds
Those of us who have taken hearing tests are aware that some sounds are either too low or too high for us to hear. The low and high sounds that we can just barely hear are called our lower and upper absolute hearing thresholds. From psychology, we learn that small, equally perceptible changes in a response correspond to proportional changes in the stimulus. For example, if a product's price being raised from $10 to $12 is sufficient to deter you from buying the product, then another product originally priced at $20 would have to be repriced at $24 before you would become similarly disinterested. Our aspirin example above implies that consumers have lower and upper price thresholds; that is, buyers have a range of acceptable prices for products or services.

Figure 1 Perceived Price-Perceived Value Relationship

Furthermore, the existence of a lower price threshold implies that there are prices greater than $0 that are unacceptable because they are considered to be too low, perhaps because buyers are suspicious of the product's quality. Practically, this concept means that rather
e suspicious of the products quality. ractically, this concept means that rather
than a single price for a product being acceptable to pay, buyers have some ran
ge of acceptable prices. hus, people may refrain from purchasing a product not
only when the price is considered to be too high, but also when the price is con
sidered to be too low. he important lesson to learn is that there are limits or
absolute thresholds to the relationship between price and perceived quality and
perceived value (Monroe 1993). When buyers perceive that prices are lower than
they expect, they may become suspicious of quality. At such low prices, this low
perceived quality may be perceived to be less than the perceived sacri ce of the
low price. Hence, the mental comparison or trade-off between perceived quality a
nd perceived sacri ce may lead to an unacceptable perceived value. hus, a low pri
ce may actually reduce buyers perceptions of value. At the other extreme, a perce
ived high price may lead to a perception of sacri ce that is greater than the perc
eived quality, also leading to a reduction in buyers perceptions of value. hus,
not only is it important for price setters to consider the relationship among pr
ice, perceived quality, and perceived value, but to recognize that there are lim
its to these relationships. Usually a buyer has alternative choices available fo
r a purchase. However, even if the numerical prices for these alternatives are d
ifferent, it cannot be assumed that the prices are perceived to be different. He
nce, the price setter must determine the effect of perceived price differences o
n buyers choices. As suggested above, the perception of a price change depends on
the magnitude of the change. Generally, it is the perceived relative difference
s between prices that in uence buyers use of price as an indicator of quality. In a
similar way, relative price differences between competing brands, different off
erings in a product line, or price levels at different points in time affect buy
ers purchase decisions. A recent experience of a major snack food producer illust
rates this point. At one time, the price of a speci c size of this brands potato ch
ips was $1.39 while a comparable size of the local brand was $1.09, a difference
of 30 cents. Over a period of time, the price of the national brand increased s
everal times until it was being retailed at $1.69. In like manner, the local bra
nds price also increased to $1.39. However, while the local brand was maintaining
a 30-cent price differential, the national brand obtained a signi cant gain in ma
rket share. he problem was buyers perceived a 30-cent price difference relative
to $1.69 as less than a 30-cent price difference relative to $1.39. his exampl
e illustrates the notion of differential price thresholds, or the degree that bu
yers are sensitive to relative price differences. rom behavioral price research
, a number of important points about price elasticity have emerged: 1. Buyers, i
n general, are more sensitive to perceived price increases than to perceived pri
ce decreases. In practical terms, this difference in relative price elasticity b
etween price increases vs. price decreases means it is easier to lose sales by i
ncreasing price than it is to gain sales by reducing price. 2. Sometimes a produ
ct may provide a unique bene t or have a unique attribute that buyers value. hese
unique bene ts or attributes serve to make the product less price sensitive.
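The snack-food example can be restated as relative differences. The prices are the chapter's own; only the percentage arithmetic is added here.

```python
# The same 30-cent gap, expressed relative to the national brand's price.
national_old, local_old = 1.39, 1.09
national_new, local_new = 1.69, 1.39

gap_old = (national_old - local_old) / national_old   # 30 cents on $1.39
gap_new = (national_new - local_new) / national_new   # 30 cents on $1.69

print(f"{gap_old:.1%}")  # 21.6%: the local brand's original perceived advantage
print(f"{gap_new:.1%}")  # 17.8%: the same 30 cents looks smaller against $1.69
```

The shrinking relative gap is one way to read why the national brand gained share even though the absolute differential never changed.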
3.4.1.
The Role of Information
can search in a wider geographical area than in the local community. Finally, consumers and corporate buyers can pool their purchasing power and get volume discounts. General Electric Co. was able to reduce its purchase cost by 20% on more than $1 billion in purchases of operating materials by pooling orders from divisions on a worldwide basis (Hof 1999).
3.4.2.
Pressures on E-commerce Prices

To determine profit at any volume, price level, product mix, or time, proper cost classification is required. Some costs vary directly with the rate of activity, while others do not. If the cost data are classified into their fixed and variable components and properly attributed to the activity causing the cost, the effect of volume becomes apparent and sources of profit are revealed. In addition to classifying costs according to the ability to attribute a cost to a product or service, it is also important to classify costs according to variation with the rate of activity (Ness and Cucuzza
his V ratio is an important piece of information for analyzing the pro t impact
of changes in sales volume, changes in the cost structure of the rm (i.e., the re
lative amount of xed costs to total costs), as well as changes in price. or a si
ngle product situation, the elements of pro tability that the rm must consider are
price per unit, sales volume per period, variable costs per unit, and the direct
and objectively assignable xed costs per period. However, typically rms sell mult
iple products or offer multiple services, and the cost-classi cation requirements
noted above become more dif cult. urther, many companies now recognize that it is
important to avoid arbitrary formulas for allocating overhead (common costs) so
that each product carries its fair share of the burden. In part, these companie
s understand that the concept of variable costs extends beyond the production pa
rt of the business. hat is why we have stressed the notion of activity levels w
hen de ning variable costs. hese companies have found out that using the old form
ula approach to cost allocation has meant that some products that were believed
to be losing money were actually pro table and that other so-called pro table produc
ts were actually losing money. A key lesson from the above discussion and the ex
periences of companies that use this approach to developing the relevant costs f
or pricing is that pro ts must be expressed in monetary units, not units of volume
or percentages of market share. ro tability is affected by monetary price, unit
volume, and monetary costs.
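The single-product profit arithmetic described above can be sketched in a few lines of code (a minimal illustration; the price, cost, and volume figures are hypothetical, not from the source):

```python
# Profit-volume (PV) ratio and period profit for a single product.
# PV ratio = contribution earned per sales dollar = (price - variable cost) / price.

def pv_ratio(price: float, variable_cost: float) -> float:
    """Contribution earned per dollar of sales."""
    return (price - variable_cost) / price

def period_profit(price: float, variable_cost: float,
                  volume: int, fixed_costs: float) -> float:
    """Profit = PV ratio * revenue - directly assignable fixed costs."""
    revenue = price * volume
    return pv_ratio(price, variable_cost) * revenue - fixed_costs

# Hypothetical product: $10 price, $6 variable cost, 5,000 units, $12,000 fixed costs.
print(pv_ratio(10.0, 6.0))                      # 0.4
print(period_profit(10.0, 6.0, 5000, 12000.0))  # 8000.0
```

Price, unit volume, and costs each enter this calculation in monetary terms, which is why the discussion above insists that profits be tracked in monetary units rather than in unit volume or market share.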
4.3.
s. Not only are these important factors different, but they are changing. The PV ratio can be used to analyze the relative profit contributions of each product in the line. Each product has a different PV value and different expected dollar sales volume as a percentage of the line's total dollar volume. In multiple-product situations, the PV is determined by weighting the PV of each product by the percentage of the total dollar volume for all products in the line. This issue of managing the prices and profitability of the firm's product line is extremely important. For example, consider the prices of a major hotel chain. For the same room in a given hotel, there will be multiple rates: the regular rate (commonly referred to as the rack rate), a corporate rate that represents a discount for business travelers, a senior citizen rate, a weekend rate, single vs. double rates, group rate, and conference rate, to name just a few. Over time, the hotel's occupancy rate expressed as a percentage of rooms sold per night had increased from about 68% to 75%. Yet despite increasing prices that more than covered increases in costs, and increasing sales volume, profitability was declining. After a careful examination of this problem, it was discovered that the hotels were selling fewer and fewer rooms at the full rack rate while increasing sales, and therefore occupancy, of rooms at discounted rates that were as much as 50% off the full rack rate. As a result, the composite weighted PV ratio had significantly declined, indicating this decline in relative profitability. Further, to illustrate the importance of considering buyers' perceptions of prices, the price increases had primarily been
at the full rack rate, creating a greater and more perceptible difference between, for example, the full rate and the corporate rate. More and more guests were noticing this widening price difference and were requesting and receiving the lower rates. When there are differences in the PVs among products in a line, a revision in the product selling mix may be more effective than an increase in prices. That is, a firm, by shifting emphasis to those products with relatively higher PVs, has a good opportunity to recover some or all of its profit position. Hence, profit at any sales level is a function of prices, volume, costs, and the product dollar sales mix (Monroe and Mentzer 1994).
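The composite (weighted) PV calculation described above, and the effect of a mix shift like the hotel chain's, can be illustrated with hypothetical figures:

```python
# Composite PV ratio for a product line: each product's PV ratio weighted
# by its share of the line's total dollar sales. All figures are hypothetical.

def composite_pv(products):
    """products: iterable of (dollar_sales, pv_ratio) pairs."""
    total_sales = sum(sales for sales, _ in products)
    return sum((sales / total_sales) * pv for sales, pv in products)

# Rooms sold at the full rack rate carry a higher PV than discounted rooms.
mostly_full_rate = [(800_000, 0.60), (200_000, 0.30)]
mostly_discounted = [(400_000, 0.60), (600_000, 0.30)]

print(composite_pv(mostly_full_rate))   # about 0.54
print(composite_pv(mostly_discounted))  # about 0.42: the mix shift lowers the line's PV
```

Even with total dollar sales unchanged, shifting the mix toward the discounted product drags the composite PV down, which is how occupancy and revenue can rise while relative profitability declines.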
5.
re of new product pricing takes into account the price sensitivity of demand and the incremental promotional and production costs of the seller. What the product is worth to the buyer, not what it costs the seller, is the controlling consideration. What is important when developing a
TABLE 1. Basic Pricing Decisions

1. What to charge for the different products and services marketed by the firm
2. What to charge different types of customers
3. Whether to charge different types of distributors the same price
4. Whether to give discounts for cash and how quickly payment should be required to earn them
5. Whether to suggest resale prices or only set prices charged one's own customers
6. Whether to price all items in the product line as if they were separate or to price them as a team
7. How many different price offerings to have of one item
8. Whether to base prices on the geographical location of buyers (i.e., whether to charge for transportation)

Source: Monroe 1990, p. 278.
Penetration Pricing
A penetration strategy encourages both rapid adoption and diffusion of new products. An innovative firm may thus be able to capture a large market share before its competitors can respond. One disadvantage of penetration, however, is that relatively low prices and low profit margins must be offset by high sales volumes. One important consideration in the choice between skimming and penetration pricing at the time a new product is introduced is the ease and speed with which competitors can bring out substitute offerings. If the initial price is set low enough, large competitors may not feel it worthwhile to make a big investment for small profit margins. One study has indicated that a skimming pricing strategy leads to more competitors in the market during the product's growth stage than does a penetration pricing strategy (Redmond 1989).
5.2.
able to maintain or reduce direct costs during the maturity stage are likely to have remained. If the decline is not due to an overall cyclical decline in business but to shifts in buyer preferences, then the primary objective is to obtain as much contribution to profits as possible. So long as the firm has excess capacity and revenues exceed all direct costs, the firm probably should consider remaining in the market. Generally, most firms eliminate all period marketing costs (or as many of these costs as possible) and remain in the market as long as price exceeds direct variable costs. These direct variable costs are the minimally acceptable prices to the seller. Thus, with excess capacity, any market price above direct variable costs would generate contributions to profit. Indeed, the relevant decision criterion is to maximize contributions per sales dollar generated. In fact, it might be beneficial to raise the price of a declining product to increase the contributions per sales dollar. In one case, a firm raised the price of an old product to increase contributions while phasing it out of the product line. To its surprise, sales actually grew. There was a small but profitable market segment for the product after all!
5.5.
Price Bundling
In marketing, one widespread type of multiple-product pricing is the practice of selling products or services in packages or bundles. Such bundles can be as simple as pricing a restaurant menu for either dinners or the items a la carte, or as complex as offering a ski package that includes travel, lodging, lift and ski rentals, and lessons. In either situation, there are some important principles that need to be considered when bundling products at a special price.
5.5.1.
Rationale for Price Bundling
Many businesses are characterized by a relatively high ratio of fixed to variable costs. Moreover, several products or services usually can be offered using the same facilities, equipment, and personnel. Thus, the direct variable cost of a particular product or service is usually quite low, meaning that the product or service has a relatively high profit-volume (PV) ratio. As a result, the incremental costs of selling additional units are generally low relative to the firm's total costs. In addition, many of the products or services offered by most organizations are interdependent in terms of demand, either being substitutes for each other or complementing the sales of another offering. Thus, it is appropriate to think in terms of relationship pricing, or pricing in terms of the inherent demand relationships among the products or services. The objective of price bundling is to stimulate demand for the firm's product line in a way that achieves cost economies for the operations as a whole, while increasing net contributions to profits.
5.5.2.
of a second product or service may be greater than they are willing to pay, and they do not acquire it. If the firm, by price bundling, can shift some of the consumer surplus from the highly valued product to the less valued product, then there is an opportunity to increase the total contributions these products make to the firm's profitability. The ability to transfer consumer surplus from one product to another depends on the complementarity of demand for these products. Products or services may complement each other because purchasing them together reduces the search costs of acquiring them separately. It is economical to have both savings and checking accounts in the same bank to reduce the costs of having to go to more than one bank for such services. Products may complement each other because acquiring one may increase the satisfaction of acquiring the other. For the novice skier, ski lessons will enhance the satisfaction of skiing and increase the demand to rent skis. Finally, a full product-line seller may be perceived to be a better organization than a limited product-line seller, thereby enhancing the perceived value of all products offered.
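The surplus-transfer logic above can be made concrete with a small reservation-price example (the buyers, services, and dollar values are hypothetical, chosen only to illustrate the mechanism):

```python
# Two buyers, two services. Each buyer purchases an item (or bundle) only when
# the price does not exceed their reservation price for it.

# Maximum each buyer would pay (reservation prices), in dollars:
reservations = {
    "buyer_A": {"lift_ticket": 60, "lessons": 20},
    "buyer_B": {"lift_ticket": 40, "lessons": 45},
}

def revenue_separate(p_lift: int, p_lessons: int) -> int:
    """Revenue when the two services are priced individually."""
    rev = 0
    for r in reservations.values():
        if r["lift_ticket"] >= p_lift:
            rev += p_lift
        if r["lessons"] >= p_lessons:
            rev += p_lessons
    return rev

def revenue_bundle(p_bundle: int) -> int:
    """Revenue when both services are sold only as a bundle."""
    return sum(p_bundle for r in reservations.values()
               if r["lift_ticket"] + r["lessons"] >= p_bundle)

# Best separate prices here: lift at $40 (both buy) and lessons at $45 (only B buys).
print(revenue_separate(40, 45))   # 125
# A bundle at $80 is within both buyers' combined valuations ($80 and $85).
print(revenue_bundle(80))         # 160
```

Buyer A's unused surplus on the lift ticket effectively pays for lessons A would never buy alone, which is exactly the transfer of consumer surplus between complementary products described above.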
5.6.
Yield Management
A form of segmentation pricing that was developed by the airlines has been called yield management. Yield management operates on the principle that different segments of the market for airline travel have different degrees of price sensitivity. Therefore, seats on flights are priced differently depending on the time of day, day of the week, length of stay in the destination city before return, when the ticket is purchased, and willingness to accept certain conditions or restrictions on when to travel. Besides the airlines, hotels, telephone companies, rental car companies, and banks and savings and loans have used yield management to increase sales revenues through segmentation pricing. It seems likely that retail firms could use yield management to determine when to mark down slow-moving
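The seat-allocation principle behind yield management can be sketched with Littlewood's classic two-fare rule, a standard textbook heuristic not taken from this chapter (the fares, capacity, and demand distribution are hypothetical): keep protecting seats for the full fare as long as the expected revenue from one more protected seat exceeds the discount fare.

```python
# Littlewood's two-fare rule: protect another full-fare seat while
# FULL_FARE * P(full-fare demand > seats already protected) > DISCOUNT_FARE.
from math import exp, factorial

FULL_FARE, DISCOUNT_FARE = 400.0, 150.0
CAPACITY = 100
MEAN_FULL_FARE_DEMAND = 30  # assume Poisson-distributed full-fare demand

def poisson_sf(k: int, mu: float) -> float:
    """P(D > k) for Poisson-distributed demand D with mean mu."""
    return 1.0 - sum(exp(-mu) * mu**i / factorial(i) for i in range(k + 1))

protection_level = 0
while FULL_FARE * poisson_sf(protection_level, MEAN_FULL_FARE_DEMAND) > DISCOUNT_FARE:
    protection_level += 1

booking_limit = CAPACITY - protection_level  # seats released at the discount fare
print(protection_level, booking_limit)
```

The same protect-the-high-value-segment logic carries over to the hotel, rental car, and banking applications mentioned above.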
Perhaps the most difficult aspect of the pricing decision is to develop the procedures and policies for administering prices. Up to this point, the issue has been the setting of base or list prices. However, the list price is rarely the actual price paid by the buyer. The decisions to discount from list price for volume purchases or early payment, to extend credit, or to charge for transportation effectively change the price actually paid. In this section, we consider the problems of administering prices. An important issue relative to pricing is the effect that pricing decisions and their implementation have on dealer or distributor cooperation and motivation as well as on salespeople's morale and effort. While it is difficult to control prices legally through the distribution channel, nevertheless, it is possible to elicit cooperation and provide motivation to adhere to company-determined pricing policies. Also, because price directly affects revenues of the trade and commissions of salespeople, it can be used to foster desired behaviors by channel members and salespeople. Price administration deals with price adjustments or price differentials for sales made under different conditions, such as:

1. Sales made in different quantities
2. Sales made to different types of middlemen performing different functions
3. Sales made to buyers in different geographic locations
4. Sales made with different credit and collection policies
5. Sales made at different times of the day, month, season, or year
Essentially, price structure decisions define how differential characteristics of the product and/or service will be priced. These price structure decisions are of strategic importance to manufacturers, distributors or dealers, and retailers (Marn and Rosiello 1992). In establishing a price structure, there are many possibilities for antagonizing distributors and even incurring legal liability. Thus, it is necessary to avoid these dangers while at the same time using the price structure to achieve the desired profit objectives. We have noted that the definition of price includes both the required monetary outlay by the buyer and a number of complexities, including terms of agreement, terms of payment, freight charges, offering a warranty, timing of delivery, or volume of the order. Moreover, offering different products or services in the line with different features or benefits at different prices permits the opportunity to develop prices for buyers who have different degrees of sensitivity to price levels and price differences. Moving from a simple one-price-for-all-buyers structure to a more complex pricing structure provides for pricing flexibility because the complexity permits price variations based on specific product and service characteristics as well as buyer or market differences. Moreover, a more complex price structure enhances the ability of firms to (Stern 1986):

1. Respond to specific competitive threats or opportunities
2. Enhance revenues while minimizing revenue loss due to price changes
3. Manage the costs of delivering the product or service
4. Develop focused price changes and promotions
5. Be more effective in gaining distributors' cooperation
To accomplish this goal of differential pricing requires identifying the key factors that differentiate price-market segments. Then the elements of the price structure may be developed that reflect these factors. A product's list price is the product's price to final buyers. Throughout the distribution system, manufacturers grant intermediaries discounts, or deductions from the list price. These price concessions from producers may be seen as payment to intermediaries for performing the distribution function and for providing time and place utilities. The difference between the list price and the amount that the original producer receives represents the total discounts provided to channel members.
Channel members themselves employ discounts in various ways. Wholesalers pass on discounts to retailers just as manufacturers pass along discounts to wholesalers. Retailers may offer promotional discounts to consumers in the form of sweepstakes, contests, and free samples. Some stores offer quantity and cash discounts to regular customers. Even seasonal discounts may be passed along, for example, to reduce inventory of Halloween candy or Christmas cards.
7.
sumer. Second, these segments must have different degrees of price sensitivity and/or different variable purchasing costs. Third, the variable costs of selling to these segments must not be so different that the effect of the lower price is canceled by increased selling costs.
7.2.
Promotions seem to have a substantial immediate impact on brand sales. For example, price cuts for bathroom tissue are accompanied by immediate increases in brand sales. When such price promotions are coordinated with special point-of-purchase displays and local feature advertising, sales might increase as much as 10 times normal sales levels. Because of such observable immediate sales impact, man
the consumer develop a loyalty to the deal and look for another promoted brand on the next purchase occasion? The managerial implications of a promotion extend beyond the immediate sales effect of that promotion. While there has been considerable research on whether brand purchasing enhances repeat brand purchases, such research provides conflicting results. There is some evidence that a prior brand purchase may increase the likelihood of buying that brand again. However, given the increasing use of promotions, other research shows no evidence that promotions enhance repeat brand purchases. A reason why promotions may not enhance repeat brand purchases involves the question of why consumers may decide to buy the brand initially. It has been suggested that people may attribute their purchase to the deal that was available and not to the brand itself being attractive (Scott and Yalch 1980). If consumers indicate a negative reason for a purchase decision, there is a smaller likelihood that the experience will be a positive learning experience relative to the brand itself. If a brand is promoted quite frequently, then the learning to buy the brand occurs because of the reward of the deal, not the positive experience of using the brand itself.
7.3.3.
Long-Term Effects of Price and Sales Promotions
In the long run, the relevant issue is the ability of the brand to develop a loyal following among its customers (Jones 1990). The important objective is to develop favorable brand attitudes among customers such that they request that the retailer carry the brand and are unwilling to buy a substitute. As we observed above, many marketing practitioners fear that too-frequent promotional activity leads to loyalty to the deal, thereby undermining the brand's franchise. And if buyers come to expect that the brand will often be on some form of a deal, they will not be willing to pay a regular price and will be less likely to buy because they do not believe that the brand provides benefits beyond a lower price. A second problem develops if consumers begin to associate a frequently promoted brand with lower product quality. That is, if consumers believe that higher-priced products are also of higher quality, then a brand that is frequently sold at a reduced price may become associated with a lower perceived quality level. Thus, the result may be that buyers form a lower reference price for the product. Products that are perceived to be of relatively lower quality have a more difficult time trying to achieve market acceptance.
8.
LEGAL ISSUES IN PRICING
educed prices in one market area or to a specific set of customers in order to reduce competition or drive a competitor out of business. Moreover, we have suggested that there are typically several price-market segments distinguished by different degrees of sensitivity to prices and price differences. Thus, the opportunity exists to set different prices or to sell through different channels to enhance profits. Each of these various possible strategies or tactics is covered by some form of legislation and regulation. In fact, there are laws concerning price fixing among competitors, exchanging price information or price signaling to competitors, pricing similarly to competitors (parallel pricing), predatory pricing, and price discrimination. This section briefly reviews the laws that cover these types of activities.
8.1.
Price Fixing
The Sherman Antitrust Act (1890) specifically addresses issues related to price fixing, exchanging price information, and price signaling. It also has an effect on the issue of predatory pricing. Section 1 of the Act prohibits all agreements in restraint of trade. Generally, violations of this section are divided into per se violations and rule-of-reason violations. Per se violations are automatic. That is, if a defendant has been found to have agreed to fix prices, restrict output, divide markets by agreement, or otherwise act to restrict the forces of competition, he or she is automatically in violation of the law and subject to criminal and civil penalties. There is no review of the substance of the situation, that is, whether there was in fact an effect on competition. In contrast, the rule-of-reason doctrine calls for an inquiry into the circumstances, intent, and results of the defendant's actions. That is, the
8.4.
Predatory Pricing
Predatory pricing is the cutting of prices to unreasonably low and/or unprofitable levels so as to drive competitors from the market. If this price cutting is successful in driving out competitors, then the price cutter may have acquired a monopoly position via unfair means of competition, a violation of Section 2 of the Sherman Act. There is considerable controversy about predatory pricing, particularly how to measure the effect of a low-price strategy on the firm's profits and on competitors. Predatory pricing occurs whenever the price is so low that an equally efficient competitor with smaller resources is forced from the market or discouraged from entering it. Primarily, the effect on the smaller seller is one of a drain on cash resources, not profits per se. Much of the controversy surrounding measuring the effects of an alleged predatory price relates to the proper set of costs to be used to determine the relative profitability of the predator's actions. Recently, the courts seem to have adopted the rule that predatory pricing exists if the price does not cover the seller's average variable or marginal costs. However, the intent of the seller remains an important consideration in any case.
8.5.
Illegal Price Discrimination
The Robinson-Patman Act was passed in 1936 to protect small independent wholesalers and retailers from being charged more for products than large retail chains were. Sellers can, however, defend
themselves against discrimination charges by showing that their costs of selling vary from customer to customer. Cost savings can then be passed along to individual buyers in the form of lower prices. The Robinson-Patman Act therefore permits price discrimination in the following cases:

1. If the firm can demonstrate that it saves money by selling large quantities to certain customers, it can offer these buyers discounts equal to the amount saved.
2. Long-term sales agreements with customers also reduce costs; again, discounts may be granted equal to the amount saved.
3. In cases where competitors' prices in a particular market must be met, it is legal to match the competitors' lower prices in such a market while charging higher prices in other markets.
9.
This prescription suggests that the firm understand fully the factors influencing the demand for its products and services. The key question is the role of price in the purchaser's decision process. Price and price differentials influence buyer perceptions of value. Indeed, many companies have achieved positive results from differentially pricing their products and services. Coupled with knowing how price influences buyers' perceptions of value, it is necessary to know how buyers use the product or service. Is the product used as an input in the buyer's production process? If so, does the product represent a significant or insignificant portion of the buyer's manufacturing costs? If the product is a major cost element in the buyer's production process, then small changes in the product's price may significantly affect the buyer's costs and the resulting price of the manufactured product. If the final market is sensitive to price increases, then a small price increase to the final manufacturer may significantly reduce demand to the initial seller of the input material. Thus, knowing your buyers also means understanding how they react to price changes and price differentials as well as knowing the relative role price plays in their purchase decisions. Further, the seller should also know the different types of distributors and their functions in the distribution channel. This prescription is particularly important when the manufacturer sells both to distributors and to the distributors' customers.
9.3.
Know Your Competition and Your Market
In addition to the influence of buyers, a number of other significant market factors influence demand. It is important to understand the operations of both domestic and foreign competitors, their rate of capacity utilization, and their products and services. In many markets, the dynamic interaction of supply and demand influences prices. Moreover, changes in capacity availability due to capital investment programs will influence supply and prices. A second important aspect of knowing the market is the need to determine price-volume relationships.
umer Research, Vol. 7, June, pp. 32-41.
Simon, H., Fassnacht, M., and Wubker, G. (1995), Price Bundling, Pricing Strategy and Practice, Vol. 3, No. 1, pp. 34-44.
Sparks, D. (1999), Who Will Survive the Internet Wars?, Business Week, December 27, pp. 98-100.
Stern, A. A. (1986), The Strategic Value of Price Structure, Journal of Business Strategy, Vol. 7, Fall, pp. 22-31.
MASS CUSTOMIZATION
and market high-quality products within a short time frame and at a price that customers are willing to pay. These counterdemands for final products create enormous productivity challenges that threaten the very survival of manufacturing companies. In addition, increasingly high labor and land costs often put developed countries or regions at a disadvantage in attracting manufacturing plants compared with neighboring developing countries. In order to meet these pragmatic and highly competitive needs of today's industries, it is imperative to promote high-value-added products and services (Ryan 1996). It was reported that 9 out of 10 bar code scanner vendors were planning to repackage their product offerings in 1997 to include a larger scope of value-added features and pursue application-specific solution opportunities (Rezendes 1997). This chapter discusses the opportunities brought by mass customization for high-value-added products and services. Mass customization enhances profitability through a synergy of increasing customer-perceived value and reducing the costs of production and logistics. Therefore, mass customization inherently makes high-value-added products and services possible through premium profits derived from customized products. The chapter also introduces techniques for integrating product lifecycle concerns, in terms of how to connect customer needs proactively with the capabilities of a manufacturer or service provider during the product-development process. Major technical challenges of mass customization are also summarized.
1.1.
Concept Implication
Mass customization is defined here as "producing goods and services to meet individual customers' needs with near mass production efficiency" (Tseng and Jiao 1996). The concept of mass customization was anticipated by Toffler (1971), and the term was coined by Davis (1987). Pine (1993) documented its place in the continuum of industrial development and mapped out the management implications for firms that decide to adopt it. Mass customization is a new paradigm for industries to provide products and services that best serve customer needs while maintaining near-mass production efficiency. Figure 1 illustrates the economic implications of mass customization (Tseng and Jiao 1996). Traditionally, mass production demonstrates an advantage in high-volume production, where the actual volume can defray the costs of huge investments in equipment, tooling, engineering, and training. On the other hand, satisfying each individual customer's needs can often be translated into higher value; however, this usually entails low production volume and thus may not be economically viable. By enabling companies to garner economies of scale through repetition, mass customization is capable of reducing costs and lead time. As a result, mass customization can achieve higher margins and thus be more advantageous. With the increasing flexibility built into modern manufacturing systems and the programmability of computing and communication technologies, companies with low to medium production volumes can gain an edge over competitors by implementing mass customization. In reality, customers are often willing to pay a premium price to have their unique requirements satisfied, thus giving companies bonus profits (Roberts and Meyer 1991). From an economic perspective, mass customization enables a better match between the producer's capabilities and customer needs. This is accomplished either through developing the company's portfolio, which includes products, services, equipment, and skills, in response to market demands, or by leading customers to the total capability of the company so that customers are better served. The end results are conducive to improvement in resource utilization. Mass customization also has several significant ramifications in business. It can potentially develop customer loyalty, propel company growth, and increase market share by widening the product range (Pine 1993).
1.2.
Technical Challenges
The essence of mass customization lies in the product and service provider's ability to perceive and capture latent market niches and subsequently develop technical capabilities to meet the diverse needs of target customers. Perceiving latent market niches requires the exploration of customer needs. To encapsulate the needs of target customer groups means to emulate existing or potential competitors in quality, cost, and quick response. Keeping the manufacturing cost low necessitates economy of scale and development of appropriate production capabilities. Therefore, the requirements of mass customization depend on three aspects: time-to-market (quick responsiveness), variety (customization), and economy of scale (volume production efficiency). In other words, successful mass customization depends on a balance of three elements: features, cost, and schedule. In order to achieve this balance, three major technical challenges are identified as follows.
1.2.1.
Maximizing Reusability
Maximal amounts of repetition are essential to achieve the efficiency of mass production, as well as efficiencies in sales, marketing, and logistics. This can be attained through maximizing commonality in design, which leads to reusable tools, equipment, and expertise in subsequent manufacturing. From a commercial viewpoint, mass customization provides diverse finished products that can be enjoyed uniquely by different customers. Customization emphasizes the differentiation among products. An important step towards this goal will be the development and proliferation of design repositories that are capable of creating various customized products. This product proliferation naturally results in the continuous accretion of varieties and thus engenders design variations and process changeovers, which seemingly contradict the pursuit of low cost and high efficiency of mass production. Such a setup presents manufacturers with a challenge of ensuring "dynamic stability" (Boynton and Bart 1991), which means that a firm can serve the widest range of customers and changing product demands while building upon existing process capabilities, experience, and knowledge. Due to similarity over product lines or among a group of customized products, reusability suggests itself as a natural technique to facilitate increasingly efficient and cost-effective product realization. Maximizing reusability across internal modules, tools, knowledge, processes, components, and so on means that the advantages of low costs and mass production efficiency can be expected while maintaining the integrity of the product portfolio and the continuity of the infrastructure. This is particularly true for savings resulting from leveraging downstream investments in the product life cycle, such as existing design capabilities and manufacturing facilities. Although commonality and modularity have been important design practices, these issues are usually emphasized for the purpose of physical design or manufacturing convenience. To achieve mass customization, the synergy of commonality and modularity needs to be tackled starting from the functional domain characterized by customer needs or functional requirements, and needs to encompass both the physical and process domains of design (Suh 1990). In that way, the reusability of both design and process capabilities can be explored with respect to repetitions in customer needs related to specific market niches.
1.2.2.
Product Platform
The importance of product development for corporate success has been well recognized (Meyer and Utterback 1993; Roberts and Meyer 1991). The effectiveness of a firm's new product generation lies in (1) its ability to create a continuous stream of successful new products over an extended period of time and (2) the attractiveness of these products to the target market niches. Therefore, the essence of mass customization is to maximize such a match of internal capabilities with external market needs. Towards this end, a product platform is impelled to provide the necessary taxonomy for positioning different products and the underpinning structure describing the interrelationships between various products with respect to customer requirements, competition information, and fulfillment processes. A product platform in a firm involves two aspects: to represent the entire product portfolio, including both existing products and proactively anticipated ones, by characterizing various perceived customer needs, and to incorporate proven designs, materials, and process technologies. In terms of mass customization, a product platform provides the technical basis for catering to customization, managing variety, and leveraging existing capabilities. Essentially, the product platform captures and utilizes the reusability underlying product families and serves as a repertoire of knowledge bases for different products. It also prevents variant product proliferation for the same set of customer requirements. The formulation of a product platform involves inputs from design concepts,
MASS CUSTOMIZATION
687
process capabilities, skills, technological trends, and competitive directions,
along with recognized customer requirements.
1.2.3.
Integrated Product Life Cycle
Mass customization starts from understanding customers' individual requirements and ends with a fulfillment process targeting each particular customer. The achievement of time-to-market through telescoping lead times depends on the integration of the entire product-development process, from customer needs to product delivery. Boundary expansion and concurrency become the key to the integration of the product-development life cycle from an organizational perspective. To this end, the scope of the design process has to be extended to include sales and service. At the same time, product realization should simultaneously satisfy various product life cycle concerns, including functionality, cost, schedule, reliability, manufacturability, marketability, and serviceability, to name but a few. The main challenge for today's design methodologies is to support these multiple viewpoints and to accommodate different modeling paradigms within a single, coherent, and integrated framework (Subrahmanian et al. 1991). In other words, the realization of mass customization requires not only integration across the product-development horizon but also the provision of a context-coherent integration of various viewpoints of the product life cycle. It is necessary to employ suitable product platforms with unified product and product family structure models serving as integration mechanisms for a common understanding of the general construction of products, thereby improving communication and consistency among different aspects of the product life cycle.
2.
DESIGN FOR MASS CUSTOMIZATION
Design has been considered a critical factor in final product form, cost, reliability, and market acceptance. Improvements made to product design may significantly reduce the product cost while causing only a minor increase in the design cost. As a result, it is believed that mass customization can best be approached from design, in particular the up-front effort in the early stages of the product-development process. Design for Mass Customization (DFMC) (Tseng and Jiao 1996) aims at considering economies of scope and scale at the early design stage of the product-realization process. The main emphasis of DFMC is on elevating the current practice of designing individual products to designing product families. In addition, DFMC advocates extending the traditional boundaries of product design to encompass a larger scope, from sales and marketing to distribution and services. To support customized product differentiation, a product family platform is required to characterize customer needs and subsequently to fulfill these needs by configuring and modifying well-established modules and components. Therefore, there are two basic concepts underpinning DFMC: product family architecture and product family design. Figure 2 summarizes the conceptual implications of DFMC in terms of the expansion of context from both a design-scope perspective and a product-differentiation perspective.
2.1.
Product Family
A product family is a set of products that are derived from a common platform (Meyer and Lehnerd 1997). Each individual product within the family (i.e., a product family member) is called a product variant. While possessing specific features and functionality to meet a particular set of customer requirements, all product variants are similar in the sense that they share some common customer-perceived value, common structures, and/or common product technologies that form the platform of the family. A product family targets a certain market segment, whereas each product variant is developed to address a specific set of customer needs within that segment. The interpretation of product families depends on the perspective taken. From the marketing/sales perspective, the functional structure of product families exhibits a firm's product portfolio, and thus product families are characterized by various sets of functional features for different customer groups. The engineering view of product families embodies different product and process technologies, and thereby product families are characterized by different design parameters, components, and assembly structures.
2.1.1.
Modularity and Commonality
There are two basic issues associated with product families: modularity and commonality. Table 1 highlights the different implications of modularity and commonality, as well as the relationship between them.

TABLE 1 A Comparison of Modularity and Commonality

The concepts of modules and modularity are central in constructing product architecture (Ulrich 1995). While a module is a physical or conceptual grouping of components that share some characteristics, modularity tries to separate a system into independent parts or modules that can be treated as logical units (Newcomb et al. 1996). Therefore, decomposition is a major concern in modularity analysis. In addition, to capture and represent product structures across the entire product-development process, modularity is achieved from multiple viewpoints, including functionality, solution technologies, and physical structures. Correspondingly, there are three types of modularity involved in product realization: functional modularity, technical modularity, and physical modularity. What is important in characterizing modularity is the interaction between modules. Modules are identified in such a way that between-module (intermodule) interactions are minimized, whereas within-module (inframodule) interactions may be high. Therefore, the three types of modularity are characterized by specific measures of interaction in particular views. As for functional modularity, the interaction is exhibited by the relevance of functional features (FRs) across different customer groups. Each customer group is characterized by a particular set of FRs. Customer grouping lies only in the functional view and is independent of the engineering (including design and process) views; that is, it is solution neutral. In the design view, modularity is determined according to the technological feasibility of design solutions. The interaction is thus judged by the coupling of design parameters (DPs) to satisfy given FRs regardless of their physical realization in manufacturing. In the process view, physical interrelationships among components and assemblies (CAs) are mostly derived from manufacturability. For example, on a PCB (printed circuit board), physical routings of CAs determine the physical modularity related to product structures. It is commonality that reveals how the architecture of a product family differs from the architecture of a single product. While modularity resembles the decomposition of product structures and is applicable to describing module (product) types, commonality characterizes the grouping of similar module (product) variants according to an appropriate categorization of engineering costs derived from assessing existing capabilities and estimated volume, that is, economic evaluation. The correlation of modularity and commonality is embodied in class-member relationships. A product structure is defined in terms of its modularity, where module types are specified. Product variants derived from this product structure share the same module types and take on different instances of every module type. In other words, a class of products (a product family) is described by modularity, and product variants differentiate according to the commonality among module instances.
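The idea of identifying modules so that intermodule interactions are minimized while inframodule interactions stay high can be sketched in code. The sketch below is illustrative only (the interaction matrix, the threshold, and the union-find grouping are assumptions, not the source's method): components whose pairwise interaction exceeds a threshold are merged into the same module.

```python
# Sketch: derive modules from a component-interaction matrix by grouping
# components whose pairwise interaction exceeds a threshold.
# Matrix values and threshold are invented for illustration.

def find_modules(interaction, threshold):
    """Group component indices into modules: two components end up in the
    same module whenever their interaction strength exceeds `threshold`
    (simple union-find over the strong-interaction graph)."""
    n = len(interaction)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(n):
        for j in range(i + 1, n):
            if interaction[i][j] > threshold:
                union(i, j)

    modules = {}
    for i in range(n):
        modules.setdefault(find(i), []).append(i)
    return sorted(modules.values())

# Hypothetical symmetric interaction matrix for 5 components (0..1 scale):
M = [
    [0.0, 0.9, 0.8, 0.1, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.1],
    [0.8, 0.7, 0.0, 0.2, 0.0],
    [0.1, 0.0, 0.2, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
]
print(find_modules(M, 0.5))  # -> [[0, 1, 2], [3, 4]]
```

The same grouping scheme applies in each view by changing what "interaction" measures: FR relevance across customer groups for functional modularity, DP coupling for technical modularity, and physical routings of CAs for physical modularity.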
2.1.2.
Product Variety
Product variety is defined as the diversity of products that a manufacturing enterprise provides to the marketplace (Ulrich 1995). Two types of variety can be observed: functional variety and technical variety. Functional variety is used broadly to mean any differentiation in the attributes related to a product's functionality from which the customer could derive certain benefits. On the other hand, technical variety refers to the diverse technologies, design methods, manufacturing processes, components, and/or assemblies, and so on that are necessary to achieve the specific functionality of a product required by the customer. In other words, technical variety, though it may be invisible to customers, is required by engineering in order to accommodate certain customer-perceived functional variety. Technical variety can be further categorized into product variety and process variety. The technical variety of products is embodied in different components/modules/parameters, variations of structural relationships, and alternative configuration mechanisms, whilst process variety involves those changes related to process planning and production scheduling, such as various routings, fixtures/setups, and workstations. While functional variety is mostly related to customer satisfaction from the marketing/sales perspective, technical variety usually involves manufacturability and costs from the engineering perspective. Even though these two types of variety have some correlation in product development, they result in two different variety design strategies. Since functional variety directly affects customer satisfaction, this type of variety should be encouraged in product development. Such a design-for-functional-variety strategy aims at increasing functional variety and manifests itself through vast research in the business community, such as product line structuring (Page and Rosenbaum 1987; Sanderson and Uzumeri 1995), equilibrium pricing (Choi and DeSarbo 1994), and product positioning (Choi et al. 1990). In contrast, design for technical variety tries to reduce technical variety so as to gain cost advantages. Under this category, research includes the variety reduction program (Suzue and Kohdate 1990), design for variety (Ishii et al. 1995a; Martin and Ishii 1996, 1997), design for postponement (Feitzinger and Lee 1997), design for technology life cycle (Ishii et al. 1995b), function sharing (Ulrich and Seering 1990), and design for modularity (Erixon 1996).

Figure 3 Variety Leverage: Handling Variety for Mass Customization.

Figure 3 illustrates the implications of variety and its impact on variety fulfillment. While exploring functional variety in the functional view through customer requirement analysis, product
family development should try to reduce technical variety in the design and process views by systematic planning of modularity and commonality, so as to facilitate plugging in modules that deliver specific functionality, reusing proven designs, and reducing design changes and process variations.
2.2.
he A consists of three elements: the common base, the differentiation enabler
, and the con guration mechanism. 1. Common base: Common bases (CBs) are the share
d elements among different products in a product family. hese shared elements m
ay be in the form of either common (functional) features from the customer or sa
les perspective or common product structures and common components from the engi
neering perspective. Common features indicate the similarity of
roduct amily Architecture
roduct amily
Market Segment
Legend Common Base Differentiation Enabler Configuration Mechanism roduct Varia
nts roduct eatures
Figure 5 Basic Methods of Variety Generation.
bilities and the economy of scale. The sales and marketing functions involve mapping between the structural and functional views, where the correspondence of a physical structure to its functionality provides the necessary information to assist in negotiation among customers, marketers, and engineers, such as facilitating the request for quotation (RFQ).
2.3.
Figure 7 PFA-Based Product Family Design: Variant Derivation through GPS Instantiation.
family design. Customers make their selections among sets of options defined for certain distinctive functional features. These distinctive features are the differentiation enablers of the PFA in the sales view. Selection constraints are defined so that customers are presented only feasible options, that is, combinations that are both technically affordable and wanted by the market. A set of selected distinctive features, together with those common features required by all customers, comprises the customer requirements of a customized product design. As shown in Figure 7, customized products are defined in the sales view in the form of functional features and their values (options), whereas in the engineering view, product
family design starts with product specifications in the form of variety parameters. Within the PFA, variety parameters correspond to distinctive functional features, and the values of each variety parameter correspond to the options of each functional feature. To realize variety, a general product structure (GPS) is employed as a generic data structure of the product family in the engineering view. The derivation of product variants becomes the instantiation of the GPS. While the GPS characterizes a product family, each instance of the GPS corresponds to a product variant of the family. Each item in the GPS (either a module or a structural relationship) is instantiated according to certain include conditions that are predefined in terms of variety parameters. Variety parameters originate from the functional features specified by the customers and propagate along the levels of abstraction of the GPS. Variety generation methods, such as attaching, swapping, and scaling, are implemented through different instantiations of GPS items. While the GPS provides a common base for product family design, distinctive items of the GPS, such as distinctive modules and structural relationships, perform as the differentiation enablers of the family. Distinctive items are embodied in different variants (instances) that are identified by associated conditions. Therefore, these include conditions and the variety generation capability constitute the configuration mechanisms of the PFA in the engineering view.
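The instantiation of GPS items through include conditions can be sketched as follows. This is a minimal illustration, not the source's formalism: the item names, the flashlight-style family, and the condition functions are all invented for the example.

```python
# Sketch of GPS instantiation: each GPS item carries an include condition
# over the variety parameters; a product variant is the set of items whose
# conditions hold for the customer's selected options.
# (Hypothetical flashlight family; names and conditions are illustrative.)

gps = {
    "body":        lambda p: True,                      # common base item
    "std_bulb":    lambda p: p["brightness"] == "std",  # swapping variants
    "led_bulb":    lambda p: p["brightness"] == "led",
    "long_barrel": lambda p: p["size"] == "large",      # scaling variant
    "belt_clip":   lambda p: p["clip"],                 # attached module
}

def instantiate(gps, variety_params):
    """Derive one product variant by evaluating every include condition
    against the variety parameters (which originate from the functional
    features selected in the sales view)."""
    return sorted(item for item, cond in gps.items() if cond(variety_params))

variant = instantiate(gps, {"brightness": "led", "size": "large", "clip": False})
print(variant)  # -> ['body', 'led_bulb', 'long_barrel']
```

Items with a constant-true condition play the role of the common base, while items whose conditions depend on variety parameters act as differentiation enablers; the conditions themselves realize the configuration mechanism.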
3.
MASS CUSTOMIZATION MANUFACTURING
Competition in mass customization manufacturing is focused on flexibility and responsiveness in order to satisfy the dynamic changes of global markets. The traditional metrics of cost and quality are still necessary conditions for companies to outpace their competitors, but they are no longer the deciding factors between winners and losers. The major trends are:
1. A major part of manufacturing will gradually shift from mass production to the manufacturing of semicustomized or customized products to meet increasingly diverse demands.
2. The made-in-house mindset will gradually shift to distributed locations, and various entities will team up with others to utilize special capabilities at different locations to speed up product development, reduce risk, and penetrate local markets.
3. Centralized control of various entities with different objectives, locations, and cultures is almost out of the question now. Control systems that enable effective coordination among distributed entities have become critical to modern manufacturing systems. To achieve this, it is becoming increasingly important to develop production planning and control architectures that are modifiable, extensible, reconfigurable, adaptable, and fault tolerant.
Flexible manufacturing focuses on batch production environments using multipurpose programmable work cells, automated transport, improved material handling, operation and resource scheduling, and computerized control to enhance throughput. Mass customization introduces multiple dimensions, including a drastic increase in variety, multiple product types manufactured simultaneously in small batches, product mixes that change dynamically to accommodate the random arrival of orders and a wide spread of due dates, and throughput that is minimally affected by transient disruptions in manufacturing processes, such as the breakdown of individual workstations.
3.1.
Managing Variety in Production Planning
The major challenge of mass customization production planning results from the increase of variety. The consequences of variety may manifest themselves through several ramifications, including increased costs due to the exponential growth of complexity, inhibited benefits from the economy of scale, and exacerbated difficulties in coordinating product life cycles. Facing such a variety dilemma, many companies try to satisfy the demands of their customers through engineer-to-order, produce-to-order, or assemble-to-order production systems (Erens and Hegge 1994). At the back end of product realization, especially at the component level and in fabrication, today we have both the flexibility and the agility provided by advanced manufacturing machinery such as CNC machines. These facilities accommodate the technical variety originating from the diverse needs of customers. However, at the front end, from customer needs to product engineering and production planning, managing variety is still very ad hoc. For example, production control information systems such as MRP II (manufacturing resource planning) and ERP (enterprise resource planning) are falling behind, even though they are important ingredients in production management (Erens et al. 1994). The difficulties lie in the necessity to specify all the possible variants of each product and in the fact that current production management systems are often designed to support a production that is based on a limited number of product variants (van Veen 1992). The traditional approach to variant handling is to treat every variant as a separate product by specifying a unique BOM for each variant. This works with a low number of variants, but not when customers are granted a high degree of freedom in specifying products. The problem is that a large
number of BOM structures will occur in mass customization production, in which a wide range of combinations of product features may result in millions of variants for a single product. The design and maintenance of such a large number of complex data structures are difficult, if not impossible. To overcome these limitations, a generic BOM (GBOM) concept has been developed (Hegge and Wortmann 1991; van Veen 1992). The GBOM provides a means of describing, with a limited amount of data, a large number of variants within a product family, while leaving the product structure unimpaired. Underlying the GBOM is a generic variety structure for characterizing variety, as schematically illustrated in Figure 8. This structure has three aspects:
1. Product structure: All product variants of a family share a common structure, which can be described as a hierarchy containing constituent items (Ii) at different levels of abstraction, where {Ii} can be either abstract or physical entities. Such a breakdown structure (AND tree) of {Ii} reveals the topology for end-product configuration (Suh 1997). Different sets of Ii and their interrelationships (in the form of a decomposition hierarchy) distinguish different common product structures and thus different product families.
2. Variety parameters: Usually there is a set of attributes, A, associated with each Ii. Among them, some variables are relevant to variety and thus are defined as variety parameters, {Pj} ⊆ A. Like attribute variables, parameters can be inherited by child node(s) from a parent node. Different instances of a particular Pj, e.g., {Vk}, embody the diversity resembled by, and perceived from, product variants. Two types of class-member relationships can be observed between {Pj} and {Vk}. A leaf Pj (e.g., P32) indicates a binary-type instantiation, meaning that I32 either is included in I3 (V32 = 1) or is not (V32 = 0). On the other hand, a node Pj (e.g., P2) indicates a selective-type instantiation; that is, I2 has several variants in terms of the values of P2, i.e., V2 ∈ {V21, V22}.
3. Configuration constraints: Two types of constraint can be identified. Within a particular view of product families, such as the functional, behavioral, or physical view, restrictions on the
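The generic variety structure behind the GBOM, with its binary-type and selective-type instantiations, can be sketched in code. This is an illustrative sketch only, not the formalism of Hegge and Wortmann: the item tree, the parameter kinds, and the flashlight-style names are invented for the example.

```python
# Sketch of a generic variety structure: one AND tree describes the whole
# family; variety parameters either include/exclude an item (binary type)
# or choose one of its variants (selective type).
# (Hypothetical family; item names and parameters are illustrative.)

tree = {
    "product": ["housing", "lamp", "switch"],  # AND-tree decomposition
    "housing": [],
    "lamp":    [],
    "switch":  [],
}
params = {
    "lamp":   ("selective", ["halogen", "led"]),  # choose one variant
    "switch": ("binary", None),                   # included or not
}

def explode(item, choices, bom):
    """Walk the AND tree and emit the concrete BOM for one variant,
    applying the instantiation kind of each variety parameter."""
    kind = params.get(item, (None, None))[0]
    if kind == "binary" and not choices.get(item, False):
        return                                   # item excluded (V = 0)
    name = choices[item] if kind == "selective" else item
    bom.append(name)
    for child in tree.get(item, []):
        explode(child, choices, bom)

bom = []
explode("product", {"lamp": "led", "switch": True}, bom)
print(bom)  # -> ['product', 'housing', 'led', 'switch']
```

A single tree plus a handful of parameters thus stands in for what would otherwise be a separate BOM per variant, which is the essence of the GBOM's data economy.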
3.2.
Coordination in Manufacturing Resource Allocation
Mass customization manufacturing is characterized by shortened product life cycles with high-mix, low-volume products in a rapidly changing environment. In customer-oriented plants, orders consisting of a few parts, or even one part, will be transferred directly from vendors to producers, who must respond quickly to meet short due dates. In contrast to mass production, where the manufacturer tells consumers what they can buy, mass customization is driven by customers telling the manufacturer what to manufacture. As a result, it is difficult to use traditional finite-capacity scheduling tools to support the new style of manufacturing. The challenges of manufacturing resource allocation for mass customization include:
1. The number of product varieties flowing through the manufacturing system is approaching an astronomical scale.
2. Production forecasting for each line item and its patterns is not often available.
3. Systems must be capable of rapid response to market fluctuation.
4. The system should be easy to reconfigure; ideally, one set of codes is employed across different agents.
5. The addition and removal of resources or jobs can be done with little change to the scheduling systems.
Extensive research on the coordination of resource allocation has been published in connection with scenarios of multiple resource providers and consumers. In this research, the existence of demand patterns is the prerequisite for deciding which algorithm is applicable. The selection of a certain algorithm is often left to empirical judgment, which does not alleviate the difficulties in balancing utilization and level of service (e.g., meeting due dates). As early as 1985, Hatvany (1985) pointed out that the rigidity of traditional hierarchical structures limited the dynamic performance of systems. He suggested a heterarchical system, which is described as the fragmentation of a system into small, completely autonomous units. Each unit pursues its own goals according to common sets of laws, and thus the system possesses high modularity, modifiability, and extendibility. Following this idea, agent-based manufacturing (Sikora and Shaw 1997) and holonic manufacturing (Gou et al. 1994), in which all components are represented as agents and holons, respectively, have been proposed to improve the dynamics of operational organizations.
From traditional manufacturing perspectives, mass customization seems chaotic due to its large variety, small batch sizes, randomly arriving orders, and wide span of due dates. It is manageable, however, owing to some favorable traits of modern manufacturing systems, such as the inherent flexibility in resources (e.g., the increasing use of machining centers and flexible assembly workstations) and the similarities among tools, production plans, and product designs. The challenge is thus how to encode these characteristics into self-coordinating agents so that the invisible hand, in the sense of Adam Smith's market mechanism (Clearwater 1996), will function effectively. Market-like mechanisms have been considered an appealing approach for dealing with the coordination of resource allocation among multiple providers and consumers of resources in a distributed system (Baker 1991; Malone et al. 1988; Markus and Monostori 1996). Research on this distributed manufacturing resource-allocation problem can be classified into four categories: the bidding/auction approach (Shaw 1988; Upton and Barash 1991), the negotiation approach (Lin and Solberg 1992), the cooperative approach (Burke and Prosser 1991), and the pricing approach (Markus and Monostori 1996). The major considerations of scheduling for resource allocation include:
1. Decompose large, complex scheduling problems into smaller, disjointed allocation problems.
2. Decentralize resource access, allocation, and control mechanisms.
3. Design a reliable, fault-tolerant, and robust allocation mechanism.
4. Design scalable architectures for resource access in a complex system and provide a plug-and-play resource environment such that resource providers and consumers can enter or depart from the market freely.
5. Provide guarantees to customers and applications on performance criteria.
In this regard, the agent-based, market-like mechanism suggests itself as a means of decentralized, scalable, and robust coordination for resource allocation in a dynamic environment (Tseng et al. 1997). In such a collaborative scheduling system, each workstation is considered an autonomous agent seeking the best return. The individual work order is considered a job agent that vies for
Figure 9 A Market Structure for Collaborative Scheduling.
the lowest cost for resource consumption. System scheduling and control are integrated as an auction-based bidding process with a price mechanism that rewards product similarity and response to customer needs. A typical market model consists of agents, a bulletin board, a market system clock, the market operating protocol, the bidding mechanism, the pricing policy, and the commitment mechanism. Figure 9 illustrates how the market operating protocol defines the rules for synchronization among agents. The satisfaction of multiple criteria, such as costs and responsiveness, cannot be achieved using solely a set of dispatching rules. A price mechanism should be constructed to serve as an invisible hand to guide the coordination in balancing diverse requirements and maximizing performance in a dynamic environment. It is based on market-oriented programming for distributed computation (Adelsberger and Conen 1995). The economic perspective on decentralized decision making has several advantages. It overcomes the narrow view of dispatching rules, responds to current market needs, uses maximal net present value as the objective, and coordinates agents' activities with minimal communication. In collaborative scheduling, the objectives of the job agent are transformed into a set of evaluation functions. The weights of the functions can be adjusted dynamically on the basis of system states and external conditions. Resource agents adjust their charging prices based on their capability and utilization and the state of the current system. Mutual selection and mutual agreement are made through two-way communication. Figure 10 depicts the market price mechanism. In the market model, the job agents change routings (i.e., select different resource agents) and adjust the Job price as a pricing tool to balance the cost of resources and schedule exposure. Resource agents adjust their prices according to market demands on their capability and their optimal utilization and returns. For example, a powerful machine may attract many job agents, and thus the queue will build up and
waiting time will increase. When a resource agent increases its price, the objective of the bidding job will decrease, thus driving jobs to other resources and diminishing the demand for this resource. Specifically, the price mechanism can be expressed as follows:
1. The job agent calculates the price to be paid for operation i:

Job_price = Basic_price + Penalty_price   (1)

where the Penalty_price reflects the due-date factor:

Penalty_price = penalty * e^-(di - ci)   (2)

where di represents the due date and ci the completion time of operation i.
2. The resource agent starts to rank the jobs according to the bid cost:

Cost = pi * tp + (PFAindex * s + Opportunity_cost)   (3)

where s represents the setup cost, tp denotes the processing time of the operation, and PFAindex represents the product family consideration in setting up the manufacturing system. For instance, we can let PFAindex = 0 if two consecutive jobs, i and j, are in the same product family, so that the setup charge is eliminated for the following job; otherwise, PFAindex = 1. The Opportunity_cost in Eq. (3) represents the cost of losing particular slack for other job agents due to the assignment of the resource to one job agent's operation i, as expressed below:

Opportunity_cost = Σj e^(c * min(0, tsj)) * (e^(c * Δtsj) - 1)   (4)

where tsj = tdj - t - wj, tdj represents the due date of the corresponding job, t represents the current time (absolute time), wj represents the remaining work content of the job, and c is a system opportunity cost parameter, set to 0.01. In Eq. (4), Δtsj represents the critical loss of slack of operation j due to the scheduling of operation i before j, as depicted below:

Δtsj = …   (5)

Meanwhile, the resource agent changes its resource price pi:

pi = q * Σk Job_price_k   (6)

where q is a normalization constant.
3. The job agent, over a period of time, collects all bids responding to its task announcement and selects the resource agent with the minimal Actual_cost to confirm the contract:

Actual_cost = pi * tpi + penalty * max((ci - di), 0)   (7)
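One bidding round of this price mechanism can be sketched in code. The sketch implements only the Job_price/Penalty_price relation of Eqs. (1)-(2) and the Actual_cost selection of Eq. (7); the penalty coefficient, the bid figures, and the machine names are invented for illustration and are not from the source.

```python
import math

# Sketch of one auction round: a job agent prices its operation and then
# picks the resource agent whose bid minimizes the actual cost.
# (Penalty coefficient and bid numbers are assumed, illustrative values.)

PENALTY = 5.0  # penalty coefficient (assumed)

def job_price(basic_price, due_date, completion_time):
    # Eqs. (1)-(2): the penalty term grows as completion nears the due date
    penalty_price = PENALTY * math.exp(-(due_date - completion_time))
    return basic_price + penalty_price

def actual_cost(resource_price, processing_time, completion_time, due_date):
    # Eq. (7): resource charge plus a lateness penalty beyond the due date
    return (resource_price * processing_time
            + PENALTY * max(completion_time - due_date, 0))

# Bids collected from the bulletin board: resource agent ->
# (price per unit time, processing time, projected completion time)
bids = {
    "machine_A": (2.0, 3.0, 9.0),
    "machine_B": (1.5, 4.0, 12.0),
}
due = 10.0
best = min(bids, key=lambda r: actual_cost(*bids[r], due))
print(best)  # -> machine_A (cost 6.0 beats machine_B's 16.0 once late)
```

Note how the lateness penalty, not the raw resource price, decides the contract here: machine_B charges less per unit time but finishes past the due date, so the mechanism steers the job to machine_A.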
Figure 11 Message-Based Bidding and Dynamic Control.
Figure 12 Example of Installation of Mass Customization Manufacturing System at
the Hong Kong University of Science and Technology.
1998). Figure 12 illustrates an actual example of the installation of a mass customization manufacturing system at the Hong Kong University of Science and Technology.
4.
SALES AND MARKETING FOR MASS CUSTOMIZATION
In the majority of industries and sectors, customer satisfaction is a powerful factor in the success of products and services. In the era of mass production, customers were willing to constrain their choices to whatever was available, as long as the price was right. However, customers today are liberated and better informed. This leads them to be choosy about their purchases and less willing to compromise with what is on the shelf. Meeting customer requirements requires a full understanding of customers' values and preferences. In addition, it is important that customers know what the company can offer as well as their possible options and the consequences of their choices, such as cost and schedule implications.
4.1.
Design by Customers
The setup time and its resulting economy of scale have been widely accepted as the foundation for the mass production economy, where batch size and lead time are important instruments. Consequently, the popular business model of today's firms is design for customers. Companies design and then produce goods for customers by conforming to a set of specifications that anticipates customers' requirements. Often the forecasting of end users' requirements is developed by the marketing department. It is usually carried out by aggregating the potential needs of customers with consideration of market direction and technology trends. Given the complexities of modern products, the dynamic changes in customers' needs, and the competitive environment in which most businesses have to operate, anticipating potential customers' needs can be very difficult. Chances are that forecasting will deviate from reality by a high margin. Three major economic deficiencies are often encountered. Type A is the possibility of producing something that no customers want. The waste is presented in the form of inventory, obsolescence, or scrap. Although a significant amount of research and development has been conducted on improving forecast accuracy, inventory policy, increased product offerings, shortened time-to-market, and supply chain management, avoiding the possibility of producing products that no customer wants is still a remote, if not impossible, goal. Type B waste comes from not being able to provide what customers need when they are ready to purchase. It often presents itself in the form of shortage or missed opportunity. The costs of retail stores, marketing promotion, and other sales expenditures on top of the goods themselves will disappoint the customers who are ready to purchase. The cost of missed opportunities can be as significant as the first type. Type C deficiency results from customers making compromises between their real requirements and existing SKUs (stock keeping units), that is, what is available on the shelf or in the catalogue. Although these compromises are usually not explicit and are difficult to capture, they lead to customer dissatisfaction, reduce willingness to make future purchases, and erode the competitiveness of a company. To minimize the effect of these deficiencies, one approach is to revise the overall systems design of a manufacturing enterprise. Particularly with the growing flexibility in production equipment, manufacturing information systems, and workforce, the constraints of setup cost and lead time in manufacturing have been drastically reduced. The interface between customers and product realization can be reexamined to ensure that total manufacturing systems produce what the customers want and customers are able to get what the systems can produce within budget and schedule. Furthermore, with the growing trends of cultural diversity and self-expression, more and more customers are willing to pay more for products that suit their individual sizes, tastes, styles, needs, comfort, or expression (Pine 1993). With the rapid growth of Internet usage and e-commerce comes an unprecedented opportunity for manufacturing enterprises to connect directly with customers scattered around the world. In addition, through the Internet and business-to-business e-commerce, manufacturing enterprises can now acquire access to the most economical production capabilities on a global basis. Such connectivity provides the necessary condition for customers to become connected to the company. However, by itself it will not enhance effectiveness. In the last decade, concurrent engineering brought together design and manufacturing, which has dramatically reduced the product development life cycle and hence improved quality and increased productivity and competitiveness. Therefore, design by customers has emerged as a new paradigm that further extends concurrent engineering through connectivity with customers and suppliers (Tseng and Du 1998). The company will be able to take a proactive role in helping customers define needs and negotiate their explicit and implicit requirements. Essentially, it brings the voice of customers into design and manufacturing, linking customer requirements with the company's capabilities and extending the philosophy of concurrent engineering to sales and marketing as part of an integrated
product life cycle. Table 3 summarizes a comparison of these two principles for customer-focused product realization. The rationale of design by customers can be demonstrated by the commonly accepted value chain concept (Porter 1986). The best match of customer needs and company capability poses several technical challenges:

1. Customers must readily understand the capability of a company without being a design engineer or a member of the product-development team.
2. The company must interpret the needs of customers accurately and suggest alternatives that are closest to those needs.
3. Customers must make informed choices with sufficient information about alternatives.
4. The company must have the ability to fulfill needs and get feedback.

To tackle these challenges, it is necessary that customers and the company share a context-coherent framework. Product configuration has been commonly used as a viable approach, primarily because it enables both sides to share the same design domain. Based on the product configuration approach, the value chain, which includes the customer interface, can be divided into four stages:

1. Formation: Presenting the capability that a company can offer in the form of product families and product family structure.
2. Selection: Finding customers' needs and then matching the set of needs by configuring the components and subassemblies within the constraints set by customers.
3. Fulfillment: Includes logistics, manufacturing, and distribution so that customer needs can be satisfied within the cost and time frame specified.
4. Improvement: Customers' preferences, choices, and unmet expressed interests are important inputs for mapping out the future improvement plan.

Formation and selection are new dimensions beyond design for customers. They are explained further below.
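The Selection stage above, matching a set of needs by configuring components within customer-set constraints, can be sketched as a toy configurator. The catalog, the desirability scores, and the greedy budget rule below are illustrative assumptions, not part of the chapter's framework.

```python
# A toy configurator for the Selection stage: pick, per attribute, the level
# that best matches the customer's stated desirability, subject to a budget.
# All data structures and values are invented for illustration.

catalog = {                      # attribute -> {level: unit cost}
    "cpu":    {"basic": 100, "fast": 250},
    "memory": {"8GB": 40, "16GB": 90},
}

def configure(desirability, budget):
    """desirability: attribute -> {level: score in [0, 1]}.
    Greedy per-attribute choice: best-scoring level whose cost still fits."""
    choice, cost = {}, 0
    for attr, levels in catalog.items():
        ranked = sorted(levels, key=lambda l: desirability[attr][l], reverse=True)
        for level in ranked:
            if cost + levels[level] <= budget:
                choice[attr] = level
                cost += levels[level]
                break
    return choice, cost
```

A real configurator would also check compatibility constraints between components; the greedy rule here only enforces the budget.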
4.2.
Helping Customers Make Informed Choices: Conjoint Analysis
Design by customers assumes customers are able to spell out what they want with
clarity. Unfortunately, this is often not the case. To begin with, customers may
not be able to know what is possible. Then the system needs to pull the explicit and implicit needs from customers. Conjoint analysis is a set of methods in marketing research originally designed to measure consumer preferences by assessing the buyer's multiattribute utility functions (Green and Krieger 1989; IntelliQuest 1990). It assumes that a product can be described as a vector of M attributes, Z1, Z2, . . . , ZM. Each attribute can include several discrete levels: attribute Zm can be at any one of the Lm levels, Zm1, Zm2, . . . , Zm,Lm, m ∈ [1, M]. A utility function is defined as (McCullagh and Nelder 1989):

  Ur = Σ(m=1..M) Σ(l=1..Lm) Wm · dml · Xrml   (8)
     = Σ(m=1..M) Σ(l=1..Lm) Uml · Xrml        (9)

where
  Ur  = customer's utility for profile r, r ∈ [1, R]
  Wm  = importance of attribute Zm for the customer
  dml = desirability of the lth level of attribute m, l ∈ [1, Lm], m ∈ [1, M]
  Uml = Wm · dml, the utility of attribute m's lth level
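Eqs. (8) and (9) can be checked with a short numeric sketch: for a given profile, only the chosen level of each attribute (the entries with Xrml = 1) contributes Wm · dml to the total. The attribute names and values below are invented for illustration.

```python
# Computing the profile utility of Eq. (8)/(9): each attribute's chosen level
# contributes W_m * d_ml (= U_ml). All data values are illustrative.

W = {"screen": 0.6, "battery": 0.4}                 # attribute importances W_m
d = {"screen": {"5in": 0.3, "6in": 0.9},            # level desirabilities d_ml
     "battery": {"small": 0.2, "large": 0.8}}

def profile_utility(profile):
    """profile: attribute -> chosen level (the X_rml = 1 entries)."""
    return sum(W[m] * d[m][level] for m, level in profile.items())

u = profile_utility({"screen": "6in", "battery": "large"})
# 0.6*0.9 + 0.4*0.8 = 0.86
```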
TABLE 3  A Comparison of Design for Customers and Design by Customers

Principle               Design for Customers          Design by Customers
Manufacturing practice  Mass production               Mass customization
Design                  Anticipate what customers     Create platforms and
                        would buy                     capabilities
Manufacturing           Build to forecast             Build to order
Sales                   Promote what is available     Assist customers to discover
                                                      what can be done and balance
                                                      their needs
In the above formulation, Xrml is a dummy variable indicating whether or not the particular level of an attribute is selected, as expressed by:

  Xrml = 1 if attribute m is at its lth level; 0 otherwise   (10)
Usually, a large number of attributes, discrete levels, and their preference indices are required to define the preferred products through customer interaction, and thus the process may become overwhelming and impractical. There are several approaches to overcoming this problem. Green et al. (1991) and IntelliQuest (1990) have proposed adaptive conjoint analysis to explore customers' utilities through iterations. Customers are asked to rate the relative importance of attributes and refine the tradeoffs among attributes in an interactive setting by comparing a group of testing profiles. Other approaches, such as the Kano diagram and the analytic hierarchy process (AHP), can also be applied to refine the utility values (Urban and Hauser 1993). With the utility function, Uml, customers can find the relative contribution of each attribute to their wants and thus make the necessary tradeoffs. Customers can finalize their design specifications by maximizing their own personal value for the unit price they are spending.
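The tradeoff in the last sentence, finalizing a specification by maximizing personal value for the unit price, can be sketched as a brute-force search over level combinations. The attribute data and the utility-to-price criterion below are illustrative assumptions, not a method from the chapter.

```python
from itertools import product

# Brute-force search for the specification with the best utility-to-price
# ratio. Attribute levels, utilities, and prices are invented for illustration.

levels = {"cpu": ["basic", "fast"], "ram": ["8GB", "16GB"]}
utility = {("cpu", "basic"): 0.2, ("cpu", "fast"): 0.9,
           ("ram", "8GB"): 0.3, ("ram", "16GB"): 0.7}
price = {("cpu", "basic"): 100, ("cpu", "fast"): 250,
         ("ram", "8GB"): 40, ("ram", "16GB"): 90}

def best_value_spec():
    """Enumerate every combination of levels; keep the best utility/price."""
    best_ratio, best_spec = -1.0, None
    for combo in product(*levels.values()):
        keys = list(zip(levels, combo))
        u = sum(utility[k] for k in keys)
        p = sum(price[k] for k in keys)
        if u / p > best_ratio:
            best_ratio, best_spec = u / p, dict(keys)
    return best_spec
```

Enumeration is only feasible for small product families; adaptive methods such as those cited above exist precisely because the full combination space grows too fast to enumerate.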
4.3. Customer Decision-Making Process
Design by customers allows customers to directly express their own requirements and carry out the mapping to the physical domain. It by no means gives customers a free hand to design whatever they want in a vacuum. Instead, it guides customers in navigating through the capabilities of a firm and defining the best alternative that can meet the cost, schedule, and functional requirements of the customers. Figure 13 illustrates the process of design by customers based on a PFA platform. In the figure, arrows represent data flows, ovals represent processes, and variables in uppercase without subscripts represent sets of relevant variables. This process consists of two phases: the front-end customer interaction for analyzing and matching customer needs, and the back-end supporting process for improving the compatibility of customer needs and corporate capabilities. There are two actors in the scenario: the customers and the system supported by the PFA.
4.3.1.
Phase I: Customer Needs Acquisition
1. Capability presentation: In order to make informed decisions, customers are first informed of the capabilities of the firm, in the form of the spectrum of product offerings, product attributes, and their possible levels. By organizing these capabilities, the PFA provides a systematic protocol for customers to explore design options.
2. Self-explication: Customers are then asked to prioritize desired attributes for their requirements according to their concern about the difference. Customers must assess the value they attach to each attribute and then specify their degree of relative preference between the most desirable and the least desirable levels. The result of this assessment is a set of Wm reflecting the relative importance of each attribute.
3. Utility exploration: Based on Wm, the next task is to find a set of dml(0) that reflect the desirability of attribute levels. Response surface methods can be applied here to create a set of testing profiles to search for the value of desirability of each selected level. The AHP can be used to estimate dml(0). Substituting Wm and dml(0) in Eq. (9), the utility of each attribute level can be derived.
4.3.2.
Phase II: Product Design
1. Preliminary design: With dml(0) and Wm, Uml(0) can be calculated with Eq. (8). A base product (BP) can be determined in accordance with a utility value close to Uml(0). Base product selection can be further fine-tuned through iterative refinement of Uml.
2. Customization: Customers can modify the attributes from Z to Z + ΔZ through the customization process of adding building blocks. ΔZ will be adjusted, and the utility recalculated, until the customer gets a satisfactory solution.
3. Documentation: After it is confirmed by the customers, the design can be delivered. The results include the refined Z and ΔZ and the customized BP and ΔBP. These will be documented for the continuous evolution of the PFA.

Over time, the PFA can be updated so that prospective customers can be better served. This includes changes not only in the offerings of product families but also in production capabilities, so that the capabilities of a firm can be better focused on the needs of its customers.
In practice, customers may find this systematic selection process too cumbersome and time consuming. Research in the area of the customer decision-making process is still ongoing.
Figure 13 The Process of Customer Decision Making (phases: Customer Needs Acquisition; Product Design).
4.4.
One-to-One Marketing
With the rapid transmission of mass media, globalization, and diverse customer bases, the market is no longer homogeneous and stable. Instead, many segmented markets now coexist simultaneously and experience constant change. Customers cast their votes through purchasing to express their preferences for products. With fierce competition, customers can easily switch from one company to another. In addition, they become less loyal to a particular brand. Because a certain portion of customers may bring more value added to the business than the average customer, it is imperative to keep customer loyalty by putting customers at the center of the business. Such a concept has been studied by many researchers (e.g., Greyser 1997). Peppers and Rogers (1997, p. 22) state: "The 1:1 enterprise practices 1:1 marketing by tracking customers individually, interacting with them, and integrating the feedback from each customer into its behavior towards that customer." With the growing popularity of e-commerce, customers can directly interact with product and service providers on a real-time basis. With the help of system support, each individual customer can specify his or her needs and make informed choices. In the meantime, the providers can directly conduct market research with a more precise grasp of customer profiles. This will replace old marketing models (Greyser 1997). At the beginning, the concern was the capability of manufacturers to make products, that is, it was production oriented. Then the focus shifted towards the capability to sell products that had already been made, that is, it was sales oriented. Later on, the theme became customers' preferences and how to accommodate these preferences with respect to company capabilities, that is, it was marketing oriented. With the paradigm shift towards mass customization, manufacturers aim at providing best value to meet customers' individual needs within a short period. The closed-loop
interaction between clients and providers on a one-to-one basis will increase the efficiency of matching buying and selling. Furthermore, the advent of one-to-one marketing transcends geographical and national boundaries. The profile of a customer's individual data can be accessed from anywhere, and in turn the company can serve the customer from anywhere at any time. Nonetheless, it also creates a borderless global market that leads to global competition. To remain sustainable, companies are paying more attention to one-to-one marketing.
5.
MASS CUSTOMIZATION AND E-COMMERCE
The Internet is becoming a pervasive communication infrastructure connecting a growing number of users in corporations and institutions worldwide and hence providing immense business opportunities for manufacturing enterprises. The Internet has shown its capability to connect customers, suppliers, producers, logistics providers, and almost every stage in the manufacturing value chain. Leading companies have already started to reengineer their key processes, such as new product development and fulfillment, to best utilize the high speed and low cost of the Internet. Impressive results have been reported, with significant reductions in lead time, customer value enhancements, and customer satisfaction improvements. Some even predict that a new industrial revolution has already quietly started, geared towards e-commerce-enabled mass customization (Economist 2000; Helander and Jiao 2000). In essence, mass customization attempts to bring customers and company capabilities closer together. With the Internet, customers and providers in different stages of production can be connected at multiple levels of the Web. How this new capability will be utilized is still at a very early stage. For example, customers can be better informed about important features and the related costs and limitations. Customers can then make educated choices in a better way. In the meantime, through these interactions the company will be able to acquire information about customers' needs and preferences and can consequently build up its capabilities in response to these needs. Therefore, e-commerce will be a major driving force and an important enabler for shaping the future of mass customization. Rapid communication over the Internet will revolutionize not only trade but also all the business functions. A paradigm of electronic design and commerce (eDC) can be envisioned as shown in Figure 14. Further expanded to the entire enterprise, it is often referred to as the electronic enterprise (eEnterprise). Three pillars support eDC or eEnterprise: the integrated product life cycle, mass customization, and the supply chain. The integrated product life cycle incorporates elements that are essential to companies, including marketing / sales, design, manufacturing, assembly, and logistics. Using the Internet, some of these activities may be handed over to the supply chain. There may also be other companies, similar to the regular supply chain, that supply services. These constitute business-to-service functions.
Figure 14 A Systems Model of eDC / eEnterprise.
With the communication and interactivity brought by the Internet, the physical locations of companies may no longer be important. The common business model with a core company that engages in design, manufacturing, and logistics will become less common. Manufacturing, as well as design and logistics, may, for example, be conducted by outside service companies. As a result, in view of the supply chain and service companies, the core of business-to-business e-commerce is flourishing. Figure 14 also illustrates a systems approach to manufacturing. It is a dynamic system with feedback. For each new product or customized design, one must go around the loop. The purpose is to obtain information from marketing and other sources to estimate customer needs. The product life cycle is therefore illustrated by one full circle around the system. The company can sell products to distributors and / or directly to customers through business-to-customer e-commerce. In some cases, products may be designed by the customer himself or herself. This is related to mass customization. Customer needs are then captured directly through the customers' preferences: the customers understand what they want and can submit their preferred design electronically. A well-known example is Dell Computers, where customers can select the elements that constitute a computer according to their own preferences. Usually, information about customer needs may be delivered by sales and marketing. Typically, they rely on analyses of customer feedback and predictions for the future. These remain important sources of information for new product development. From this kind of information, the company may redesign existing products or decide to develop a new one. The design effort has to take place concurrently, with many experts involved representing the various areas of expertise and parties that collaborate through the system. The supply chain companies may also participate if necessary. For manufacturing the product, parts and / or other services may be bought from the supply chain and delivered just-in-time to manufacturing facilities. These constitute typical business-to-business e-commerce. Some important technical issues associated with eDC include human-computer interaction and usability (Helander and Khalid 2001), the customer decision-making process over the Internet (So et al. 1999), product line planning and electronic catalogs, and Web-based collaborative design modeling and design support.
6.
SUMMARY
Mass customization aims at better serving customers with products and services that are closer to their needs, while building products upon economies of scale that lead to mass production efficiency. To this end, an orchestrated effort across the entire product life cycle, from design to recycling, is necessary. The challenge lies in how to leverage product families and how to achieve synergy among different functional capabilities in the value chain. This may have a significant impact on the organizational structure of a company in terms of new methods, education, and the division of labor in marketing, sales, design, and manufacturing. The technological road map of mass customization can also lead to the redefinition of jobs, methodologies, and investment strategies, as witnessed in current practice. For instance, the sales department will be able to position itself to sell its capabilities instead of a group of point products. As a new frontier of business competition and a new production paradigm, mass customization has emerged as a critical issue. Mass customization can best be realized by building on, rather than directly synthesizing, existing thrusts of advanced manufacturing technologies, such as JIT and flexible, lean, and agile manufacturing, among many others. Obviously, much needs to be done. This chapter provides materials for stimulating an open discussion on further exploration of mass customization techniques.
REFERENCES

Adelsberger, H. H., and Conen, W. (1995), "Scheduling Utilizing Market Models," in Proceedings of the 3rd International Conference on CIM Technology (Singapore), pp. 695-702.
Baker, A. D. (1991), "Manufacturing Control with a Market-Driven Contract Net," Ph.D. Thesis, Rensselaer Polytechnic Institute, Troy, NY.
Baldwin, R. A., and Chung, M. J. (1995), "Managing Engineering Data for Complex Products," Research in Engineering Design, Vol. 7, pp. 215-231.
Boynton, A. C., and Bart, V. (1991), "Beyond Flexibility: Building and Managing the Dynamically Stable Organization," California Management Review, Vol. 34, No. 1, pp. 53-66.
Burke, P., and Prosser, P. (1991), "A Distributed Asynchronous System for Predictive and Reactive Scheduling," Artificial Intelligence in Engineering, Vol. 6, No. 3, pp. 106-124.
Choi, S. C., and Desarbo, W. S. (1994), "A Conjoint-Based Product Designing Procedure Incorporating Price Competition," Journal of Product Innovation Management, Vol. 11, pp. 451-459.
Choi, S. C., Desarbo, W. S., and Harker, P. T. (1990), "Product Positioning under Price Competition," Management Science, Vol. 36, pp. 175-199.
Clearwater, S. H., Ed. (1996), Market-Based Control: A Paradigm for Distributed Resource Management, World Scientific, Singapore.
Davis, S. M. (1987), Future Perfect, Addison-Wesley, Reading, MA.
Economist, The (2000), "All Yours: The Dream of Mass Customization," Vol. 355, No. 8164, pp. 57-58.
Erens, F. J., and Hegge, H. M. H. (1994), "Manufacturing and Sales Co-ordination for Product Variety," International Journal of Production Economics, Vol. 37, No. 1, pp. 83-99.
Erens, F., McKay, A., and Bloor, S. (1994), "Product Modeling Using Multiple Levels of Abstraction: Instances as Types," Computers in Industry, Vol. 24, No. 1, pp. 17-28.
Erixon, G. (1996), "Design for Modularity," in Design for X: Concurrent Engineering Imperatives, G. Q. Huang, Ed., Chapman & Hall, New York, pp. 356-379.
Feitzinger, E., and Lee, H. L. (1997), "Mass Customization at Hewlett-Packard: The Power of Postponement," Harvard Business Review, Vol. 75, pp. 116-121.
Gou, L., Hasegawa, T., Luh, P. B., Tamura, S., and Oblak, J. M. (1994), "Holonic Planning and Scheduling for a Robotic Assembly Testbed," in Proceedings of the 4th International Conference on CIM and Automation Technology (Troy, NY, October), IEEE, New York, pp. 142-149.
Green, P. E., and Krieger, A. M. (1989), "Recent Contributions to Optimal Product Positioning and Buyer Segmentation," European Journal of Operational Research, Vol. 41, No. 2, pp. 127-141.
Green, P. E., Krieger, A. M., and Agarwal, M. K. (1991), "Adaptive Conjoint Analysis: Some Caveats and Suggestions," Journal of Marketing Research, Vol. 28, No. 2, pp. 215-222.
Greyser, S. A. (1997), "Janus and Marketing: The Past, Present, and Prospective Future of Marketing," in Reflections on the Futures of Marketing, D. R. Lehmann and K. E. Jocz, Eds., Marketing Science Institute, Cambridge, MA.
Hatvany, J. (1985), "Intelligence and Cooperation in Heterarchic Manufacturing Systems," Robotics and Computer Integrated Manufacturing, Vol. 2, No. 2, pp. 101-104.
Hegge, H. M. H., and Wortmann, J. C. (1991), "Generic Bills-of-Material: A New Product Model," International Journal of Production Economics, Vol. 23, Nos. 1-3, pp. 117-128.
Helander, M., and Jiao, J. (2000), "E-Product Development (EPD) for Mass Customization," in Proceedings of the IEEE International Conference on Management of Innovation and Technology, Singapore, pp. 848-854.
Helander, M. G., and Khalid, H. M. (2001), "Modeling the Customer in Electronic Commerce," Applied Ergonomics (in press).
IntelliQuest (1990), Conjoint Analysis: A Guide for Designing and Integrating Conjoint Studies, Marketing Research Technique Series Studies, American Marketing Association, Market Research Division, TX.
Ishii, K., Juengel, C., and Eubanks, C. F. (1995a), "Design for Product Variety: Key to Product Line Structuring," in Proceedings of Design Engineering Technical Conferences, ASME, DE-Vol. 83, pp. 499-506.
Ishii, K., Lee, B. H., and Eubanks, C. F. (1995b), "Design for Product Retirement and Modularity Based on Technology Life-Cycle," in Manufacturing Science and Engineering, MED-Vol. 2-2 / MH-Vol. 3-2, ASME, pp. 921-933.
Krause, F. L., Kimura, F., Kjellberg, T., and Lu, S. C.-Y. (1993), "Product Modeling," Annals of the CIRP, Vol. 42, No. 1, pp. 695-706.
Lin, G. Y., and Solberg, J. J. (1992), "Integrated Shop Floor Control Using Autonomous Agents," IIE Transactions, Vol. 24, No. 3, pp. 57-71.
Malone, T. W., Fikes, R. E., Grant, K. R., and Howard, M. T. (1988), "Enterprise: A Market-Like Task Scheduler for Distributed Computing Environments," in The Ecology of Computation, B. A. Huberman, Ed., North-Holland, Amsterdam, pp. 177-205.
Markus, A., and Monostori, L. (1996), "A Market Approach to Holonic Manufacturing," Annals of the CIRP, Vol. 45, No. 1, pp. 433-436.
Martin, M. V., and Ishii, K. (1996), "Design for Variety: A Methodology for Understanding the Costs of Product Proliferation," in Proceedings of ASME Design Engineering Technical Conferences, DTM-1610, Irvine, CA.
Martin, M. V., and Ishii, K. (1997), "Design for Variety: Development of Complexity Indices and Design Charts," in Proceedings of ASME Design Engineering Technical Conferences, DFM-4359, Sacramento, CA.
McCullagh, P., and Nelder, J. A. (1989), Generalized Linear Models, 2nd Ed., Chapman & Hall, London.
Meyer, M., and Lehnerd, A. P. (1997), The Power of Product Platforms: Building Value and Cost Leadership, Free Press, New York.
Meyer, M. H., and Utterback, J. M. (1993), "The Product Family and the Dynamics of Core Capability," Sloan Management Review, Vol. 34, No. 3, pp. 29-47.
Newcomb, P. J., Bras, B., and Rosen, D. W. (1997), "Implications of Modularity on Product Design for the Life Cycle," in Proceedings of ASME Design Engineering Technical Conferences, DETC96 / DTM-1516, Irvine, CA.
Page, A. L., and Rosenbaum, H. F. (1987), "Redesigning Product Lines with Conjoint Analysis: How Sunbeam Does It," Journal of Product Innovation Management, Vol. 4, pp. 120-137.
Peppers, D., and Rogers, M. (1997), Enterprise One to One, Currency Doubleday, New York.
Pine, B. J. (1993), Mass Customization: The New Frontier in Business Competition, Harvard Business School Press, Boston.
Porter, M. (1986), Competition in Global Industries, Harvard Business School Press, Boston.
Rezendes, C. (1997), "More Value-Added Software Bundled with Your Scanners This Year," Automatic I.D. News, Vol. 13, No. 1, pp. 29-30.
Roberts, E. B., and Meyer, M. H. (1991), "Product Strategy and Corporate Success," IEEE Engineering Management Review, Vol. 19, No. 1, pp. 4-18.
Ryan, N. (1996), "Technology Strategy and Corporate Planning in Australian High-Value-Added Manufacturing Firms," Technovation, Vol. 16, No. 4, pp. 195-201.
Sanderson, S., and Uzumeri, M. (1995), "Managing Product Families: The Case of the Sony Walkman," Research Policy, Vol. 24, pp. 761-782.
Schreyer, M., and Tseng, M. M. (1998), "Modeling and Design of Control Systems for Mass Customization Manufacturing," in Proceedings of the 3rd Annual Conference on Industrial Engineering Theories, Applications, and Practice (Hong Kong).
Shaw, M. J. (1988), "Dynamic Scheduling in Cellular Manufacturing Systems: A Framework for Networked Decision Making," Journal of Manufacturing Systems, Vol. 7, No. 2, pp. 83-94.
Sikora, R., and Shaw, M. J. P. (1997), "Coordination Mechanisms for Multi-agent Manufacturing Systems: Application to Integrated Manufacturing Scheduling," IEEE Transactions on Engineering Management, Vol. 44, No. 2, pp. 175-187.
So, R. H. Y., Ling, S. H., and Tseng, M. M. (1999), "Customer Behavior in Web-Based Mass Customization Systems: An Experimental Study on the Buying Decision-Making Process over the Internet," Hong Kong University of Science and Technology.
Subrahmanian, E., Westerberg, A., and Podnar, G. (1991), "Towards a Shared Computational Environment for Engineering Design," in Computer-Aided Cooperative Product Development, D. Sriram, R. Logcher, and S. Fukuda, Eds., Springer, Berlin.
Suh, N. P. (1990), The Principles of Design, Oxford University Press, New York.
Suh, N. P. (1997), "Design of Systems," Annals of the CIRP, Vol. 46, No. 1, pp. 75-80.
Suzue, T., and Kohdate, A. (1990), Variety Reduction Program: A Production Strategy for Product Diversification, Productivity Press, Cambridge, MA.
Teresko, J. (1994), "Mass Customization or Mass Confusion," Industrial Week, Vol. 243, No. 12, pp. 45-48.
Toffler, A. (1971), Future Shock, Bantam Books, New York.
Tseng, M. M., and Du, X. (1998), "Design by Customers for Mass Customization Products," Annals of the CIRP, Vol. 47, No. 1, pp. 103-106.
Tseng, M. M., and Jiao, J. (1996), "Design for Mass Customization," Annals of the CIRP, Vol. 45, No. 1, pp. 153-156.
Tseng, M. M., Lei, M., and Su, C. J. (1997), "A Collaborative Control System for Mass Customization Manufacturing," Annals of the CIRP, Vol. 46, No. 1, pp. 373-377.
Ulrich, K. (1995), "The Role of Product Architecture in the Manufacturing Firm," Research Policy, Vol. 24, pp. 419-440.
Ulrich, K. T., and Eppinger, S. D. (1995), Product Design and Development, McGraw-Hill, New York.
Ulrich, K. T., and Seering, W. P. (1990), "Function Sharing in Mechanical Design," Design Studies, Vol. 11, pp. 223-234.
Ulrich, K., and Tung, K. (1991), "Fundamentals of Product Modularity," in Issues in Mechanical Design International 1991, A. Sharon, Ed., ASME DE-39, New York, pp. 73-79.
Upton, D. M., and Barash, M. M. (1991), "Architectures and Auctions in Manufacturing," International Journal of Computer Integrated Manufacturing, Vol. 4, No. 1, pp. 23-33.
Urban, G. L., and Hauser, J. R. (1993), Design and Marketing of New Products, 2nd Ed., Prentice Hall, Englewood Cliffs, NJ.
van Veen, E. A. (1992), Modeling Product Structures by Generic Bills-of-Material, Elsevier, New York.
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
Figure 2 Relationship of Client and Server.
Figure 3 Logical Structure vs. Physical Structure. (a) Host processing model. (b) C / S processing model.
the server call back the client, so the client becomes a server. For example, in an Internet banking system, a user program (a WWW browser) is a client, and a program located at the bank that offers services to the client is a server. However, the role of the program located at the bank is not always fixed to that of a server. To offer a fund-transfer service to the user, the program located at bank A asks for a fund-acceptance process located at bank B. In this case, the program at bank A becomes a client and the program at bank B becomes a server. Thus, the roles of client and server change dynamically depending on the circumstances. In this chapter, "client" means a program that is located on the user side and provides the user interface and presentation functions, and "server" means a program that receives requests from the client and provides services to the client.
1.3. Computer Technology Trends
1.3.1. Web Technologies
A C / S system is realized by a network over which clients and servers work together to accomplish a task. To develop network applications, it is important for a developer to select an appropriate communication protocol from the viewpoint of cost and performance. The communication protocol
1.3.2. Open System Technologies
A C / S system is one of the models of distributed computing in an environment of heterogeneous computers. For this reason, open system technologies such as communication protocols and development languages are required. For example, the adoption of the Java language as the development language has increased, as has the adoption of CORBA (Common Object Request Broker Architecture) as the interconnection technology. Java is an object-oriented programming language from Sun Microsystems and a portable operating system environment that enables the writing of portable components. CORBA is a standard architecture created by the OMG (Object Management Group) for a message broker called the object request broker, a software component that enables communication among objects in a distributed environment. The common feature of Java and CORBA is platform independence. The operating systems on which C / S systems are most commonly developed are UNIX, which mainly runs on workstations, and Windows NT, which mainly runs on server-class personal computers. Java and CORBA are designed to run on both operating systems and to cooperate mutually. In the future, technologies like Java and CORBA that work in heterogeneous computing environments will become more and more important. Due to recent progress in hardware technologies, the processing capacity of a personal computer has become almost equal to that of a workstation. As a result, there are so many choices among the various technologies that selecting appropriate system components is difficult. The system configuration has also become more complex because of the combination of distributed components as well as their heterogeneity.
2. FEATURES OF C / S SYSTEMS
2.1. Advantages
The advantages of C / S systems derive from the distributed processing structure that they adopt, in which processing is handled by two or more cooperating, geographically distinct processors.

1. Performance tuning: In a distributed processing environment, the service requests from a client are not concentrated on a single server but can be distributed among several servers. It therefore becomes possible to distribute the required workload among processing resources and to improve performance measures such as the response time seen by the client. Furthermore, adopting an efficient request-distribution algorithm, such as an adaptive algorithm that takes account of the current loads of all servers, makes it possible to improve the throughput (the amount of work processed per unit time) of the system.
2. Availability improvement: Clients are sensitive to the performance of the system, but they do not perceive the physical structure over which requests are distributed. In other words, the group of networked servers appears to the client as one server. In a distributed processing environment, even if one server breaks down, other servers can substitute for the broken server and continue to process its tasks. This guarantees high availability of services to the client compared to a host-centric processing environment.
3. Scalability and cost-efficiency: Scalability is the capacity of the system to perform more total work in the same elapsed time when its processing power is increased. With a distributed system, the processing capacity of the system can be scaled incrementally by adding new computers or network elements as the need arises, and excessive investment can be avoided at the introduction stage of the C / S system. With a host-centric system, on the other hand, if the load level is going to saturate the system, the current computer must be replaced by a more powerful one. If further growth is planned, redundant computer capacity will need to be built into the current system to cope with future growth in requirements.
2.2. Disadvantages
1. Strong dependence on the network infrastructure: Because a distributed environment is largely based on a networking infrastructure, the performance and availability of the distributed system are strongly influenced by the state of the network. For example, if a network failure occurs, such as a router or transmission-line failure, performance and availability of services may seriously deteriorate. The system designer and manager should design a fault-tolerant system and plan for disaster recovery, taking account of the networking infrastructure.
2. Security: From the viewpoint of security, the possibility exists that confidential information stored in the database may leak out through the network. The system manager should take security measures such as authentication, access control, and cryptographic control according to the security level of the information.
3. Complexity of system configuration: C / S systems are composed of several components from multiple vendors. Distributed applications often have more complex functionality than cen-
into two parts, which are placed on the client and the server, respectively. In most cases, the presentation logic and the application logic are placed on the client side and the data logic is placed on the server side. Examples of two-tier C / S systems are file servers and database servers with stored procedures. Figure 5 shows a two-tier architecture. The advantage of the two-tier architecture is ease of development. Because the two-tier architecture is simple, a developer can create applications quickly using a 4GL (fourth-generation language) development environment such as Microsoft's Visual Basic or Inprise's Delphi. However, as C / S systems grew to run mission-critical or enterprise-wide applications, shortcomings of the two-tier architecture emerged:

1. Performance: Performance may deteriorate markedly as the number of clients, the size of the database, or the volume of transferred data increases. This deterioration is caused by a lack of capacity of resources such as the processing unit, the memory area, and the network bandwidth
Figure 5 Two-Tier Architecture.
when the processing load is concentrated on the database server. Placing a part of the application logic on the database server allows network traffic between the client and the database server to be reduced and performance to be improved. For example, a set of database access commands is compiled and saved in the database of the server. This scheme is called a stored procedure, and some DBMS packages support it. But there remains the problem that application maintenance becomes complicated.
2. Maintenance: Because the business logic is placed on the client side in most two-tier C / S systems, version control of applications becomes complex. For example, in the case of an online shopping system, business logic (or rules) such as the tax rate is implemented on the client side. Although this structure is effective in reducing network traffic between the client and the server, a system manager has to replace the application on all the clients every time the business rules or the version of the application changes. This becomes a big problem for the total cost of ownership of two-tier C / S systems: the costs of maintaining, operating, and managing large-scale systems accommodating many users and services.
3.3. Three-Tier Architecture
To solve these problems of the two-tier architecture and improve the reliability and performance of the system, a three-tier architecture has been adopted. In a three-tier architecture, the business logic is separated from the presentation and data layers and becomes the middle layer between them. The presentation logic resides in the first tier (the client side), the data logic in the third tier (the server side), and the business logic in the second tier (the middle), between the other two tiers. This solves the problems that occur with the two-tier architecture. Figure 6 shows a three-tier architecture in which the business logic and some connection functions are placed in the second tier. In a three-tier architecture, the second tier plays the most important role. Many functions, such as data access and connection with other systems, are located in the second tier and can be used by any client. That is, the second tier becomes the gateway to other systems. The second tier is often called the application server. The advantages of the three-tier architecture derived from the application server are:

1. Reduction of network traffic: Concentrating the functions of accessing the database and other systems on the application server makes it possible to reduce network traffic between the client and the server. That is, instead of interacting with the database directly, the client calls the business logic on the application server, and the business logic then accesses the database on the database server on behalf of the client. Therefore, only service requests and responses are sent between the client and the server. Comparisons of the two-tier and three-tier architectures from the viewpoint of network traffic are shown in Figure 7.
2. Scalability: Adding or removing application servers or database servers allows the system to be scaled incrementally according to the number of clients and the volume of requests.
3. Performance: Because workloads can be distributed across application servers and database servers, this architecture can prevent an application server from becoming a bottleneck and can keep the performance of the system within a permissible range.
4. Availability: Even if an application server fails while processing the business logic, the client can restart the business logic on another application server.
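The division of labor among the three tiers can be sketched as layered functions. This is an illustration only: the item prices and tax rate are invented, and plain function calls stand in for the networked requests between tiers. Note that the tax rate lives on the application server, echoing the maintenance argument made for this architecture.

```python
# --- Tier 3: data logic (a stand-in for a database server) ---
PRICES = {"book": 20.0, "pen": 2.0}

def fetch_price(item):
    return PRICES[item]

# --- Tier 2: business logic on the application server ---
TAX_RATE = 0.08   # kept on the server: changing it requires no client update

def quote(item, quantity):
    price = fetch_price(item)              # tier 2 -> tier 3
    return round(price * quantity * (1 + TAX_RATE), 2)

# --- Tier 1: presentation logic on the client ---
def show_quote(item, quantity):
    total = quote(item, quantity)          # tier 1 -> tier 2: one request/response
    return f"{quantity} x {item}: {total}"

print(show_quote("book", 3))   # 3 x book: 64.8
```

Only the request ("book", 3) and the response (64.8) cross the client/server boundary; the per-item price lookup stays between the application server and the data tier.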
4. FUNDAMENTAL TECHNOLOGIES FOR C / S SYSTEMS
4.1. Communication Methods
The communication method that facilitates interactions between a client and a server is the most important part of a C / S system. For easy development of C / S systems, it is important that processes located on physically distributed computers can be seen as if they were placed in the same machine. The socket, remote procedure call, and CORBA are the main interfaces that realize such transparent communication between distributed processes.
4.1.1. Socket
The socket interface, developed as the communication port of Berkeley UNIX, is an API that enables communication between processes over networks in the manner of input/output access to a local file. Communication between two processes using the socket interface is realized by the following steps (see Figure 8):

1. For two processes on different computers to communicate with each other, a communication port on each computer must be created beforehand by using the socket system call.
2. Each socket is given a unique name, by using the bind system call, so that the communication partner can be recognized. The named socket is registered to the system.
3. At the server process, the socket is prepared for communication, and the server process indicates that it is able to accept connections, by using the listen system call.
4. The client process connects to the server process by using the connect system call.
Figure 8 Interprocess Communication via Socket Interface.
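The system-call sequence above, together with the accept and read/write steps shown in Figure 8, can be sketched with Python's socket API, a thin wrapper over the Berkeley interface. The loopback address, port number, and messages are illustrative; the server runs in a thread so the whole exchange fits in one script.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007   # illustrative address and port
ready = threading.Event()

def server():
    # (1) create a socket, (2) bind a name to it, (3) listen for connections
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                         # the named socket now accepts connections
        conn, _addr = srv.accept()          # (5) accept the client's connection
        with conn:
            data = conn.recv(1024)          # (6) read the request ...
            conn.sendall(b"echo: " + data)  # ... and write the reply

t = threading.Thread(target=server)
t.start()
ready.wait()

# (4) the client connects to the server's named socket, then writes and reads
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))
    cli.sendall(b"hello")
    reply = cli.recv(1024)

t.join()
print(reply.decode())   # echo: hello
```

The numbered comments match the steps in the text and in Figure 8; the event merely prevents the client from connecting before the server has reached the listen state.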
which to manipulate them. Object-oriented computing is enabling faster software development by promoting software reusability, interoperability, and portability. In addition to enhancing the productivity of application developers, object frameworks are being used for data management, enterprise modeling, and system and network management. When objects are distributed, it is necessary to enable communication among the distributed objects. The Object Management Group (OMG), a consortium of object technology vendors founded in 1989, created a technology specification named CORBA (Common Object Request Broker Architecture). CORBA employs an abstraction similar to that of RPC, with a slight modification that simplifies programming and maintenance and increases the extensibility of products. The basic service provided by CORBA is the delivery of requests from the client to the server and the delivery of responses to the client. This service is realized by using a message broker for objects, called the object request broker (ORB). An ORB is the central component of CORBA and handles the distribution of messages between objects. Using an ORB, client objects can transparently make requests to (and receive responses from) server objects, which may be on the same computer or across a network. An ORB consists of several logically distinct components, as shown in Figure 10. The Interface Definition Language (IDL) is used to specify the interfaces of the ORB as well as the services that objects make available to clients. The job of the IDL stub and skeleton is to hide the details of the underlying ORB from application programmers, making remote invocation look similar to local invocation. The dynamic invocation interface (DII) provides clients with an alternative to using IDL stubs when invoking an object. Because in general a stub routine is specific to a particular operation on a particular object, the client must know about the server object in detail. The DII, on the other hand, allows the client to invoke an operation on a remote object dynamically. The object adapter provides an abstraction mechanism that removes the details of object implementation from the messaging substrate: generation and destruction of objects, activation and deactivation of objects, and invocation of objects through the IDL skeleton. When a client invokes an operation on an object, the client must identify the target object. The ORB is responsible for locating the object, preparing it to receive the request, and passing the data needed for the request to the object. Object references are used by the ORB on the client side to identify the target object. Once the object has executed the operation identified by the request, if a reply is needed, the ORB is responsible for returning the reply to the client. The communication process between a client object and a server object via the ORB is shown in Figure 11.
1. A service request invoked by the client object is handled locally by the client stub. At this point it looks to the client as if the stub were the actual target server object.
2. The stub and the ORB then cooperate to transmit the request to the remote server object.
3. On the server side, an instance of the skeleton, instantiated and activated by the object adapter, is waiting for the client's request. On receipt of the request from the ORB, the skeleton passes the request to the server object.
4. The server object then executes the requested operations and creates a reply if necessary.
5. Finally, the reply is sent back to the client through the skeleton and the ORB, which perform the reverse processing.
Figure 10 The CORBA Architecture.
Figure 12 Development Process of a CORBA C / S Application. (The server interface, written in IDL, is compiled with the IDL compiler, which automatically generates the client stub and the server skeleton; these are then used to implement the client and the server, written in a language such as C++, Java, or COBOL, and compiled.)
Atomicity means that all the operations of the transaction succeed or they all fail. If the client or the server fails during a transaction, the transaction must appear to have either completed successfully or failed completely. Consistency means that after a transaction is executed, it must leave the system in a correct state or it must abort, leaving the state as it was before the execution began. Isolation means that the operations of a transaction are not affected by other transactions that are executed concurrently, as if the separate transactions had been executed one at a time. The transaction must serialize all accesses to shared resources and guarantee that concurrent programs will not affect each other's operations. Durability means that the effect of a transaction's execution is permanent after it commits: its changes should survive system failures.
Transaction processing should ensure that either all the operations of the transaction complete successfully (commit) or none of them do. For example, consider a case in which a customer transfers money from an account A to another account B in a banking system. If account A and account B are registered in two separate databases at different sites, both the withdrawal from account A and the deposit into account B must commit together. If a database crashes while the updates are being processed, the system must be able to recover. In the recovery procedure, both updates must be stopped or aborted and the state of both databases must be restored to the state before the transaction began. This procedure is called rollback.
confirmed that all the sites succeeded, the official commitment is performed. The two-phase commit protocol has some limitations. One is the performance overhead that is introduced by all the message exchanges. If the remote sites are distributed over a wide area network, the response time could suffer further. The two-phase commit is also very sensitive to the availability of all sites at the time of update, and even a single point of failure could jeopardize the entire transaction. Therefore, decisions should be based on the business needs and the trade-off between the cost of maintaining the data on a single site and the cost of the two-phase commit when data are distributed at remote sites.
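The prepare/commit exchange described above can be sketched with in-memory participants. This is a minimal illustration of the protocol's control flow under stated assumptions (a hypothetical Participant class, no logging, no failure recovery), not a production implementation.

```python
class Participant:
    """A site holding one account balance; votes in phase 1, commits or aborts in phase 2."""
    def __init__(self, balance):
        self.balance = balance
        self.pending = None

    def prepare(self, delta):
        # Phase 1: check whether the update can be applied, and vote.
        if self.balance + delta < 0:
            return False          # vote "abort" (e.g., insufficient funds)
        self.pending = delta
        return True               # vote "commit"

    def commit(self):
        # Phase 2: make the tentative update official.
        self.balance += self.pending
        self.pending = None

    def rollback(self):
        # Phase 2: discard the tentative update, restoring the prior state.
        self.pending = None

def transfer(site_a, site_b, amount):
    """Coordinator: withdraw from A and deposit into B atomically."""
    votes = [site_a.prepare(-amount), site_b.prepare(+amount)]
    if all(votes):                       # every site voted "commit"
        site_a.commit(); site_b.commit()
        return "committed"
    site_a.rollback(); site_b.rollback() # any "abort" vote rolls back all sites
    return "rolled back"

a, b = Participant(100), Participant(50)
print(transfer(a, b, 80), a.balance, b.balance)   # committed 20 130
print(transfer(a, b, 80), a.balance, b.balance)   # rolled back 20 130
```

The second transfer fails at site A in phase 1, so neither balance changes: both updates commit together or neither does.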
5. CAPACITY PLANNING AND PERFORMANCE MANAGEMENT
5.1. Objectives of Capacity Planning
Client / server systems are made up of many hardware and software resources, including client workstations, server systems, and network elements. User requests share the use of these common resources. The shared use of these resources gives rise to contention that degrades the system behavior and worsens users' perception of performance. In the example of the Internet banking service described in Section 7, the response time, the time from when a user clicks the URL of a bank until the home page of the bank is displayed on
Figure 13 Inconsistency Arising from Simultaneous Updates. (Two users read an initial inventory of 300; one writes back 200 after removing 100 pieces, the other writes back 500 after adding 200 pieces, so one of the updates is lost.)
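The lost-update anomaly in Figure 13 can be reproduced deterministically by scripting the interleaving, and fixed by serializing each read-modify-write; the Inventory class here is illustrative.

```python
import threading

class Inventory:
    def __init__(self, count):
        self.count = count

# --- The anomaly: both users read before either writes back ---
inv = Inventory(300)
seen_by_user1 = inv.count          # user 1 inquires: 300
seen_by_user2 = inv.count          # user 2 inquires: 300
inv.count = seen_by_user1 - 100    # user 1 removes 100 and writes 200
inv.count = seen_by_user2 + 200    # user 2 adds 200 and writes 500, overwriting user 1
lost_update_result = inv.count     # 500, although 300 - 100 + 200 = 400

# --- The fix: serialize each read-modify-write as one isolated unit ---
inv = Inventory(300)
lock = threading.Lock()

def update(delta):
    with lock:                     # no other update can interleave here
        inv.count = inv.count + delta

update(-100)
update(+200)
serialized_result = inv.count      # 400

print(lost_update_result, serialized_result)   # 500 400
```

In a real C / S system the lock's role is played by the DBMS's concurrency control (locking or transaction isolation); the point is the same: the inquiry and the update must form one indivisible unit.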
the user's Web page, affects users' perception of performance and quality of service. In designing a system that handles requirements from multiple users, service levels may be set; for example, the average response time for requests must not exceed 2 sec, or 95% of requests must exhibit a response time of less than 3 sec. For the given service requirements, service providers must design and maintain C / S systems that meet the desired service levels. Therefore, the performance of the C / S system should be evaluated as exactly as possible, and the kinds and sizes of hardware and software resources included in client systems, server systems, and networks should be determined. This is the goal of capacity planning. The following are resources to be designed by capacity planning to ensure that the C / S system performance will meet the service levels:
- Types and numbers of processors and disks, the type of operating system, etc.
- Type of database server, access language, database management system, etc.
- Type of transaction-processing monitor
Because new technology, an altered business climate, an increased workload, changes in users, or demand for new applications affects system performance, these steps should be repeated at regular intervals and whenever problems arise.
5.3. Performance Objectives and System Environment
Specific objectives should be quantified and recorded as service requirements. Performance objectives are essential to enable all aspects of performance management. To manage performance, we must set quantifiable, measurable performance objectives, then design with those objectives in mind, project to see whether we can meet them, monitor to see whether we are meeting them, and adjust system parameters to optimize performance. To set performance objectives, we must make a list of system services and expected effects. We must learn what kinds of hardware (clients and servers), software (OS, middleware, applications), network elements, and network protocols are present in the environment. Characterizing the environment also involves the identification of peak usage periods, management structures, and service-level agreements. To gather this information about the environment, we use various information-gathering techniques, including user group meetings, audits, questionnaires, help desk records, planning documents, and interviews.
5.4. Performance Criteria
1. Response time is the time required to process a single unit of work. In interactive applications, the response time is the interval between when a request is made and when the computer responds to that request. Figure 15 shows a response-time example for a client / server database access application. As the application executes, a sequence of interactions is exchanged among the components of the system, each of which contributes in some way to the delay that the user experiences between initiating the request for service and viewing the DBMS's response to the query. In this example, the response time consists of several time components, such as (a) interaction with the client application, (b) conversion of the request into a data stream by an API, (c) transfer of the data stream from the client to the server by communication stacks, (d) translation of the request and invocation of a stored procedure by the DBMS, (e) execution of SQL (a relational data-manipulation language) calls by the stored procedure, (f) conversion of the results into a data stream by the DBMS, (g) transfer of the data stream from the server to the client by the API, (h) passing of the data stream to the application, and (i) display of the first result.
2. Throughput is a measure of the amount of work a component or a system performs as a whole, or of the rate at which a particular workload is being processed.
3. Resource utilization normally means the level of use of a particular system component. It is defined by the ratio of what is used to what is available. Although unused capacity is a waste of resources, a high utilization value may indicate that bottlenecks in processing will occur in the near future.
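Given a log of request start and finish times and a measured busy time, the three criteria above can be computed directly; the measurement window and log values below are invented for illustration.

```python
# Each record: (start_time, finish_time) of one request, in seconds,
# all observed during a measurement window of length `window`.
requests = [(0.0, 0.8), (0.5, 1.9), (2.0, 2.6), (3.1, 4.0), (4.2, 4.7)]
window = 5.0                      # seconds of observation
busy_time = 3.2                   # seconds the measured server was busy

# 1. Response time: finish minus start, per request (averaged here).
response_times = [finish - start for start, finish in requests]
avg_response = sum(response_times) / len(response_times)

# 2. Throughput: completed requests per unit time.
throughput = len(requests) / window

# 3. Utilization: ratio of what is used to what is available.
utilization = busy_time / window

print(round(avg_response, 2), round(throughput, 2), round(utilization, 2))
# 0.84 1.0 0.64
```

An average alone can hide outliers, which is why service levels are often stated as percentiles (e.g., "95% of requests under 3 sec") as in the earlier example.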
Figure 15 Response-Time Components in a Client / Server Database Access Application.
5.5.2. Workload Modeling Methodology
If there is a system or service similar to a newly planned system or service, workloads are modeled based on historical data of request statistics measured by data-collecting tools such as monitors. A monitor is used to observe the performance of systems. Monitors collect performance statistics, analyze the data, and display results. Monitors are widely used for the following objectives:

1. To find the frequently used segments of the software and optimize their performance.
2. To measure the resource utilization and find the performance bottleneck.
3. To tune the system; the system parameters can be adjusted to improve the performance.
4. To characterize the workload; the results may be used for capacity planning and for creating test workloads.
5. To find model parameters, validate models, and develop inputs for models.

Monitors are classified as software monitors, hardware monitors, firmware monitors, and hybrid monitors. Hybrid monitors combine software, hardware, or firmware.
If there is no system or service similar to a newly planned system or service, the workload can be modeled by estimating the arrival process of requests and the distribution of service times or processing times for resources, which may be forecast from analysis of users' usage patterns and service requirements. Common steps in a workload modeling process include:

1. Specification of a viewpoint from which the workload is analyzed (identification of the basic components of the workload of a system)
2. Selection of the set of workload parameters that captures the most relevant characteristics of the workload
3. Observation of the system to obtain the raw performance data
4. Analysis and reduction of the performance data
5. Construction of a workload model

The basic components that compose the workload must be identified. Transactions and requests are the most common.
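A minimal sketch of deriving workload parameters from raw monitor data (steps 3 and 4 above): the arrival timestamps and service times are invented, and the parameters chosen (arrival rate, mean service demand) are one common but not the only choice.

```python
import statistics

# Raw data from a hypothetical monitor: arrival times (sec) and
# service times (sec) of transactions at one server.
arrivals = [0.0, 1.2, 1.9, 3.5, 4.1, 5.8, 7.0, 8.4]
service_times = [0.30, 0.25, 0.40, 0.35, 0.20, 0.45, 0.30, 0.25]

# Workload parameters: mean arrival rate and mean service demand.
observation_period = arrivals[-1] - arrivals[0]          # 8.4 sec
arrival_rate = (len(arrivals) - 1) / observation_period  # inter-arrivals per sec
mean_service = statistics.mean(service_times)            # sec per transaction

# Offered load: the utilization these requests would impose on the resource.
offered_load = arrival_rate * mean_service

print(round(arrival_rate, 3), mean_service, round(offered_load, 3))
# 0.833 0.3125 0.26
```

These reduced parameters are exactly what the performance models of Section 5.6 consume as workload inputs.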
5.6. Performance Evaluation
Performance models are used to evaluate the performance of a C / S system as a function of the system description and workload parameters. A performance model consists of system parameters, resource parameters, and workload parameters. Once workload models and system configurations have been obtained and a performance model has been built, the model must be examined to see how it can be used to answer the questions of interest about the system it is supposed to represent. This is the performance-evaluation process. The methods used in this process are explained below.
5.6.1. Analytical Models
Because the generation of users' requests and the service times vary stochastically, analytical modeling is normally done using queueing theory. Complex systems can be represented as networks of queues in which requests receive services from one or more groups of servers and each group of servers has its own queue. The various queues that represent a distributed C / S system are interconnected, giving rise to a network of queues, called a queueing network. Thus, the performance of C / S systems can be evaluated using queueing network models. Figure 16 shows an example of a queueing network model for a Web server, where the client and the server are connected through a client-side LAN, a WAN, and a server-side LAN.
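As a simple instance of the queueing-theory approach, a single M/M/1 queue (one service center of a network such as Figure 16, taken in isolation) has closed-form results; the arrival and service rates below are assumed values, not measurements.

```python
# M/M/1 queue: Poisson arrivals at rate lam, exponential service at rate mu.
lam = 40.0   # requests/sec arriving at the server (assumed)
mu = 50.0    # requests/sec the server can process (assumed)

rho = lam / mu                      # utilization
mean_response = 1.0 / (mu - lam)    # mean response time (sec)
mean_in_system = lam / (mu - lam)   # mean number of requests in the system

print(rho, mean_response, mean_in_system)   # 0.8 0.1 4.0
```

Note how nonlinear the behavior is: at 80% utilization the mean response time (0.1 sec) is already five times the bare service time (1/mu = 0.02 sec), and it grows without bound as lam approaches mu. A full queueing network extends this analysis across the interconnected LAN, WAN, and server queues.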
5.6.2. Simulation
Simulation models are computer programs that mimic the behavior of a system as transactions flow through the various simulated resources. Simulation involves actually building a software model of
Figure 16 An Example of a Queueing Network Model for a Web Server.
Figure 17 System Management Items.
performance. Real-time and historical statistical information about traffic volume, resource usage, and congestion is also included in this area.
5. Accounting management: This involves gathering data on resource utilization, setting usage shares, and generating charging and usage reports.
6.1.2. System Management Architecture
The items that are managed by system management can be classified into three layers: the physical layer, the operating system and network protocol layer, and the application layer, as shown in Figure 17.

1. The physical layer includes client devices, server devices, and network elements, including LANs, WANs, computing platforms, and systems and applications software.
2. The operating system and network protocol layer includes the items for managing communication protocols to ensure interoperability between units. In recent years, the adoption of the TCP / IP standard protocol, on which the Internet is based, has been increasing. In a system based on the TCP / IP protocol stack, SNMP can be used for system management.
3. The application layer includes the items for managing system resources, such as CPU capacity and memory.

The system management architecture consists of the following components, which may be either physical or logical, depending on the context in which they are used:
- A network management station (NMS) is a centralized workstation or computer that collects data from agents over a network, analyzes the data, and displays information in graphical form.
- A managed object is a logical representation of an element of hardware or software that the management system accesses for the purpose of monitoring and control.
- An agent is a piece of software within or associated with a managed object that collects and stores information, responds to network management station requests, and generates incidental messages.
- A manager is software contained within a computer or workstation that controls the managed objects. It interacts with agents according to rules specified within the management protocol.
- A management information base (MIB) is a database containing information of use to network management, including information that reflects the configuration and behavior of nodes and parameters that can be used to control their operation.
6.2. Network Management Protocol
An essential function in achieving the goal of network management is acquiring information about the network. A standardized set of network management protocols has been developed to help extract the necessary information from all network elements. The Simple Network Management Protocol (SNMP) defines operations such as:

GetBulk: a command issued by a manager, by which an agent can return as many successor variables as will fit in a message
Set: a request issued by a manager to modify the value of a managed object
Trap: a notification issued from an agent in the managed system to a manager that some unusual event has occurred
Inform: a command sent by a manager to other managers, by which managers can exchange management information

In this case, the managed system is a node such as a workstation, personal computer, or router. HP's OpenView and Sun Microsystems' SunNet Manager are well-known commercial SNMP managers. System management functions are easily decomposed into many separate functions or objects that can be distributed over the network. It is a natural idea to connect those objects using a CORBA ORB for interprocess communications. CORBA provides a modern and natural protocol for representing managed objects, defining their services, and invoking their methods via an ORB. Tivoli Management Environment (TME) is a CORBA-based system-management framework that is rapidly being adopted across the distributed UNIX market.
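The manager-agent interaction behind these operations can be illustrated with a small sketch. This is a hypothetical toy, not a real SNMP implementation; the Agent class, the OIDs, and the method names are invented for illustration only:

```python
# Toy illustration of SNMP-style manager/agent operations (Get, GetBulk,
# Set, Trap). All names here are hypothetical, not a real SNMP stack.

class Agent:
    """Holds a MIB as a flat dict of OID -> value and answers manager requests."""
    def __init__(self, mib):
        self.mib = dict(mib)
        self.trap_sink = None          # manager callback for Trap notifications

    def get(self, oid):
        return self.mib[oid]

    def get_bulk(self, start_oid, max_items):
        # Return as many successor variables (in OID order) as requested,
        # mimicking GetBulk's "as many as will fit in a message".
        oids = sorted(o for o in self.mib if o > start_oid)
        return [(o, self.mib[o]) for o in oids[:max_items]]

    def set(self, oid, value):
        self.mib[oid] = value

    def raise_trap(self, event):
        if self.trap_sink:
            self.trap_sink(event)      # unsolicited notification to the manager

agent = Agent({"1.3.6.1.1": "router-7", "1.3.6.1.2": 42, "1.3.6.1.3": "up"})
events = []
agent.trap_sink = events.append        # the "manager" registers for traps
agent.set("1.3.6.1.3", "down")         # Set: modify a managed object's value
agent.raise_trap("link down")          # Trap: notify the manager of the event
```

A real agent would, of course, serialize these exchanges into SNMP PDUs over UDP; the sketch only shows the division of labor between manager and agent.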
6.3. Security Management
C / S systems introduce new security threats beyond those in traditional host-centric systems. In a C / S system, it is more difficult to define the perimeter, the boundary between what you are protecting and the outside world. From the viewpoint of distributed systems, the problems are compounded by the need to protect information during communication and by the need for the individual components to work together. The network between clients and servers is vulnerable to eavesdropping crackers, who can sniff the network to obtain user IDs and passwords, read confidential data, or modify information. In addition, getting all of the individual components (including human beings) of the system to work as a single unit requires some degree of trust. To manage security in a C / S system, it is necessary to understand what threats or attacks the system is subject to. A threat is any circumstance or event with the potential to cause harm to a system. A system's security policy identifies the threats that are deemed to be important and dictates the measures to be taken to protect the system.
6.3.1. Threats
Threats can be categorized into four different types:
1. Disclosure or information leakage: Information is disclosed or revealed to an unauthorized person or process. This involves direct attacks such as eavesdropping or wiretapping or more subtle attacks such as traffic analysis.
2. Integrity violation: The consistency of data is compromised through any unauthorized change to information stored on a computer system or in transit between computer systems.
3. Denial of service: Legitimate access to information or computer resources is intentionally blocked as a result of malicious action taken by another user.
4. Illegal use: A resource is used by an unauthorized person or process or in an unauthorized way.
6.3.2. Security Services
In the computer communications context, the main security measures are known as security services. There are some generic security services that would apply to a C / S system:

Authentication: This involves determining that a request originates with a particular person or process and that it is an authentic, nonmodified request.
Access control: This is the ability to limit and control the access to information and network resources by or for the target system.
Confidentiality: This ensures that the information in a computer system and transmitted information are accessible for reading only by authorized persons or processes.
Data integrity: This ensures that only authorized persons or processes are able to modify data in a computer system and transmitted information.
Nonrepudiation: This ensures that neither the sender nor the receiver of a message is able to deny that the data exchange occurred.
e second part to the server along with the encryption [IDc, T]K of IDc and T using the session key K, which is decrypted from the first part.
4. On receipt of this message, the server decrypts the first part, [T, L, K, IDc]Ks, originally encrypted by the authentication server using Ks, and in so doing recovers T, K, and IDc. Then the server confirms that IDc and T are consistent in the two halves of the message. If they are consistent, the server replies with a message [T+1]K that encrypts T+1 using the session key K.
5. Now the client and the server can communicate with each other using the shared session key K.

6.3.3.3. Message Integrity Protocols
There are two typical ways to ensure the integrity of a message. One uses a public-key cryptosystem such as RSA to produce a digital signature, and the other uses both a message digest such as MD5 and a public-key cryptosystem to produce a digital
Figure 19 Third-Party Authentication in Kerberos. (Message flow: (1) IDc, IDs; (2) [T, L, K, IDs]Kc + [T, L, K, IDc]Ks; (3) [T, L, K, IDc]Ks + [IDc, T]K; (4) [T+1]K.)
signature. In the latter type, a hash function is used to generate a message digest from the message content requiring protection. The sender encrypts the message digest using the public-key cryptosystem in the authentication mode; the encryption key is the private key of the sender. The encrypted message digest is sent as an appendix along with the plaintext message. The receiver decrypts the appendix using the corresponding decryption key (the public key of the sender) and compares it with the message digest that is computed from the received message by the same hash function. If the two are the same, then the receiver is assured that the sender knew the encryption key and that the message contents were not changed en route.

6.3.3.4. Access Control
Access control contributes to achieving the security goals of confidentiality, integrity, and legitimate use. The general model for access control assumes a set of active entities, called subjects, that attempt to access members of a set of resources, called objects. The access-control model is based on the access control matrix, in which rows correspond to subjects (users) and columns correspond to objects (targets). Each matrix entry states the access actions (e.g., read, write, and execute) that the subject may perform on the object. The access control matrix is implemented by either:

Capability list: a row-wise implementation, effectively a ticket that authorizes the holder (subject) to access specified objects with specified actions
Access control list (ACL): a column-wise implementation, also an attribute of an object stating which subjects can invoke which actions on it

6.3.3.5. Web Security Protocols: SSL and S-HTTP
As the Web became popular and commercial enterprises began to use the Internet, it became obvious that some security services such as integrity and authentication are necessary for transactions on the Web. There are two widely used protocols to solve this problem: secure socket layer (SSL) and secure HTTP (S-HTTP). SSL is a general-purpose protocol that sits between the application layer and the transport layer. The security services offered by SSL are authentication of the server and the client and message confidentiality and integrity. The biggest advantage of SSL is that it operates independently of application-layer protocols. HTTP can also operate on top of SSL, and it is then often denoted HTTPS. Transport Layer Security (TLS) is an Internet standard version of SSL and is now in the midst of the IETF standardization process. Secure HTTP is an application-layer protocol entirely compatible with HTTP and contains security extensions that provide client authentication, message confidentiality and integrity, and nonrepudiation of origin.

6.3.3.6. Firewall
Because the Internet is so open, security is a critical factor in the establishment and acceptance of commercial applications on the Web. For example, customers using an Internet
Figure 20 Internet Banking System. (Components shown include the Internet, a private network, Bank B's server (WWW + application), and a balance database.)
Step 3: The entered user ID and password are sent back to the server via the Internet by the secured communication protocol HTTPS. The server receives that information and checks information on the database that accumulates information about the client and other various data. If the user ID and password correspond to those already registered, the customer is permitted to use the service.
Step 4: The customer selects fund transfer from the menu displayed on the Web page.
Step 5: The server receives the above request, searches the database, and retrieves the customer's account information.
Step 6: After confirming that there is enough money for payment in the customer's account, the server asks another server in bank B for processing of the deposit. In this case, the server of bank A becomes the client and the server of bank B becomes the server.
Step 7: After the server of bank A confirms that the processing was completed normally in the server of bank B, it updates (withdraws) the account information of the customer in the database. The acknowledgement that the request for fund transfer has been successfully completed is also displayed on the customer's Web page.
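The HTTPS channel used in Step 3 relies on SSL/TLS as described in Section 6.3.3.5. In Python, the client-side setup that provides the server-authentication, confidentiality, and integrity services can be sketched as follows; the host name is hypothetical, and no network connection is opened here:

```python
# Sketch: client-side SSL/TLS setup such as a banking client would use.
import http.client
import ssl

# create_default_context() turns on certificate verification (server
# authentication) and host-name checking; the TLS record layer then provides
# confidentiality (encryption) and integrity (MACs) for the HTTP traffic.
ctx = ssl.create_default_context()

# A connection object for HTTPS; bank-a.example is a hypothetical host, and
# no traffic is sent until a request is actually issued on the connection.
conn = http.client.HTTPSConnection("bank-a.example", context=ctx)
```

Because SSL sits below the application layer, the banking application code issues ordinary HTTP requests on this connection and the security services are applied transparently.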
ADDITIONAL READING
Crowcroft, J., Open Distributed Systems, UCL Press, London, 1996.
Davis, T., Ed., Securing Client / Server Computer Networks, McGraw-Hill, New York, 1996.
Edwards, J., 3-Tier Client / Server at Work, Rev. Ed., John Wiley & Sons, New York, 1999.
Menascé, D. A., Almeida, V. A. F., and Dowdy, L. W., Capacity Planning and Performance Modeling: From Mainframes to Client-Server Systems, Prentice Hall, Englewood Cliffs, NJ, 1994.
Orfali, R., Harkey, D., and Edwards, J., Client / Server Survival Guide, 3rd Ed., John Wiley & Sons, New York, 1999.
Renaud, P. E., Introduction to Client / Server Systems, John Wiley & Sons, New York, 1993.
Vaughn, L. T., Client / Server System Design and Implementation, McGraw-Hill, New York, 1994.
advanced, hospitals actually became centers for treating patients. They also became resources for community physicians, enabling them to share high-cost equipment and facilities that they could not afford by themselves. Some hospitals are government-owned, including federal, state, county, and city hospitals. Federal hospitals include Army, Navy, Air Force, and Veterans Administration hospitals. Nongovernment hospitals are either not-for-profit or for-profit. The not-for-profit hospitals are owned by church-related groups or communities and constitute the bulk of the hospitals in the country. Investor-owned for-profit hospitals are mostly owned by chains such as Hospital Corporation of America (HCA) and Humana.
1.2. Government Regulations
A number of federal, state, county, and city agencies regulate hospitals. These regulations cover various elements of a hospital, including physical facilities, medical staff, personnel, medical records, and safety of patients. The Hill-Burton law of 1948 provided matching funds to towns without community hospitals. This resulted in the development of a large number of small hospitals in the 1950s and 1960s. The enactment of Medicare legislation for the elderly and the Medicaid program for the poor in 1965 resulted in rapid growth in hospitals. Because hospitals were paid on the basis of costs, much of the cost of adding beds and new expensive equipment could be recovered. Many states developed certificate-of-need programs that required hospitals planning to expand or build a new facility to obtain approval from a state agency (Mahon 1978). To control expansion and associated costs, the National Health Planning and Resources Development Act was enacted in 1974 to require hospitals that participated in Medicare or Medicaid to obtain planning approval for capital expenditures over $100,000. The result was a huge bureaucracy at local, state, and national levels to implement the law. Most of the states have revised or drastically reduced the scope of the law. Many states have changed the review limit to $1 million. In other states, only the construction of new hospitals or new beds is reviewed (Steinwelds and Sloan 1981).
1.3. Health Care Financing
Based on the method of payment for services, patients can be classified into three categories. The first category includes patients who pay for services out of their own pockets. The second category includes patients covered by private insurance companies. The payment for services for patients in the last category is made by one of the government programs, such as Medicare. A typical hospital receives approximately 35% of its revenue from Medicare patients, with some inner-city hospitals receiving as much as 50%. A federal agency called the Health Care Financing Administration (HCFA) manages this program. Medicare formerly paid hospitals on the basis of cost for the services rendered to Medicare patients. Under this system of payment, hospitals had no incentive to control costs and deliver the care efficiently. To control the rising costs, a variety of methods were used, including cost-increase ceilings, but without much success. In 1983, a new means of payment was introduced, known as the diagnostic related group (DRG) payment system. Under this system, hospital cases are divided into 487 different classes based on diagnosis, age, complications, and the like. When a patient receives care, one of these DRGs is assigned to the patient and the hospital receives a predetermined amount for that DRG irrespective of the actual cost incurred in delivering the services. This system of payment encouraged the hospitals to deliver care efficiently at lower cost. Many of the commercial insurance companies such as Blue Cross and Blue Shield have also adopted this method of payment for the hospitals.
1.4. Emerging Trends
During the past decade, a few key trends have emerged. A shift to managed health care from traditional indemnity insurance is expected to continue. This has resulted in the growth of health maintenance organizations (HMOs) and preferred provider organizations (PPOs), which now account for as much as 60-70% of the private insurance in some markets. Employers have also been changing health care benefits for their employees, increasingly going to so-called cafeteria plans, where employers allocate a fixed amount of benefit dollars to employees and allow them to allocate these dollars among premiums for various services. This has resulted in increased out-of-pocket costs and copayments for health care for the employees. Medicare has reduced payments for DRGs and medical education to the hospitals. This has put many health care systems in serious financial trouble. The increase in payments is expected to stay well below the cost increases in the health care industry. These trends are expected to continue in the near future (Sahney et al. 1986).
Since the advent of the DRG system of payment, the average length of stay in hospitals has dropped continually, from over 10 days in the 1970s to below 5 days. Hospital admission rates have also dropped because many of the procedures done on an inpatient basis are now being performed on an ambulatory care basis. Inpatient admissions and length of stay are expected to continue to decline, and ambulatory care is expected to continue to grow over the next decade.
and health systems, while others work as consultants in health care. The industrial engineers working in health care systems are usually referred to as management engineers or operations analysts. Smalley (1982) gives a detailed history of the development of the use of industrial engineering in hospitals. Industrial engineers gradually realized that many industrial engineering techniques initially applied to manufacturing / production systems were equally applicable in service systems such as health care systems. Almost all of the industrial engineering tools and techniques have been applied to health care systems. In this chapter, the application of only some of these techniques to health care systems will be discussed. From the examples presented here, readers should be able to appreciate the application of other techniques to health care systems. The application of the following techniques in health care systems is discussed here:
1. Methods improvement and work simplification
2. Staffing analysis
3. Scheduling
4. Queuing and simulation
to perform the constant tasks and the variable tasks at a given frequency level associated with a given demand for services. One disadvantage of using the common work-measurement techniques is that the process is very cumbersome and time-consuming. Moreover, any time the method changes, the work content of various tasks has to be measured again. Another alternative may be to use one of the standard-staffing methodologies available in the health care industry. These methodologies describe the typical tasks that are performed in a particular hospital department and assign a standard time to each task. The Resource Monitoring System (RMS), developed by the Hospital Association of New York State, is an example of one such system. The time available per full-time equivalent (FTE) is also calculated. If the department is open 8 am to 5 pm with a one-hour lunch break and if two 15-minute breaks are allowed during the shift, the time available per FTE per day will be 7.5 hours. The actual required FTE depends on demand variability, coverage issues, scheduling constraints, skill level requirements, and similar factors. One approach is to determine the efficiency level (a number less than one) that could be expected because of these factors. Efficiency is defined as the ratio of the actual hours of work done to the actual number of hours used to do the work, including idle time, if any. The FTE needed is then determined using the following equation:

FTE needed = (total number of hours of work to be performed per day) / (time available per FTE per day x efficiency level)
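The staffing equation can be computed directly; the workload and efficiency figures below are hypothetical:

```python
def fte_needed(work_hours_per_day, hours_per_fte_per_day, efficiency):
    """FTE needed = total work hours per day / (available hours per FTE x efficiency)."""
    return work_hours_per_day / (hours_per_fte_per_day * efficiency)

# From the text: an 8 am-5 pm shift minus a one-hour lunch and two 15-minute
# breaks leaves 7.5 available hours per FTE per day. With a hypothetical
# 30 hours of measured work per day at an expected efficiency of 0.80:
staff = fte_needed(30.0, 7.5, 0.80)      # 30 / (7.5 * 0.8) = 5.0 FTEs
```

Rerunning the calculation across a range of workloads and efficiency levels is exactly the sensitivity analysis suggested in the case study below.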
4.2. A Case Study in the Ambulatory Surgery Center
An ambulatory surgery center can be defined as a specialized facility organized to perform surgical cases in an ambulatory setting. Ambulatory surgery centers are normally staffed by registered nurses (RNs), who provide care in preoperative, operating, and recovery rooms. A staff of nurse anesthetists (CRNAs) works in the operating rooms and also performs patient assessments in the preoperative area before the patient enters surgery. Nurses' aides clean and package surgical instruments and assist with room turnover between cases. Room turnover involves cleaning and restocking the operating room between cases. The patient registration and all the initial laboratory work are done prior to the day of the surgery. On the day of surgery, the patient is asked to arrive one hour prior to the scheduled surgery time. After checking in with the receptionist, the patient is sent to the preoperative area for assessment by CRNAs. After the surgery, the patient stays in the recovery room until stable enough to be sent home. The objective of the study was to determine the staffing requirement for each skill level, including RNs, CRNAs, and nurses' aides. As stated above, all the constant and variable tasks were identified for each skill level. The time required to perform each task was determined using a combination of approaches, including direct observation and Delphi consensus-building methodology. The calculations for determining staffing requirements for nurses' aides are shown in Table 1. The actual efficiency should be computed and monitored each week. If there are definite upward or downward trends in efficiency, adjustments in staffing levels should be made. The work level at which a department is staffed is a critical parameter. One approach could be to perform a sensitivity analysis and compute the FTE requirement at various levels of case activity and then decide on the appropriate staffing level.
5. APPLICATION OF SCHEDULING METHODOLOGIES
5.1. Introduction
In health care systems, the term scheduling is used in a number of different ways. It can refer to the way in which appointments are scheduled for patient visits, testing procedures, surgical procedures, and the like. This form of scheduling is called work scheduling because patients represent work. In certain departments, it may be possible to move the workload from peak periods to periods of low demand to balance the workload. An example is an outpatient clinic where only a given number of appointment slots are made available. The number of appointment slots in a session determines the maximum number of patients that can be seen during that session, thereby restricting the workload to a defined level. In other departments, such as an emergency room, the patients or workload cannot be scheduled. Another use of this term is in the scheduling of personnel in a department based on the varying demand for services during the various hours of the day and days of the week, while providing equitable scheduling for each individual over weekends and holidays. It must be noted that in the inpatient setting in health care, certain staff, such as nursing staff, need to be scheduled for work even over holidays. In the scheduling of personnel, each individual is given a schedule indicating what days of the week and what hours of the day he or she would be working over a period.
5.2.1. Appointment Scheduling in a Primary Care Clinic
Most people have a specific physician whom they see when they have a health-related problem. Individuals belonging to an HMO (health maintenance organization) select a primary care physician
who takes care of all of their health care needs. For infants and young children, this would be a pediatrician, and for adults and teenagers, an internist or a family practitioner. Appointment scheduling or work scheduling in a primary care clinic (internal medicine) is discussed here. An internal medicine clinic had been struggling with the appointment scheduling issue for a period of time. In particular, they wanted to determine the number of slots to be provided for physical exams, same-day appointments, and return appointments. The first step was to estimate the total demand for physical exams and for other appointments in the clinic. Data on the average number of visits per person per year to the internal medicine clinic were available from past history. Multiplying this number by the total number of patients seen in the clinic yielded an estimate of the total number of visits to be handled in the clinic. The number of visits to the clinic varied by month. For example, clinic visits were higher during the flu season than in the summer months. Past data helped in the estimation of the number of visits to be handled each month. It was determined that the number of visits varied by day of the week, too. Mondays were the busiest and Fridays the least busy. The number of visits to be handled each day of the month was estimated based on this information. Staff scheduling was done to meet this demand. The total number of appointment slots was divided among physical exams, same-day visits, and return visits based on historical percentages. Other issues such as no-show rates and overbooking were also taken into account in appointment scheduling.
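The demand-estimation arithmetic described above can be sketched as follows; the panel size, seasonal weights, and visit mix are hypothetical, not from the clinic in the case study:

```python
# Sketch of the appointment-slot estimate: annual demand is scaled down by a
# month weight and a day-of-week weight, then split by historical visit mix.
# All numbers below are hypothetical.

def estimate_daily_slots(panel_size, visits_per_person_per_year,
                         month_weight, day_weight, mix):
    annual = panel_size * visits_per_person_per_year
    monthly = annual * month_weight        # e.g., flu-season months weigh more
    daily = monthly * day_weight           # e.g., Mondays weigh more than Fridays
    return {visit_type: round(daily * frac) for visit_type, frac in mix.items()}

slots = estimate_daily_slots(
    panel_size=10000, visits_per_person_per_year=2.4,
    month_weight=0.10,                     # hypothetical: 10% of annual visits this month
    day_weight=0.25,                       # hypothetical: 25% of the month's visits on Mondays
    mix={"physical": 0.2, "same-day": 0.3, "return": 0.5})
```

In practice the weights would come from the past monthly and day-of-week data mentioned in the text, and the result would then be adjusted for no-shows and overbooking.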
5.3. Personnel Scheduling
Once the work has been scheduled in areas where work scheduling is possible and the staffing requirements have been determined based upon the scheduled work, the next step is to determine which individuals will work which hours and on what days. This step is called personnel scheduling. In other areas of the hospital, such as the emergency room, where work cannot be scheduled, staffing requirements are determined using historical demand patterns by day of the week and hour of the day. The next step again is to determine the schedule for each staff member. The personnel schedule assigns each staff member to a specific pattern of workdays and off days. There are two types of scheduling patterns: cyclical and noncyclical. In cyclical patterns, the work pattern for each staff member is repeated after a given number of weeks. The advantage of this type of schedule is that staff members know their schedules, which are expected to continue until there is a need for change due to a significant change in the workload. The disadvantage is the inflexibility of the schedule in accommodating demand fluctuations and the individual needs of workers. In noncyclical schedules, a new schedule is generated for each scheduling period (usually two to four weeks) and is based on expected demand and available staff. This approach provides flexibility to adjust the staffing levels to expected demand. It can also better accommodate requests from workers for days off during the upcoming scheduling period. But developing a new schedule every two to four weeks is very time consuming. A department may allow working five 8-hour days, four 10-hour days, or three 12-hour shifts plus one 4-hour day. The personnel schedule has to be able to accommodate these various patterns. The schedule must also provide equitable scheduling for each individual when coverage is needed over weekends and holidays. Personnel schedules are developed by using either a heuristic, trial-and-error approach or some optimization technique. Several computerized systems are available using both types of approaches. Warner (1974) developed a computer-aided system for nurse scheduling that maximizes an expression representing the quality of the schedule subject to a constraint that minimum coverage be met. Sitompul and Randhawa (1990) give a detailed review and bibliography of the various nurse-scheduling models that have been developed.
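A cyclical pattern can be sketched as a repeating rotation in which each staff member is offset through the same cycle; the three-week pattern below is hypothetical:

```python
# Toy cyclical schedule: every staff member cycles through the same weekly
# patterns, offset so that weekend duty rotates equitably. 'W' = working day,
# '-' = day off; each string runs Monday through Sunday. Hypothetical patterns.
CYCLE = ["WWWWW--",   # week type 1: weekdays only
         "WWWW--W",   # week type 2: Sunday coverage
         "WW--WWW"]   # week type 3: Saturday and Sunday coverage

def pattern_for(staff_index, week):
    """Pattern worked by one staff member in a given week of the rotation."""
    return CYCLE[(week + staff_index) % len(CYCLE)]

# With three staff offset through the cycle, all three week types are covered
# every week, and each person gets the same weekend mix over the full cycle.
coverage_week0 = [pattern_for(i, 0) for i in range(3)]
```

This shows the trade-off described in the text: the rotation is predictable and equitable but cannot absorb demand fluctuations, which is what noncyclical (regenerated) schedules and the optimization approaches of Warner (1974) address.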
6. APPLICATION OF QUEUING AND SIMULATION METHODOLOGIES
There are a number of examples of queues or waiting lines in health care systems where the customers must wait for service from one or more servers. The customers are usually the patients but could be hospital employees too. When patients arrive at the reception desk in a clinic for an appointment, they may have to wait in a queue for service. A patient calling a clinic on the phone for an appointment may be put on hold before a clinic service representative is able to answer the call. An operating room nurse may have to wait for service at the supply desk before a supply clerk is able to handle the request for some urgently needed supply. These are all examples of queuing systems where the objective may be to determine the number of servers to obtain a proper balance between the average customer waiting time and server idle time based on cost considerations. The objective may also be to ensure that only a certain percentage of customers wait more than a predetermined amount of time (for example, that no more than 5% of the patients wait in the queue for more than five minutes). In simple situations, queueing models can be applied to obtain preliminary results quickly. Application of queueing models to a situation requires that the assumptions of the model be met. These assumptions relate to the interarrival time and service time probability distributions. The simplest model assumes Poisson arrivals and exponential service times. These assumptions may be approxi-
s need the data regarding various service times and other probabilistic elements
of the model. Statistical methods have also been used to study and manage varia
bility in demand for services (Sahney 1982).
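For the simplest model mentioned above (Poisson arrivals, exponential service, a single server, i.e., M/M/1), the preliminary results follow from closed-form formulas. The clinic-desk rates below are hypothetical:

```python
# Closed-form M/M/1 results: arrival rate lam (Poisson), service rate mu
# (exponential), one server; stable only when lam < mu.
import math

def mm1_metrics(lam, mu):
    """Utilization rho and mean time in queue Wq for an M/M/1 queue."""
    rho = lam / mu                        # fraction of time the server is busy
    wq = rho / (mu - lam)                 # mean wait in queue before service
    return rho, wq

def prob_wait_exceeds(lam, mu, t):
    """P(queue wait > t) = rho * exp(-(mu - lam) * t) for M/M/1."""
    rho = lam / mu
    return rho * math.exp(-(mu - lam) * t)

# Hypothetical clinic reception desk: 10 patients/hr arrive, the clerk
# serves 12/hr, so rho = 5/6 and the mean queue wait is 5/12 hr (25 min).
rho, wq = mm1_metrics(10, 12)
p5 = prob_wait_exceeds(10, 12, 5 / 60)   # chance a patient queues > 5 minutes
```

The last quantity speaks directly to service targets such as "no more than 5% of patients wait more than five minutes"; when the Poisson/exponential assumptions do not hold, simulation is the usual fallback.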
7.1. Use of Regression Models
Regression models can be used to predict important response variables as a function of variables that could be easily measured or tracked. Kachhal and Schramm (1995) used regression modeling to predict the number of primary care visits per year for a patient based upon the patient's age and sex and a measure of health status. They used ambulatory diagnostic groups (ADGs) encountered by a patient during a year as a measure of the health status of the patient. All the independent variables were treated as 0-1 variables. The sex variable was 0 for males and 1 for females. The age of the patient was included in the model by creating ten 0-1 variables based on specific age groups. The model was able to explain 75% of the variability in the response variable. Similar models have been used to predict health care expenses from patient characteristics.
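A regression on 0-1 predictors of this kind can be sketched as follows. The data and coefficients are fabricated for illustration, and the design is reduced to one sex dummy, one age-group dummy, and one ADG dummy rather than the full Kachhal and Schramm model; with noiseless data, solving the normal equations recovers the coefficients exactly:

```python
# Least-squares fit on 0-1 (dummy) predictors via the normal equations,
# solved with Gaussian elimination. Data and coefficients are fabricated.

def fit_ols(X, y):
    """Solve (X'X) b = X'y by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for i in range(k):                       # forward elimination
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * ai for a, ai in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):             # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Rows: [intercept, sex (1 = female), one 0-1 age-group dummy, one ADG dummy]
X = [[1, s, a, g] for s in (0, 1) for a in (0, 1) for g in (0, 1)]
y = [2.0 + 0.8 * s + 1.5 * a + 2.2 * g for _, s, a, g in X]
beta = fit_ols(X, y)                         # recovers [2.0, 0.8, 1.5, 2.2]
```

With real visit data the fit would be noisy and one would also examine R-squared, which is the 75%-of-variability figure reported in the text.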
7.2. Determination of Sample Size
Industrial engineers frequently encounter situations in health care systems where they need to determine the appropriate amount of data needed to estimate the parameters of interest to a desired level of accuracy. In health care systems, work-sampling studies are frequently done to answer questions such as "What percentage of a nurse's time is spent on direct patient care?" or "What fraction of a clinic service representative's time is spent on various activities such as answering phones, checking in the patients, looking for medical records, and the like?" In work-sampling studies, the person being studied is observed a predetermined number of times at random time intervals. For each observation, the task being performed is recorded. The estimate of the fraction of time spent on an activity is obtained by dividing the number of occurrences of that activity by the total number of observations. The number of observations needed for a work-sampling study can be calculated from the following formula:

N = Z^2 p(1 - p) / I^2

where N is the number of observations needed, Z is the standard normal value for the desired confidence level, p is the estimated proportion of time spent on the activity, and I is the desired accuracy of the estimate.
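The sample-size formula can be computed directly; the 30% activity estimate and 3-percentage-point accuracy below are hypothetical:

```python
# Work-sampling sample size: N = Z^2 * p * (1 - p) / I^2, rounded up to a
# whole number of observations.
import math

def work_sampling_n(p, accuracy, z=1.96):
    """Observations needed to estimate proportion p to +/- accuracy."""
    return math.ceil(z * z * p * (1 - p) / accuracy ** 2)

# Hypothetical study: an activity believed to take about 30% of a nurse's
# time, estimated to within +/- 3 percentage points at 95% confidence:
n = work_sampling_n(0.30, 0.03)          # 897 observations
```

Note that the requirement is largest at p = 0.5, so when no prior estimate of p exists, using 0.5 gives a conservative sample size.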
ng, building models to project revenues and costs, analyzing various reimbursement scenarios, and the like. The Journal of the Society for Health Systems (1991) published an issue with a focus on decision support systems that presents a good overview of DSS in health care. The other type of DSS used in health care is the clinical decision support system (CDSS), which uses a set of clinical findings such as signs, symptoms, laboratory data, and past history to assist the care provider by producing a ranked list of possible diagnoses. Some CDSS act as alerting systems by triggering an alert when an abnormal condition is recognized. Another set of these systems act as critiquing systems by responding to an order for a medical intervention by a care provider. Several systems that detect drug allergies and drug interactions fall into this category. Another use of CDSS is in consulting systems that react to a request for assistance from a physician or a nurse to provide suggestions about diagnoses or concerning what steps to take next. The application of CDSS in implementing clinical practice guidelines has been a popular use of these systems. Doller (1999) provides the current status of the use of these systems as medical expert systems in the health care industry. The author predicts that the major use of CDSS in the future will be in implementing clinical
guidelines and allowing nonphysicians to handle routine care while directing the attention of true medical experts to cases that are not routine. While some industrial engineers may be involved in the development of home-grown DSS or CDSS, the role of other industrial engineers lies mainly in the evaluation and cost-benefit analysis of the standard systems available in the market.
11. APPLICATION OF OTHER INDUSTRIAL ENGINEERING TECHNIQUES
The application of other industrial engineering techniques such as forecasting, inventory control, materials management, facilities planning, economic analysis, and cost control is also common in health care systems. Examples of many of these applications are available in the proceedings of the annual Quest for Quality and Productivity Conferences of the Society for Health Systems and the Healthcare Information and Management Systems Society Conferences. See also Journal of the Society for Health Systems (1992a) and Sahney (1993) on the use of industrial engineering tools in health care.
12. FUTURE TRENDS
Most health care systems around the country are facing serious financial problems due to reductions in Medicare and Medicaid reimbursements. Health maintenance organizations (HMOs) have also reduced payments to health systems for services due to reductions in health care premiums forced by competition from other HMOs and by employers looking for cost reductions in their own organizations. Health care institutions are looking for ways to reduce costs while maintaining or improving quality. Industrial engineers are being asked to conduct benchmarking studies to identify areas where costs are higher than the norm and need reduction. The financial problems are also leading to mergers and takeovers of financially troubled hospitals by health care systems in better financial health. Industrial engineers are being asked to work with individuals from finance on projects related to mergers and consolidation of services to evaluate the financial impact of alternative courses of action.

Many health care systems have undertaken TQM programs that include training of managers and supervisors in quality improvement tools such as flowcharts, cause-and-effect diagrams, run charts, and control charts. Quality improvement teams (QITs) consisting of employees from the departments in which the process resides are now conducting many of the studies previously conducted by industrial engineers. Industrial engineers are being asked to serve on these teams as facilitators or in a staff capacity to conduct quantitative analysis of the alternatives.

Although simulation tools have been available to health care systems for decades, it is only recently that simulation models have been increasingly used to model real systems and predict the impact of various policies and resource levels. Administrators and department managers are aware of the capabilities of these models and are requesting their use to make sound decisions. The availability of reasonably priced, user-friendly simulation packages has increased the use of simulation modeling. The same is true for advanced statistical tools: with the availability of user-friendly statistical software, more sophisticated statistical models are being built to guide decision making.

Another area gaining attention is supply chain management. As health care systems look for ways to reduce costs without affecting the quality of patient care, procurement of supplies has stood out as an area where substantial reductions can be achieved through supply chain management. Some hospitals are having some of their supplies shipped directly from manufacturers to the user departments, bypassing all the intermediate suppliers and distributors. Other hospitals have eliminated their warehouses and distribution centers, giving the responsibility for daily deliveries and restocking of departments to external vendors. Industrial engineers are evaluating the alternative courses of action in supply chain management. Industrial engineers are also working for consulting companies contracted by health care systems to help them out of financial problems.

The use of sophisticated industrial engineering tools in health care still lags behind that in the manufacturing industry. Over the next decade, it is expected that the use of these tools will become more prevalent in health care.
REFERENCES
Benneyan, J. (1997), An Introduction to Using Simulation in Health Care: Patient Wait Case Study, Journal for the Society for Health Systems, Vol. 5, No. 3, pp. 1-16.
Bressan, C., Facchin, P., and Romanin Jacur, G. (1988), A Generalized Model to Simulate Urgent Hospital Departments, in Proceedings of the IMACS Symposium on System Modeling and Simulation, pp. 421-425.
Mahon, J. J., Ed. (1978), Hospitals, Mahon's Industry Guides for Accountants and Auditors, Guide 7, Warren, Gorham & Lamont, Boston.
McGuire, F. (1997), Using Simulation to Reduce the Length of Stay in Emergency Departments, Journal of the Society for Health Systems, Vol. 5, No. 3, pp. 81-90.
Nock, A. J. (1913), Frank Gilbreth's Great Plan to Introduce Time-Study into Surgery, American Magazine, Vol. 75, No. 3, pp. 48-50.
Ortiz, A., and Etter, G. (1990), Simulation Modeling vs. Queuing Theory, in Proceedings of the 1990 Annual Healthcare Information and Management Systems Conference, AHA, pp. 349-357.
Roberts, S. D., and English, W. L., Eds. (1981), Survey of the Application of Simulation to Healthcare, Simulation Series, Vol. 10, No. 1, Society for Computer Simulation, La Jolla, CA.
Sahney, V. (1982), Managing Variability in Demand: A Strategy for Productivity Improvement in Healthcare, Health Care Management Review, Vol. 7, No. 2, pp. 37-41.
Sahney, V. (1993), Evolution of Hospital Industrial Engineering: From Scientific Management to Total Quality Management, Journal of the Society for Health Systems, Vol. 4, No. 1, pp. 3-17.
Sahney, V., and Warden, G. (1989), The Role of Management in Productivity and Performance Management, in Productivity and Performance Management in Health Care Institutions, AHA, pp. 29-44.
Sahney, V., and Warden, G. (1991), The Quest for Quality and Productivity in Health Services, Frontiers of Health Services Management, Vol. 7, No. 4, pp. 2-40.
Sahney, V. K., Peters, D. S., and Nelson, S. R. (1986), Health Care Delivery System: Current Trends and Prospects for the Future, Henry Ford Hospital Medical Journal, Vol. 34, No. 4, pp. 227-232.
Sahney, V., Dutkewych, J., and Schramm, W. (1989), Quality Improvement Process: The Foundation for Excellence in Health Care, Journal of the Society for Health Systems, Vol. 1, No. 1, pp. 17-29.
Saunders, C. E., Makens, P. K., and Leblanc, L. J. (1989), Modeling Emergency Department Operations Using Advanced Computer Simulation Systems, Annals of Emergency Medicine, Vol. 18, pp. 134-140.
Schroyer, D. (1997), Simulation Supports Ambulatory Surgery Facility Design Decisions, in Proceedings of the 1997 Annual Healthcare Information and Management Systems Conference, AHA, pp. 95-108.
Sitompul, D., and Randhawa, S. (1990), Nurse Scheduling Models: A State-of-the-Art Review, Journal of the Society for Health Systems, Vol. 2, No. 1, pp. 62-72.
Smalley, H. E. (1982), Hospital Management Engineering, Prentice Hall, Englewood Cliffs, NJ.
Steinwelds, B., and Sloan, F. (1981), Regulatory Approvals to Hospital Cost Containment: A Synthesis of the Empirical Evidence, in A New Approach to the Economics of Health Care, M. Olsin, Ed., American Enterprise Institute, Washington, DC, pp. 274-308.
Trivedi, V. (1976), Daily Allocation of Nursing Resources, in Cost Control in Hospitals, J. R. Griffith, W. M. Hancock, and F. C. Munson, Eds., Health Administration Press, Ann Arbor, MI, pp. 202-226.
Warner, D. M. (1976), Computer-Aided System for Nurse Scheduling, in Cost Control in Hospitals, J. R. Griffith, W. M. Hancock, and F. C. Munson, Eds., Health Administration Press, Ann Arbor, MI, pp. 186-201.
Warner, M., Keller, B., and Martel, S. (1990), Automated Nurse Scheduling, Journal of the Society for Health Systems, Vol. 2, No. 2, pp. 66-80.
Weissberg, R. W. (1977), Using Interactive Graphics in Simulating the Hospital Emergency Department, in Emergency Medical Systems Analysis, T. Willemain and R. Larson, Eds., Lexington Books, Lexington, MA, pp. 119-140.
Wilt, A., and Goddin, D. (1989), Healthcare Case Study: Simulating Staffing Needs and Work Flow in an Outpatient Diagnostic Center, Industrial Engineering, May, pp. 22-26.
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
generate superior investment performance than plain vanilla portfolios of stocks and bonds. Better returns reduce the need for future cash injections to fund obligations. Unfortunately, market risks have increased as the array of available investment instruments has broadened. For example, the Mexican peso crisis in 1994, the Asian currency debacle and recession beginning in 1997, the Russian debt default and the unprecedented hedge fund bail-out coordinated by the Federal Reserve Bank in 1998, and a 30% drop in the price of technology shares early in 2000 all had major repercussions for financial markets. Where is an investor to find solace in such an unfriendly and disturbing environment?

The obvious answer to heightened complexity and uncertainty lies in utilizing financial engineering techniques to manage asset portfolios. This chapter reviews the current state of the art from a practitioner's perspective. The prime focus is on mean-variance optimization techniques, which remain the principal application tool. The key message is that while the methods employed by today's specialists are not especially onerous mathematically or computationally, there are major issues in problem formulation and structure. It is in this arena that imagination and inventiveness take center stage.
2.
THE ASSET-MANAGEMENT PROBLEM
The job of investment managers is to create portfolios of assets that maximize investment returns consistent with risk tolerance. In the past, this required simply selecting a blend of stocks, bonds, and cash that matched the client's needs. Asset managers typically recommended portfolios heavily weighted in stocks for aggressive investors desiring to build wealth. They proffered portfolios overweight in bonds and cash for conservative investors bent on wealth preservation. Aggressive stock-heavy portfolios would be expected to yield higher returns over time but with considerable fluctuations in value. Concentrated cash and bond portfolios would be less volatile, thus preserving capital, but would produce lower returns.
2.1.
Origins
Until recently, a casual rule-of-thumb approach was deemed adequate to produce reasonable performance for investors. For example, a portfolio consisting of 65% equities and 35% bonds generated returns and risk similar to a 75% equities and 25% bonds portfolio. However, following the inflation trough in the 1990s, bond returns declined and investors with low equity exposure suffered. In addition, investors ignoring alternative assets such as hedge funds and venture capital found themselves with lagging performance and greater portfolio volatility.

Studies have consistently shown that selection of the asset mix is the most important determinant of investment performance. Early influential research by Brinson et al. (1986, 1991) and a more recent update by Ibbotson and Kaplan (1999) indicate that asset allocation explains up to 90% of portfolio returns. Security selection and other factors explain the remainder. Consequently, the asset blend is the key intellectual challenge for investment managers and should receive the most attention. Traditional rules of thumb no longer work in a dynamic world with many choices and unexpected risks.
2.2.
Problem Structure and Overview
Markowitz was the first to propose an explicit quantification of the asset-allocation problem (Markowitz 1959). Three categorical inputs are required: the expected return for each asset in the portfolio, the risk or variance of each asset's return, and the correlation between asset returns. The objective is to select the optimal weights for each asset that maximize total portfolio return for a given level of portfolio risk. The set of optimum portfolios over the risk spectrum traces out what is called the efficient frontier. This approach, usually referred to as mean-variance (MV) analysis, lies at the heart of modern portfolio theory. The technique is now mainstream, regularly taught in investment strategy courses. A massive literature exists exploring the methodology, its intricacies, and variations. "Black box" MV optimization programs now reside on the desks of thousands of brokers, financial advisors, and research staff employed by major financial institutions.

The principal advantage of MV analysis is that it establishes portfolio construction as a process that explicitly incorporates risk in a probabilistic framework. In this way, the approach acknowledges that asset returns are uncertain and requires that precise estimates of uncertainty be incorporated in the problem specification. On the surface, MV analysis is not especially difficult to implement. For example, it is very easy to guess at future stock and bond returns and use historical variances and correlations to produce an optimum portfolio. It is not so simple to create a multidimensional portfolio consisting of multiple equity and fixed-income instruments combined with alternative assets such as private equity, venture capital, hedge funds, and other wonders. Sophisticated applications require a lot of groundwork, creativity, and rigor.
The assumed correlation matrix for the five assets (upper triangle):

    1.00  0.55  0.30  0.00  0.00
          1.00  0.10  0.30  0.00
                1.00  0.30  0.00
                      1.00  0.00
                            1.00
Optimum portfolios are obtained by selecting the asset weights that maximize portfolio return for a specific portfolio risk. The problem constraints are that (1) the portfolio return is a linear combination of the separate asset returns, (2) portfolio variance is a quadratic combination of weights, asset risks, and asset correlations, and (3) all weights are positive and sum to one. The results of producing MV portfolios are presented in Table 1. The corresponding efficient frontier is shown in Figure 1.

The allocations obtained are typical for analyses of this type. That is, to obtain higher portfolio returns, the investor must take on more risk. This is accomplished by weighting equities more heavily. The most conservative portfolios have both low returns and low risk. They consist primarily of cash and bonds. For moderate levels of risk, investors should blend stocks and bonds in more equal proportions. In addition, note that the most risky portfolios skew to higher weights for international equities, which are more risky than U.S. stocks.

It should be obvious that the production of MV portfolios is not extraordinarily input intensive. Efficient frontiers for five-asset portfolios require only five predicted returns, five standard deviations, and a five-by-five symmetric matrix of correlation coefficients. Yet the process yields indispensable information that allows investors to select suitable portfolios.
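The constrained maximization just described can be sketched in a few lines. The following is a minimal illustration, not the chapter's actual model: the expected returns and standard deviations are made-up figures, and the correlation matrix is patterned on the five-asset upper-triangle values shown earlier. It maximizes portfolio return subject to a risk ceiling, full investment, and no short sales.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative five-asset inputs (assumed, not from the chapter's tables)
mu = np.array([0.120, 0.110, 0.070, 0.065, 0.055])   # expected returns
sd = np.array([0.150, 0.170, 0.060, 0.080, 0.010])   # standard deviations
R = np.array([[1.00, 0.55, 0.30, 0.00, 0.00],
              [0.55, 1.00, 0.10, 0.30, 0.00],
              [0.30, 0.10, 1.00, 0.30, 0.00],
              [0.00, 0.30, 0.30, 1.00, 0.00],
              [0.00, 0.00, 0.00, 0.00, 1.00]])
cov = np.outer(sd, sd) * R                            # covariance matrix (S R S)

def efficient_portfolio(target_risk):
    """Maximize w.mu subject to w'Cov w <= risk^2, sum(w) = 1, w >= 0."""
    n = len(mu)
    res = minimize(
        lambda w: -(w @ mu),                  # maximize return
        np.full(n, 1.0 / n),                  # start from equal weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,              # long-only
        constraints=[
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "ineq", "fun": lambda w: target_risk**2 - w @ cov @ w},
        ],
    )
    return res.x

# Trace out a few points on the efficient frontier
for risk in (0.02, 0.06, 0.10, 0.14):
    w = efficient_portfolio(risk)
    print(f"risk<={risk:.0%}  return={w @ mu:.2%}  weights={np.round(w, 2)}")
```

As the risk ceiling rises, the solver shifts weight from cash and bonds toward equities, reproducing the qualitative pattern in Table 1.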
3.2.
Investor Risk Preferences
Once the efficient frontier is established, the issue of investor risk preferences must be addressed. Individuals exhibit markedly different attitudes toward risk. Some are extremely risk averse, tolerating nothing but near certainty in life. Others relish risk taking. Most are somewhere between these two extremes. Risk attitudes depend on personality, life experience, wealth, and other socioeconomic factors. By default, extremely risk-averse investors are not interested in building wealth via asset accumulation, because they are unwilling to tolerate the risk necessary to obtain high returns. The opposite is true for adventurous souls ready to gamble for large payoffs. In this regard, old-money investors
TABLE 1  Selected Optimal Mean-Variance Portfolios

                             Portfolio Allocation
Return  Risk    Sharpe     U.S.     International  U.S.    International
                Ratio(a)   Stocks   Stocks         Bonds   Bonds          Cash
12.6%   15.0%   0.84        -       100%            -       -              -
12.5%   14.0%   0.89        9%       91%            -       -              -
12.2%   13.0%   0.94       25%       75%            -       -              -
11.8%   12.0%   0.99       43%       55%            -       2%             -
11.3%   11.0%   1.03       42%       48%            -      10%             -
10.8%   10.0%   1.08       40%       40%            -      20%             -
10.3%    9.0%   1.14       37%       34%            6%     23%             -
 9.7%    8.0%   1.21       32%       29%           17%     22%             -
 9.1%    7.0%   1.30       27%       25%           19%     19%             9%
 8.5%    6.0%   1.42       23%       22%           17%     17%            22%
 7.9%    5.0%   1.59       19%       18%           14%     14%            35%
 7.4%    4.0%   1.84       16%       15%           11%     11%            48%
 6.8%    3.0%   2.26       12%       11%            8%      8%            61%
 6.2%    2.0%   3.09        8%        7%            6%      6%            74%
 5.6%    1.0%   5.59        4%        4%            3%      3%            87%

Source: Results of optimization analysis.
(a) The Sharpe ratio is the portfolio return less the risk-free Treasury rate divided by portfolio risk.
are usually more risk averse than new-money investors. They require portfolios heavily weighted in conservative assets. New-money investors are comfortable with inherently risky equities (Figure 2). Age often plays an important role. Older investors are commonly more risk averse because they are either retired or close to retirement and dissaving (Figure 3).

How does one ascertain risk tolerance to ensure that a relevant portfolio is matched to the investor's needs? There are a number of ways. One is to estimate the risk-aversion parameter based on the investor's responses to a battery of questions designed to trace out return preferences under different payoff probabilities. Some financial managers simply use personal knowledge of the investor to deduce risk tolerance. For example, if an investor with a $10 million portfolio is distraught when the portfolio's value plunges 20% in a year, he or she is not a candidate for an all-equity portfolio. Yet another approach used by some managers is to work backwards and determine the risk necessary for the investor to reach specific wealth-accumulation goals.
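Once a risk-aversion parameter has been estimated by any of these means, it can be used to score each frontier portfolio and pick the best. The sketch below assumes hypothetical (risk, return) frontier points patterned on Table 1 and an illustrative quadratic utility form U = r - lambda * sigma^2; neither is taken from the chapter.

```python
# Hypothetical efficient-frontier points as (risk, return) pairs
frontier = [(0.01, 0.056), (0.03, 0.068), (0.05, 0.079),
            (0.08, 0.097), (0.11, 0.113), (0.15, 0.126)]

def preferred_portfolio(risk_aversion):
    """Pick the frontier point maximizing U = r - risk_aversion * sigma^2."""
    return max(frontier, key=lambda p: p[1] - risk_aversion * p[0] ** 2)

print(preferred_portfolio(0.5))   # low aversion: picks the riskiest point
print(preferred_portfolio(40.0))  # high aversion: picks the safest point
```

Graphically, this corresponds to the tangency between the investor's preference curves and the frontier shown in Figure 1.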
Figure 1 The Efficient Frontier and Investor Risk Preferences. (The chart plots portfolio return against portfolio risk from 1% to 15%, showing the feasible portfolios, the efficient frontier, the investor's risk-preference curves, and the optimum portfolio at the tangency point.)
subject to   colsum w = 1,   wj >= 0

where
    w = [w1 w2 . . . wJ]T, the portfolio weights for each asset
    r = [r1 r2 . . . rJ]T, the return vector
    S = the risk matrix with diagonal standard deviations s1, s2, . . . , sJ
    R = the J x J asset correlation matrix
    lambda = the investor's risk-aversion parameter
Figure 3 The Investor's Life Cycle. (The chart traces income, saving, and spending from childhood/college through the peak earning years to retirement, moving from younger to older ages.)
Equation (1) incorporates the investor's attitude to risk via the objective function U. Equation (2) represents the portfolio return. Equation (3) is portfolio risk. Constraint (4) requires that the portfolio weights sum to 100%, while constraint (5) requires that weights be positive. This latter restriction can be dropped if short selling is allowed, but this is not usually the case for most investors. This formulation is a standard quadratic programming problem for which an analytical solution exists from the corresponding Kuhn-Tucker conditions. Different versions of the objective function are sometimes used, but the quadratic version is appealing theoretically because it allows investor preferences to be convex. The set of efficient portfolios can be produced more simply by solving

    max rp = wTr   subject to   wTSTRSw <= s^2, colsum w = 1, wj >= 0, for target risk s   (6)

or, alternatively,

    min s^2 = wTSTRSw   subject to   wTr >= rp, colsum w = 1, wj >= 0, for target return rp   (7)
A vast array of alternative specifications is also possible. Practitioners often employ additional inequality constraints on portfolio weights, limiting them to maximums, minimums, or linear combinations. For example, total equity exposure may be restricted to a percentage of the portfolio, or cash to a minimum required level. Another common variation is to add inequality constraints that force solutions close to benchmarks, which minimizes the risk of underperforming. With respect to computation, for limited numbers of assets (small J), solutions are easily obtained (although not necessarily efficiently) using standard spreadsheet optimizers. This works for the vast majority of allocation problems because most applications typically include no more than a dozen assets. More specialized optimizers are sometimes necessary when there are many assets. For example, if MV is applied to select a stock portfolio, there may be hundreds of securities used as admissible assets.
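Side constraints of the kind just described slot directly into the same quadratic program. The sketch below is illustrative only: the asset order, the 60% equity cap, the 5% cash floor, and the simplifying assumption of uncorrelated assets are all invented for the example rather than drawn from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative assets: [US stocks, intl stocks, US bonds, intl bonds, cash]
mu = np.array([0.12, 0.11, 0.07, 0.065, 0.055])
cov = np.diag([0.15, 0.17, 0.06, 0.08, 0.01]) ** 2   # assume uncorrelated assets

def constrained_portfolio(target_risk, max_equity=0.60, min_cash=0.05):
    """Maximize return at a risk ceiling, with practitioner-style limits."""
    cons = [
        {"type": "eq",  "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: target_risk**2 - w @ cov @ w},
        {"type": "ineq", "fun": lambda w: max_equity - (w[0] + w[1])},  # equity cap
        {"type": "ineq", "fun": lambda w: w[4] - min_cash},             # cash floor
    ]
    res = minimize(lambda w: -(w @ mu), np.full(5, 0.2), method="SLSQP",
                   bounds=[(0.0, 1.0)] * 5, constraints=cons)
    return res.x

w = constrained_portfolio(0.10)
print(np.round(w, 3))   # equity weights capped at 60%, cash at least 5%
```

Each extra linear restriction simply appends another constraint dictionary; the solver and problem shape are unchanged.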
5.
CAVEATS
5.1.
Shortcomings of Mean-Variance Analysis
One must be cognizant that the MV approach has several important shortcomings that limit its effectiveness. First, model solutions are sometimes very sensitive to changes in the inputs. Second, the number of assets that can be included is generally bounded; otherwise, collinearity problems can result that produce unstable allocations and extreme asset switching in the optimal portfolios. Third, the asset allocation is only as good as the forecasts of prospective returns, risk, and correlation. Inaccurate forecasts produce very poorly performing portfolios. The first two limitations can be addressed through skillful model specification. The third requires that one have superlative forecasting ability.

A common mistake committed by naive users of MV analysis is to use recent returns, risk, and correlation as predictors of the future. Portfolios produced using such linear extrapolation methods normally exhibit poor performance. Table 2 shows historical returns and risk for various asset classes over the last decade. A variety of extraneous market developments, shocks, and one-time events produced these results. Rest assured that future time paths will not closely resemble those of the 1990s. For this reason, while one cannot ignore history, extending past performance into the future is a poor technique.
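The input sensitivity mentioned above is easy to demonstrate. The toy example below (not from the chapter) uses the unconstrained mean-variance solution, with weights proportional to the inverse covariance matrix times the return vector, for two highly correlated assets: nudging one return forecast by half a percentage point swings the allocation violently.

```python
import numpy as np

# Two similar, highly correlated assets: small forecast changes cause
# large swings in the optimal (unconstrained) mean-variance weights.
sd = np.array([0.15, 0.15])
R = np.array([[1.00, 0.95],
              [0.95, 1.00]])
cov = np.outer(sd, sd) * R

def mv_weights(mu):
    w = np.linalg.solve(cov, mu)   # proportional to inverse(Cov) @ mu
    return w / w.sum()             # normalize to sum to one

print(mv_weights(np.array([0.080, 0.080])))  # equal forecasts: equal weights
print(mv_weights(np.array([0.085, 0.080])))  # tiny bump: extreme reallocation
```

With identical forecasts the split is 50/50; raising one forecast by 0.5% pushes the solution to over 100% in that asset and a short position in the other, which is exactly the instability that motivates the weight constraints discussed earlier.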
5.2.
Dangers of Extrapolating from History
As an example of the extrapolation fallacy, consider portfolio performance over the last two decades. If one had constructed an efficient portfolio in 1990 based on the 1980s history, large allocations would have been made to international equities, primarily because Japanese stocks produced the best returns in the world up to 1989. Yet in the 1990s, Japanese equities fell by more than 50% from their 1989 peak, and the best asset allocation would have been to U.S. equities. Using the 1980s history to construct MV portfolios would have produced dismal portfolio returns (Table 3).

Empirical work by Chopra and Ziemba (1993) demonstrated that the most critical aspect of constructing optimal portfolios is the return forecast. For this reason, a shortcut employed by some practitioners is to concentrate on the return forecast and use historical risk and correlation to construct optimum portfolios. This may prove satisfactory because correlations and risk are more stable than returns and are therefore more easily predicted. However, this line of attack may be ineffective if return forecasts substantially deviate from history and are combined with historical risk and correlations. In this case, the optimal allocations can skew overwhelmingly to the high-return assets. Whatever method is used to obtain forecasts, once the optimum portfolio is determined, the manager can momentarily relax and wait. Of course, actual outcomes will seldom match expectations.
[Table 2 Historical returns and risk by asset class (equities, fixed income, and alternative assets). Source: Deutsche Bank.]
No investment manager possesses perfect foresight. Errors in forecasting returns
, risk, and correlation will produce errors in portfolio composition. Obviously,
managers with the best forecasting ability will reap superior results.
5.3.
Asset Selection
If a manager does not forecast extremely well, it is still possible to produce superior investment performance via asset selection. That is, choosing an exceptional array of candidate assets can enhance portfolio returns through diversification benefits that other managers miss. For example, consider a simple portfolio of U.S. equities and bonds. Normally, managers with the best forecasts will achieve better performance than other managers investing in the same assets. But another manager, who may not forecast U.S. equity and bond returns extremely well, can outperform by allocating funds to assets such as international equities and bonds. These assets possess different returns and risks, and different correlations with each other and with U.S. assets. Their inclusion shifts the efficient frontier upward beyond that resulting when only U.S. stocks and bonds are considered.
TABLE 3  Optimum Portfolios: The 1980s vs. the 1990s

                            Asset                                    Portfolio
Period   Measure    S&P    International Stocks   Bonds   Cash    Return    Risk
1980s    Weight     15%    47%                    34%     4%      17.4%     10.5%
         Returns    18%    22%                    12%     9%
         Risk       16%    18%                    12%     0%
1990s    Weight     75%    0%                     25%     0%      15.2%     10.5%
         Returns    18%    7%                     8%      5%
         Risk       14%    18%                    4%      0%
1980s portfolio performance in 1990s:                              8.9%     10.8%

Source: Deutsche Bank.
The primary distinction between asset selection and asset allocation is that the thought processes differ. In asset selection, a manager focuses on defining the candidate universe broadly. In asset allocation, assets are typically viewed as given and the effort is on forecast accuracy. A deep knowledge of markets and investment possibilities is necessary to identify the broadest possible asset universe. A manager who incorporates new assets with enticing features has a larger information set than a manager laboring in a narrowly defined world. This is why astute investment managers are constantly searching for new assets: their goal is to gain an edge over competitors by shifting the efficient frontier outward (Figure 4). Neglecting the opportunity to employ the broadest possible asset domain is a common failure of beginners who rely on black box MV solutions they do not fully understand.
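The frontier shift from broadening the asset universe can be illustrated numerically. The sketch below uses invented figures and a crude random-search approximation of the long-only frontier rather than a formal optimizer; it compares the best attainable return at a fixed risk level before and after adding a third, low-correlation asset.

```python
import numpy as np

rng = np.random.default_rng(42)

def best_return_at_risk(mu, cov, target_risk, n_draws=50000):
    """Approximate a frontier point by sampling random long-only portfolios."""
    W = rng.dirichlet(np.ones(len(mu)), size=n_draws)        # weights sum to 1
    risks = np.sqrt(np.einsum("ij,jk,ik->i", W, cov, W))     # portfolio risks
    rets = W @ mu                                            # portfolio returns
    return rets[risks <= target_risk].max()

# Base universe: U.S. stocks and bonds (illustrative figures)
mu2 = np.array([0.11, 0.07])
sd2 = np.array([0.15, 0.06])
cov2 = np.outer(sd2, sd2) * np.array([[1.0, 0.3], [0.3, 1.0]])

# Expanded universe: add an international asset with low correlation
mu3 = np.append(mu2, 0.10)
sd3 = np.append(sd2, 0.14)
R3 = np.array([[1.0, 0.3, 0.4], [0.3, 1.0, 0.2], [0.4, 0.2, 1.0]])
cov3 = np.outer(sd3, sd3) * R3

r2 = best_return_at_risk(mu2, cov2, 0.08)
r3 = best_return_at_risk(mu3, cov3, 0.08)
print(f"2 assets: {r2:.2%}   3 assets: {r3:.2%}")   # frontier shifts upward
```

At the same 8% risk level, the expanded universe attains a higher return, which is the diversification benefit the text describes.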
6.
NEW ASSET CLASSES
A virtually endless stream of new investment products is paraded forth by financial service companies each year. Although promoted as innovative with superlative benefits, most such creations are in reality narrowly focused and high risk, or are reconfigured and redundant transformations of assets already available. Sorting through the muck and discovering innovative products that shift the efficient frontier outward is not an easy task.
6.1.
Desirable Asset Characteristics
In general, new assets that effectively shift the efficient frontier outward must possess differential return and risk profiles and have low correlation with the existing assets in the portfolio. Incorporating an asset that has similar returns and risk and is collinear with an asset already in the portfolio is duplicative; one needs only one of the two. Differential returns, risk, and correlation are not enough, however. A new asset that has a lower return and significantly higher volatility will not likely enter optimal portfolios even if it has a low correlation with existing assets. In contrast, a new asset with a higher return, lower risk, and modest correlation with existing assets will virtually always enter at least some of the optimal portfolios on the efficient frontier.

In decades past, adding real estate, commodities, and gold to stock and bond portfolios produced the desired advantages. Low returns for these assets in the 1980s and subsequent years led to their eventual purging from most portfolios by the mid-1990s. They were replaced with assets such as international stocks and bonds, emerging market securities, and high-yield debt. Today, most managers employ multiple equity classes in their portfolios to achieve asset diversification. They usually include U.S. as well as international and emerging market stocks. Some managers split the U.S. equities category into large, midsized, and small capitalization securities, or alternatively into growth and value stocks. Some managers use microcap stocks. The fixed-income assets commonly used include U.S. bonds, international bonds, high-yield debt, and emerging market debt. Some managers further break U.S. bonds into short, medium, and long-duration buckets. A
Figure 4 Improving Investment Performance via Asset Selection. (The chart plots portfolio return against portfolio risk, contrasting the initial efficient frontier with the frontier obtained using superior assets.)
counterintuitive to many readers, who likely perceive hedge funds as very risky
investments. However, this result is now strongly supported by MV analysis and
occurs primarily because hedge fund managers practice diverse investment strateg
ies that are independent of directional moves in stocks and bonds. These strateg
ies include merger and acquisition arbitrage, yield curve arbitrage, convertible
securities arbitrage, and market-neutral strategies in which long positions in
equities are offset by short positions. In response to recent MV studies, there
has been a strong inflow of capital into hedge funds. Even some normally risk-averse institutions such as pension funds are now making substantial allocations to hedge funds. Swensen (2000) maintains that a significant allocation to hedge funds as well as other illiquid assets is one of the reasons for the substantial outperformance of leading university endowment funds over the last decade.
6.3.
Private Equity and Venture Capital
As in the case of hedge funds, managers are increasingly investing in private equity to improve portfolio efficiency. Although there is much less conclusive research on private equity investing, the recent performance record of such investments is strong.
TABLE 4  Optimum Portfolios with Hedge Funds and Conventional Assets
[For assumed hedge fund returns of 10.5%, 9.5%, 8.5%, 7.5%, and 6.5%, the table reports optimal allocations across U.S. equities, international equities, U.S. and international bonds, hedge funds, and cash at total portfolio risk levels from 12.5% down to 4.5%, with portfolio returns ranging from roughly 7.9% to 12.4%; hedge fund allocations rise as the risk target falls.]
Source: Lamm and Ghaleb-Harter 2000b.
There are varying definitions of private equity, but in general it consists of investments made by partnerships in venture capital (VC), leveraged buyout (LBO), and mezzanine finance funds. The investor usually agrees to commit a minimum of at least $1 million and often significantly more. The managing partner then draws against commitments as investment opportunities arise. In LBO deals, the managing partner aggregates investor funds, borrows additional money, and purchases the stock of publicly traded companies, taking them private. The targeted companies are then restructured: noncore holdings are sold off, similar operating units from other companies are combined, and costs are reduced. Often existing management is replaced with better-qualified and experienced experts. After several years, the new company is presumably operating more profitably and is sold back to the public. Investors receive the returns on the portfolio of deals completed over the partnership's life.

The major issue with private equity is that the lock-up period is often as long as a decade. There is no liquidity if investors want to exit. Also, because the minimum investment is very high, only high-net-worth individuals and institutions can participate. In addition, investments are often concentrated in only a few deals, so returns from different LBO funds can vary immensely. Efforts have been made to neutralize the overconcentration issue, with institutions offering funds of private equity funds operated by different managing partners.

VC funds are virtually identical in structure to LBO funds except that the partnership invests in start-up companies. These partnerships profit when the start-up firm goes public via an initial public offering (IPO). The partnership often sells its shares during the IPO but may hold longer if the company's prospects are especially bright. VC funds received extraordinary attention in 1999 during the internet stock frenzy. During this period, many VC funds realized incredible returns practically overnight as intense public demand developed for dot-com IPOs and prices were bid up speculatively.

Because LBO and VC funds are illiquid, they are not typically included in MV portfolio optimizations. The reason is primarily time consistency of asset characteristics. That is, the risk associated with LBOs and VC funds is multiyear in nature. It cannot be equated directly with the risk of stocks,
allocation.
7.
THE FORECASTING PROBLEM
Beyond asset selection, the key to successful investment performance via MV asset allocation depends largely on the accuracy of return, risk, and correlation forecasts. These forecasts may be subjective or quantitative. A subjective or purely judgmental approach allows one the luxury of considering any number of factors that can influence returns. The disadvantage of pure judgment is that it is sometimes nebulous, not always easily explained, and may be theoretically inconsistent with macroeconomic constraints. Model forecasts are rigorous and explicit; they derive straightforwardly from a particular variable set.

Most quantitative approaches to forecasting are based on time series methods, explored in great depth in the finance literature over the years. Beckers (1996) provides a good review of such methods used to forecast returns, while Alexander (1996) surveys risk and correlation forecasting. In addition, Lummer et al. (1994) describe an extrapolative method as a basis for long-term asset allocation. An alternative to pure time series is causal models. For example, Connor (1996) explored macroeconomic factor models to explain equity returns. Lamm (2000) proposes a modified bivariate Garch model as one way of improving MV forecast accuracy.
762
TECHNOLOGY
In the case of a two-asset stock and bond portfolio, the modified Garch model proposed by Lamm is simply:

$$r_{1t} = \alpha_{11} + \alpha_{12}\,r_{1,t-1} + \textstyle\sum_k \beta_k x_{k,t-1} + \varepsilon_{1t}$$

$$r_{2t} = \alpha_{21} + \alpha_{22}\,r_{2,t-1} + \textstyle\sum_k \gamma_k x_{k,t-1} + \varepsilon_{2t}$$

$$\sigma_{1t}^{2} = \delta_{10} + \delta_{11}\,\varepsilon_{1,t-1}^{2} + \delta_{12}\,\sigma_{1,t-1}^{2}$$

$$\sigma_{2t}^{2} = \delta_{20} + \delta_{21}\,\varepsilon_{2,t-1}^{2} + \delta_{22}\,\sigma_{2,t-1}^{2}$$

$$\sigma_{12,t} = \delta_{30} + \delta_{31}\,\varepsilon_{1,t-1}\,\varepsilon_{2,t-1} + \delta_{32}\,\sigma_{12,t-1}$$

where $r_{it}$ = the returns on equities and bonds, respectively; $\sigma_{it}^{2}$ = the corresponding variances; $\varepsilon_{it}$ = residuals; $\sigma_{12,t}$ = the covariance between stocks and bonds; and $x_k$ = exogenous factors (e.g., inflation, industrial production, corporate earnings, and interest rates) that determine market returns. The other symbols are estimated parameters. The first two equations predict returns, while the second two forecast associated risk. The correlation between stocks and bonds in any period is simply $\sigma_{12,t}/(\sigma_{1t}\sigma_{2t})$, which is derived from the last three equations. This model postulates that when extraneous changes occur that are not reflected in economic variables, the resulting prediction errors push up risk and reduce correlation, exactly the pattern observed
in response to market shocks through time. Lamm reports that augmenting Garch with exogenous variables significantly improves forecast accuracy. These findings have important implications for portfolio management. Critically, augmented Garch provides a more logical and systematic basis for reallocation decisions through the economic cycle (Figure 5) and changing inflation scenarios, which shift efficient frontiers (Figure 6). The augmented Garch process also allows one to distinguish economic influences from purely unexpected shocks, which are often event driven. A pure time series approach provides no such delineation. If one desires to focus only on return forecasting, a useful approach is vector autoregression (VAR). Although largely atheoretic, except regarding system specification, such models have been shown to have superior predictive capability. In particular, VAR forecasts are more accurate the longer the periodicity of the data. For example, monthly VAR models typically explain 5-10% of the variation in returns, while annual VAR models often explain 90% or more. Quarterly forecasting models produce results in between. VAR models thus provide a reasonably good method for annual asset allocation while providing only a slight edge for monthly allocation. VAR models are fairly simple and are specified as:
[Figure 5 Corporate Earnings and Stock Prices Follow Economic Growth Cycles. The chart plots U.S. industrial production (index) against S&P earnings per share by year from 1970 to 2000, marking the rust belt deindustrialization, the S&L crisis, and the Asia recession/Russia default.]
$$y_t = A\,L(y_{t-1}) + e_t \qquad (13)$$

where $y_t$ = the vector of system variables (including returns) that are to be predicted; $L(\cdot)$ = the lag operator; $e_t$ = a vector of errors; and $A$ = the parameter matrix. If some of the system variables are policy variables (such as Federal Reserve interest rate targets), they can be endogenized for making forecasts and evaluating scenarios. The forecasting system then becomes:

$$z_{t+1} = B\,L(z_t) + C\,L(x_t) \qquad (14)$$

where $z$ contains the system variables to be forecast from $y = [z\;x]^T$, $x$ contains the expected policy values, and $B$ and $C$ are the relevant coefficient submatrices contained in $A$.
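As a concrete sketch of a system like equation (13), the following fits a first-order VAR by ordinary least squares and produces a one-step-ahead forecast. The simulated data, the VAR(1) lag order, and all parameter values are illustrative assumptions, not taken from this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: T annual observations of two series
# (e.g., stock and bond returns), stacked as rows of y.
T = 60
y = rng.normal(0.05, 0.10, size=(T, 2))

# Build the lagged regressor matrix with an intercept:
# y_t = c + A y_{t-1} + e_t  (a first-order VAR).
Y = y[1:]                                      # dependent observations
X = np.hstack([np.ones((T - 1, 1)), y[:-1]])   # [1, y_{t-1}]

# Ordinary least squares estimate of the stacked [c | A] coefficients.
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
c, A = coef[0], coef[1:].T                     # intercept and parameter matrix

# One-step-ahead forecast from the last observation.
forecast = c + A @ y[-1]
```

With annual data, a system like this can be refit each year as new observations arrive, which is the setting in which the chapter reports VAR models explaining 90% or more of return variation.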
8. ENGINEERING CLIENT-TAILORED SOLUTIONS: APPLYING PORTFOLIO RESTRICTIONS
The discussion so far has focused on asset allocation as generally applicable to a broad cross-section of investors. In reality, the vast majority of individual investors face special restrictions that limit their flexibility to implement optimal portfolios. For example, many investors hold real estate, concentrated stock holdings, VC or LBO partnerships, restricted stock, or incentive stock options that for various reasons cannot be sold. For these investors, standard MV optimization is still appropriate and they are well advised to target the prescribed optimum portfolio. They should then utilize derivative products such as swaps to achieve a synthetic replication. Synthetic rebalancing cannot always be done, however. This is likely to be the case for illiquid assets and those with legal covenants limiting transfer. For example, an investor may own a partnership or hold a concentrated stock position in a trust whose position cannot be swapped away. In these situations, MV optimization must be amended to include these assets with their weights restricted to the prescribed levels. The returns, risk, and correlation forecasts for the restricted assets must then be incorporated explicitly in the analysis to take account of their interaction with other assets. The resulting constrained optimum portfolios will comprise a second-best efficient frontier but may not be too far off the unconstrained version. An interesting problem arises when investors own contingent assets. For example, incentive stock options have no liquid market value if they are not yet exercisable, but they are nonetheless worth something to the investor. Because banks will not lend against such options and owners cannot realize value until exercise, it can be argued that they should be excluded from asset allocation, at least until the asset can be sold and income received. Proceeds should then be invested consistent with the investor's MV portfolio. An alternative is to probabilistically discount the potential value of the option to the present and include the delta-adjusted stock position in the analysis. To be complete, the analysis must also discount the value of other contingent assets such as income flow after taxes and living expenses.
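The amended optimization just described can be sketched as a standard mean-variance program with the restricted asset's weight pinned at its prescribed level. The expected returns, covariances, target return, and the 30% frozen real estate position below are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs (not from the chapter): expected returns and
# covariance for [stocks, bonds, restricted real estate].
mu = np.array([0.11, 0.06, 0.08])
cov = np.array([[0.0400, 0.0020, 0.0100],
                [0.0020, 0.0064, 0.0015],
                [0.0100, 0.0015, 0.0225]])

target_return = 0.08
restricted_weight = 0.30   # real estate position that cannot be sold

def variance(w):
    # Portfolio variance, the quantity minimized in MV optimization.
    return w @ cov @ w

constraints = [
    {"type": "eq", "fun": lambda w: w.sum() - 1.0},           # fully invested
    {"type": "eq", "fun": lambda w: w @ mu - target_return},  # hit target return
    {"type": "eq", "fun": lambda w: w[2] - restricted_weight} # frozen holding
]
res = minimize(variance, x0=np.array([0.4, 0.3, 0.3]),
               bounds=[(0.0, 1.0)] * 3, constraints=constraints,
               method="SLSQP")
w_opt = res.x
```

Sweeping `target_return` over a grid and re-solving traces out the second-best efficient frontier mentioned above.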
9.
TAXATION
Taxes are largely immaterial for pension funds, foundations, endowments, and off
shore investors because of their exempt status. Mean-variance asset allocation c
an be applied directly for these investors as already described. However, for do
mestic investors, taxes are a critical issue that must be considered in modeling
investment choices.
9.1.
Tax-Efficient Optimization
Interest, dividends, and short-term capital gains are taxed at ordinary income rates in the United States. The combined federal, state, and local marginal tax rate on such income approaches 50% in some jurisdictions. In contrast, long-term capital gains are taxed at a maximum of 20%, which can be postponed until realization upon sale. For this reason, equities are tax advantaged compared with bonds and investments that deliver ordinary income. In an unadulterated world, MV analysis would simply be applied to after-tax return, risk, and correlations to produce tax-efficient portfolios. Regrettably, tax law is complex and there are a variety of alternative tax structures for holding financial assets. In addition, there are tax-free versions of some instruments, such as municipal bonds. Further complicating the analysis is the fact that one must know the split between ordinary income and long-term gains to forecast MV model inputs. This muddles what would otherwise be a fairly straightforward portfolio-allocation problem and requires that tax structure subtleties be built into the MV framework to perform tax-efficient portfolio optimization.
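As a minimal sketch of how the ordinary-income/capital-gains split feeds the MV inputs, the following converts pre-tax return forecasts into after-tax returns. The tax rates and income splits are illustrative assumptions.

```python
# Sketch: converting a pre-tax return forecast into an after-tax input
# for MV optimization, given an assumed split between ordinary income
# and long-term capital gains. Rates and splits are illustrative only.
ORDINARY_RATE = 0.46   # combined federal/state/local marginal rate
LTCG_RATE = 0.20       # long-term capital gains rate

def after_tax_return(pre_tax, income_share):
    """income_share: fraction of the return taxed as ordinary income;
    the remainder is taxed at the long-term capital gains rate."""
    income = pre_tax * income_share
    gains = pre_tax * (1.0 - income_share)
    return income * (1.0 - ORDINARY_RATE) + gains * (1.0 - LTCG_RATE)

# A bond paying 6.5% entirely as coupon income vs. an equity returning
# 11.5%, mostly as long-term appreciation.
bond_at = after_tax_return(0.065, income_share=1.0)    # about 3.5% after tax
stock_at = after_tax_return(0.115, income_share=0.15)  # about 8.8% after tax
```

The same transformation would be applied to each asset class (with its own income split) before the MV optimizer is run.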
9.2.
Alternative Tax Structures
As one might imagine, modeling the features of the tax system is not a trivial exercise. Each structure available to investors possesses its own unique advantages and disadvantages. For example, one can obtain equity exposure in a number of ways. Stocks can be purchased outright and held. Such a "buy-and-hold" strategy has the advantage that gains may be realized at the investor's discretion and the lower capital gains rate applied. Alternatively, investors may obtain equity exposure by purchasing mutual funds. Mutual fund managers tend to trade their portfolios aggressively, realizing substantially more short-term and long-term gains than may be tax efficient. These gains are distributed annually, forcing investors to pay taxes at higher rates and sacrificing some of the benefits of tax deferral. If portfolio trading produces higher returns than a simple buy-and-hold strategy, such turnover may be justified. But this is not necessarily the case. A third option for equity exposure is to purchase stocks via retirement fund structures using pretax contributions. Tax vehicles such as 401(k), IRA, Keogh, and deferred compensation programs fall in this category. The major benefit of this approach is that it allows a larger initial investment to compound. However, upon withdrawal, returns are taxed at ordinary income tax rates. A fourth alternative for achieving equity exposure is through annuity contracts offered by insurance companies. These are often referred to as "wrappers." This structure allows investors to purchase equity mutual funds with after-tax dollars and pay no taxes until distribution. Thus, gains are sheltered. Even so, all income above basis is taxed at ordinary rates upon distribution. Investors receive deferral benefits but lose the advantage of paying lower rates on returns arising from long-term capital gains. Although I have described four alternative tax vehicles for equity exposure, the same general conclusions apply for other assets. The one difference is that the ordinary vs. capital gains differential must explicitly be considered for each type of asset. Table 5 summarizes the tax treatment of investment income through currently available tax structures.
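The trade-offs among these structures can be sketched by comparing terminal after-tax wealth from $1 of equity exposure under stylized versions of three of them. The return, rates, and horizon are illustrative assumptions, and the fund is simplified to realize all gains annually at the capital gains rate.

```python
# Sketch of the deferral effect behind the structures above: terminal
# after-tax wealth from $1 of equity exposure under (a) buy-and-hold
# with gains taxed once at sale, (b) a fund realizing and distributing
# gains annually, and (c) a wrapper compounding tax free but taxing all
# appreciation as ordinary income at distribution. Values are
# illustrative assumptions, not the chapter's parameters.
PRE_TAX = 0.115
LTCG, ORDINARY, YEARS = 0.20, 0.46, 20

# (a) Buy and hold: compound pre-tax, pay capital gains tax at the end
# (the tax applies only to the appreciation above the $1 basis).
buy_hold = (1 + PRE_TAX) ** YEARS * (1 - LTCG) + LTCG

# (b) Fund: gains taxed each year, so compounding occurs after tax.
fund = (1 + PRE_TAX * (1 - LTCG)) ** YEARS

# (c) Wrapper: tax-free compounding, ordinary rates at distribution.
wrap = (1 + PRE_TAX) ** YEARS * (1 - ORDINARY) + ORDINARY
```

Under these stylized assumptions, buy-and-hold dominates the annually taxed fund at every horizon, reflecting the benefit of deferral discussed in the text.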
9.3.
Results of Tax-Efficient MV Optimizations
Another perplexing feature of the tax system is that after-tax returns are affected by the length of the holding period. For example, it is well known that the advantage of owning equities directly increases the longer the holding period. This is because deferring capital gains produces benefits via tax-free compounding (Figure 7). Similarly, assets in insurance wrappers must be held over very long periods before the positive effects of tax-free compounding offset the potential negatives of converting long-term gains to ordinary tax rates.
[Figure 7 plots annualized after-tax return against the holding period.]
TABLE 6 After-Tax Returns

                        Pre-Tax   5-Year Horizon           10-Year Horizon          20-Year Horizon
Asset                   Return    Buy/Hold  Fund    Wrap   Buy/Hold  Fund    Wrap   Buy/Hold  Fund    Wrap
U.S. stocks             11.5%     8.6%      7.8%    5.9%   9.0%      7.8%    6.6%   9.6%      7.8%    7.6%
International stocks    13.1%     9.8%      8.8%    6.9%   10.3%     8.8%    7.8%   11.0%     8.8%    9.1%
U.S. bonds              6.5%      3.5%      3.5%    2.7%   3.5%      3.5%    3.0%   3.5%      3.5%    3.4%
International bonds     7.4%      4.0%      4.0%    3.3%   4.0%      4.0%    3.6%   4.0%      4.0%    4.1%
Muni bonds              5.0%      5.0%      5.0%    5.0%   5.0%      5.0%    5.0%   5.0%      5.0%    5.0%
REITs                   8.25%     4.8%      4.8%    4.1%   4.8%      4.8%    4.5%   4.8%      4.8%    5.3%
Hedge funds             10.5%     6.2%      6.2%    5.2%   6.2%      6.2%    5.8%   6.2%      6.2%    6.8%
Cash                    5.5%      3.0%      3.0%    2.1%   3.0%      3.0%    2.3%   3.0%      3.0%    2.6%
TE cash                 3.2%      3.2%      3.2%    3.2%   3.2%      3.2%    3.2%   3.2%      3.2%    3.2%

Source: Lamm and Ghaleb-Harter 2000c. Ordinary tax rates are assumed to be 46% (state and federal). Incremental costs of 1% annually are deducted for the wrap structure.
These generalizations presume investors are in the top marginal tax bracket (federal, state, and local) both at the beginning of the horizon and at the end. The extent to which investors move into lower brackets via diminished post-retirement income or by migrating to lower tax jurisdictions makes the benefits of deferral even greater.
10.
TIME HORIZON
Beyond the taxation issue, a key component of MV problem specification is properly defining the horizon over which the optimization is applied. In particular, risk and correlation can differ significantly, depending on the time frame considered. For example, equities are much less risky over five-year horizons than over one year (Figure 8). This has led some analysts to argue for larger equity allocations for long-term investors because stock risk declines relative to bonds over lengthy horizons.
[Figure 8 Equity Risk Is Lower over Long Time Horizons. The chart shows expected equity return with +2 and -2 standard deviation bands over horizons of 1 to 10 years.]
[Figure 9 Long-Run vs. Short-Run Time Horizons. (Horizontal axis: portfolio risk, 1% to 4%.)]
is, a $10 million portfolio might be found to have a value at risk on a specific day of $1.1 million with 95% confidence. This means the value of the portfolio would be expected to fall more than this only 5% of the time. Changing the composition of the portfolio allows a value-at-risk efficient frontier to be traced out. Duarte also proposes a generalized approach to asset allocation that includes mean semi-variance, mean absolute deviation, MV, and value-at-risk as special cases. While cleverly broad, Duarte's method relies on simulation techniques and is computationally burdensome. In addition, because simulation approaches do not have explicit analytical solutions, the technique loses some of the precision of MV analysis. For example, one can examine solution sensitivities from the second-order conditions of MV problems, but this is not so easy with simulation. It remains to be seen whether simulation approaches receive widespread acceptance for solving portfolio problems. Nonetheless, simulation techniques offer tremendous potential for future applications.
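A value-at-risk figure like the one described above can be sketched by Monte Carlo simulation. The portfolio weights, daily return parameters, and simulation size below are illustrative assumptions, not the chapter's.

```python
import numpy as np

rng = np.random.default_rng(42)

# Sketch: one-day value at risk of a $10 million two-asset portfolio
# by simulation. All return parameters are illustrative assumptions.
portfolio_value = 10_000_000
weights = np.array([0.6, 0.4])
mu = np.array([0.0004, 0.0002])          # daily mean returns
cov = np.array([[1.5e-4, 2.0e-5],
                [2.0e-5, 4.0e-5]])       # daily return covariance

# Simulate daily portfolio returns and read off the 5th percentile:
# with 95% confidence, losses exceed this level only 5% of the time.
sims = rng.multivariate_normal(mu, cov, size=100_000) @ weights
var_95 = -np.percentile(sims, 5) * portfolio_value
```

Repeating this for different weight vectors traces out the value-at-risk efficient frontier mentioned in the text.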
$$h_j = a_j + \sum_{k=1}^{K} \beta_{jk}\,r_k, \qquad j = 1, \ldots, J;\; k = 1, \ldots, K \qquad (15)$$

where $h_j$ = the return for the jth hedge fund in a portfolio of J funds; $a_j$ = the skill performance of the fund; $\beta_{jk}$ = the exposure of the fund to the kth traditional asset class; and $r_k$ = the traditional asset return. Time subscripts, which apply to $h_j$ and $r_k$, are dropped. Equation (15) can be written more simply as:

$$h = a + Br \qquad (16)$$

where $h = [h_1\; h_2\; \ldots\; h_J]^T$, $a = [a_1\; a_2\; \ldots\; a_J]^T$, $r = [r_1\; r_2\; \ldots\; r_K]^T$, and the asset exposure matrix is:

$$B = \begin{bmatrix} \beta_{11} & \beta_{12} & \cdots & \beta_{1K} \\ \beta_{21} & \beta_{22} & \cdots & \beta_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ \beta_{J1} & \beta_{J2} & \cdots & \beta_{JK} \end{bmatrix} \qquad (17)$$

The return for the hedge fund portfolio is $r_h = w^T h = w^T(a + Br)$, where $w = [w_1\; w_2\; \ldots\; w_J]^T$ are the portfolio weights assigned to each hedge fund. The variance of the hedge fund portfolio is:

$$\sigma_h^2 = w^T (B\,\Omega\,B^T + D)\,w \qquad (18)$$

where $\Omega$ is the covariance matrix of traditional asset returns and $D$ is the diagonal matrix of fund-specific residual variances.
This is the same as the standard MV model except that the portfolio return takes a more complex form (21) and an additional constraint (23) is added that forces net exposure to traditional assets to zero. Also, constraint (26) forces sufficient diversification in the number of hedge funds. Obtaining a feasible solution for this problem requires a diverse range of beta exposures across the set of hedge funds considered for inclusion. For example, if all equity betas for admissible hedge funds are greater than zero, a feasible solution is impossible to attain. Fortunately, this problem is alleviated by the availability of hundreds of hedge funds in which to invest. A significant amount of additional information is required to solve this problem compared to that needed for most MV optimizations. Not only are individual hedge fund returns, risk, and correlations necessary, but returns must also be decomposed into skill and style components. This requires a series of regressions that link each hedge fund's return to those of traditional assets.
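Each of those regressions can be sketched as an ordinary least squares fit of equation (15) for one fund, recovering the skill term and the style exposures. The simulated fund history and parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: estimating one hedge fund's skill (a) and traditional-asset
# exposures (betas) by regressing its returns on asset-class returns,
# as in equation (15). All data here are simulated for illustration.
T, K = 120, 3                        # months of history, asset classes
r = rng.normal(0.008, 0.04, (T, K))  # traditional asset-class returns
true_beta = np.array([0.4, -0.2, 0.1])
h = 0.005 + r @ true_beta + rng.normal(0, 0.01, T)  # fund returns

X = np.hstack([np.ones((T, 1)), r])  # intercept column + exposures
coef, *_ = np.linalg.lstsq(X, h, rcond=None)
a_hat, beta_hat = coef[0], coef[1:]  # estimated skill and betas
```

With 200 candidate funds, this fit would simply be repeated fund by fund to build the a vector and B matrix of equations (16) and (17).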
If 200 hedge funds are candidates, then the same number of regressions is necessary to estimate the a and β parameters. Table 7 presents optimum hedge fund portfolios that maximize returns for given risk levels while constraining beta exposure to traditional assets to zero. The optimization results are shown for (1) no constraints; (2) restricting net portfolio beta exposure to traditional assets to zero; (3) restricting the weight on any hedge fund to a maximum of 10% to assure adequate diversification; and (4) imposing the maximum weight constraint and the requirement that net beta exposure equals zero. Alternatives are derived for hedge fund portfolios with 3% and 4% risk. Adding more constraints shifts the hedge fund efficient frontier backward as one progressively tightens restrictions. For example, with total portfolio risk set at 3%, hedge fund portfolio returns fall from 27.4% to 24.4% when zero beta exposure to traditional assets is required. Returns decline to 23.0% when hedge fund weights are restricted to no more than 10%. And returns decrease to 20.4% when net beta exposure is forced to zero and hedge fund weights can be no more than 10%.
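The zero-beta, capped-weight problem can be sketched as a linear program when the objective is simply to maximize skill return. The alphas and betas below are illustrative assumptions, not estimates from the chapter; note the mixed-sign betas, without which the zero-beta constraint would be infeasible, as discussed above.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch: choose hedge fund weights w to maximize skill return a.w
# subject to full investment, zero net beta to traditional assets, and
# a 10% cap per fund. Alphas and betas are illustrative assumptions.
J = 20
alpha = np.linspace(0.08, 0.25, J)   # skill returns across funds
beta = np.zeros((J, 2))              # exposures to 2 asset classes
beta[:10] = [0.5, 0.3]               # long-biased funds
beta[10:] = [-0.5, -0.3]             # short-biased funds

res = linprog(
    c=-alpha,                                 # linprog minimizes, so negate
    A_eq=np.vstack([np.ones(J), beta.T]),     # sum(w) = 1 and B^T w = 0
    b_eq=np.array([1.0, 0.0, 0.0]),
    bounds=[(0.0, 0.10)] * J,                 # 10% diversification cap
)
w = res.x
```

Tightening the cap or adding a risk constraint (which makes the problem quadratic) shifts the attainable return down, mirroring the pattern reported for Table 7.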
TABLE 7 Optimum Hedge Fund Portfolios under Alternative Constraints

Optimum portfolios are reported for six constraint sets (no constraints; zero net beta; 10% maximum fund weight; and both constraints together, at 3% and 4% portfolio risk), with fund-by-fund weights summing to 100%. Across the six reported cases, the optima hold 15 to 24 funds, with portfolio returns between 20.4% and 27.4% as the restrictions are progressively tightened.

Source: Lamm and Ghaleb-Harter 2000b. L/S = long/short; CA = convertible arb; BH = bond hedge; RO = rotational; AG = aggressive; DA = deal arb; DS = distressed; ME = multievent; DM = domestic; GI = global/international; Sy = systematic; SS = short sellers.

13.
CONCLUSION
This chapter has presented a brief introduction to asset management, focusing on primary applications. The basic analytical tool for portfolio analysis has been and remains MV analysis and variants of the technique. Mean-variance analysis is intellectually deep, has an intuitive theoretical foundation, and is mathematically efficient. Virtually all asset-management problems are solved using the approach or modified versions of it. The MV methodology is not without shortcomings. It must be used cautiously, with great care paid to problem formulation. The optimum portfolios it produces are only as good as the forecasts used in their derivation and the asset universe from which the solutions are derived. Mean-variance analysis without a concerted effort directed to forecasting and asset selection is not likely to add significant value to the quality of portfolio decisions and may even produce worse results than would otherwise be the case. While MV analysis still represents the current paradigm, other approaches to portfolio optimization exist and may eventually displace it. Value-at-risk simulation methodologies may ultimately prove more than tangential. Even so, for many practitioners there is still a long way to go before forecasting techniques, asset identification, and time horizon considerations are satisfactorily integrated with core MV applications. Greater rewards are likely to come from efforts in this direction rather than from devising new methodologies that will also require accurate forecasting and asset selection in order to be successful.
REFERENCES
Agarwal, V., and Naik, N. (2000), "On Taking the Alternative Route: The Risks, Rewards, and Performance Persistence of Hedge Funds," Journal of Alternative Investments, Vol. 3, No. 2, pp. 6-23.
Alexander, C. (1996), "Volatility and Correlation Forecasting," in The Handbook of Risk Management and Analysis, C. Alexander, Ed., John Wiley & Sons, New York, pp. 233-260.
Beckers, S. (1996), "A Survey of Risk Measurement Theory and Practice," in The Handbook of Risk Management and Analysis, C. Alexander, Ed., John Wiley & Sons, New York, pp. 171-192.
Brinson, G. P., Hood, L. R., and Beebower, G. L. (1986), "Determinants of Portfolio Performance," Financial Analysts Journal, Vol. 42, No. 4, pp. 39-44.
Brinson, G. P., Singer, B. D., and Beebower, G. L. (1991), "Determinants of Portfolio Performance II: An Update," Financial Analysts Journal, Vol. 47, No. 3, pp. 40-48.
Chopra, V., and Ziemba, W. (1993), "The Effect of Errors in Means, Variances, and Covariances on Portfolio Choices," Journal of Portfolio Management, Vol. 20, No. 3, pp. 51-58.
Connor, G. (1996), "The Three Types of Factor Models: A Comparison of Their Explanatory Power," Financial Analysts Journal, Vol. 52, No. 3, pp. 42-46.
any as 200,000 boxes a day. One to seven days later, Bob is another one of over 13 million customers satisfied with Amazon. The moral of the story: Bob really didn't care if the companies cared or what they had to go through to get the book to him. HE JUST WANTED THE BOOK.
2.
INTRODUCTION TO RETAIL LOGISTICS
If your business has all the time in the world, no sales or profit pressures, and a strong competitive advantage in the marketplace, then you do not have to be concerned about the efficiency and effectiveness of your supply chain management. This is not the world of retailing, where margins are thin and competition is plentiful (and aggressive).
2.1.
The Retail Supply Chain
By and large, consumers don't know (or really care) how the product made it to the retail shelf (or to their door). They do care that retailers have the right products, in the right sizes, in the right colors, at the right prices, where and when they want them. The business challenge for retailers is to manage the thousands of paths that exist between raw material and product on the shelf, from the small one-store operation to the 2,600-store Wal-Mart chain.
The retailer's role is to manage these complex relationships and make it possible for the consumer to choose easily among the many thousands of vendors and manufacturers creating products that consumers want. By reducing the complexity of finding and acquiring the right merchandise, retailers deliver something of value to the consumer. The goal of the retail supply chain should be to minimize expense and time between the manufacturer and the consumer so that consumers find what they want at a competitive price. Customer relationships are developed only by designing product paths that deliver what the customer wants, when they want it, at the price they want. This is what shareholders pay executives to do. Executives who do not marshal the supply chain well will have a business that underperforms (see Figure 1).
2.2.
Strategic Advantages through Retail Supply Chain Management
Today an executive whose goal is not lowering costs is not shepherding the resources of the company well. Procurement, distribution, and storage account for about 55% of all costs. By contrast, labor
1. Production cannot meet initial projected demand, resulting in real shortages. Retailers are frustrated because they cannot get the merchandise they want. Consumers are dissatisfied because they cannot find what they want.
2. Retailers overorder in an attempt to meet demand and stock their shelves. Retailers are angry at the manufacturer and lose confidence.
3. As supply catches up with demand, orders are cancelled or returned. Retailers lose money and lose customers.
4. Financial and production planning are not aligned with real demand; therefore production continues. Overproduction means manufacturers lose money.
5. As demand declines, all parties attempt to drain inventory to prevent writedowns.
6. Supply and demand match closely. Everyone maximizes profit.

Figure 1 Supply-Demand Misalignment. (From J. Gattorna, Ed., Strategic Supply Chain Alignment: Best Practice in Supply Chain Management. Reprinted by permission of the publisher, Gower Publishing, UK.)
Figure 2 Supply Management Benefits. (Adapted from The Executive's Guide to Supply Chain Management Strategies, Copyright 1998 David Riggs and Sharon Robbins. Used with permission of the publisher, AMACOM, a division of American Management Association International, New York, NY. All rights reserved. http://www.amacombooks.com)
supply chain partnerships, efficient consumer response, cross-docking), and enterprise resource planning.
2.4.
A Short History of Supply Chain Management
In the beginning, it was sufficient for the owner/proprietor simply to order the products he or she thought the customer would want, and these would be delivered to the store. As business got more complex (and larger), the need arose for a division of labor in which a separate expert department/function was born for the major areas of the business. One part of the organization and set of people was empowered to order, track, plan, negotiate, and develop contacts. The greater the size of the retailer, the greater the number of buying functions: accessories, socks, underwear, coats, sportswear, petites, men's wear, furnishings. Each department worthy of its name had an expert buyer and assistant buyer. Each buyer reported to a divisional merchandise manager (DMM). Each DMM reported to a general merchandise manager (GMM), and the GMM reported to a senior vice president. Small retailers have direct store deliveries from the manufacturer or middleman. Larger retailers centralize storing, transporting, and distributing from regional distribution facilities (distribution). Technology development over the past 15 years has focused on more efficient and faster ways to get "stuff" into the regional distribution facilities and out to the stores or end user. Success was measured in the number of days or hours it took merchandise to hit the distribution center from the manufacturer or supplier and go out the distribution door. The end result of these developments is the situation today (Fernie 1990; Riggs and Robbins 1998):

- Many small transactions and high transaction costs
- Multiple suppliers delivering similar goods with the same specification but little price competition
- Very low levels of leveraging purchasing or achieving economies of scale
- Too many experts and executives focusing time and effort on very narrow areas that frequently have nothing to do with the core competency of the business, essentially wasting time that could otherwise be devoted to improving the business's competitive advantage
- Little time for strategy because the experts are involved in daily transactions
2.5.
The Current State of Supply Chain Management
We have seen what was. Now here is what some retailers have achieved. Expected demand for a product is forecasted (forecasts can be updated as changes in demand indicate a change is needed). Stock levels are constantly monitored. The frequency of production and shipment is based on the difference between demand and on-hand goods. When stock falls below some number, the product is ordered by electronic data interchange (EDI). There is immediate electronic acknowledgment with a promised delivery date and mode of shipping. The mode is automatically selected by comparing the current level of inventory and demand. When the product is shipped, it is bar coded and a packing slip is sent by EDI to the company. When the products arrive, bar codes are matched to the order. Any discrepancies are handled electronically. The product is cross-shipped, inventoried, picked, and entered into the purchaser's inventory electronically and is immediately available. Store shipment goes out according to store inventory levels and cost considerations. There are a number of state-of-the-art processes and technologies at work here. Forecasting and monitoring are used to minimize inventory levels while maintaining very high levels of in-stock positions for the consumer. Money is saved by reducing the inventory at each distribution center. Inventory can be assessed at any of the distribution centers. Almost all handling and clerical labor is eliminated and greater accuracy is achieved. Lower handling costs per unit translate into savings and efficiency. Finally, if you don't have excessive inventory in the stores, it may be possible to raise the percentage of goods sold prior to the need to pay for them. This reduction of cash-to-cash cycle time is a major benefit of supply chain management.
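The replenishment logic described above can be sketched as a simple reorder-point rule: monitor stock, and when the on-hand level falls below a point derived from forecast demand over the replenishment lead time, place an order sized to restore target coverage. The function name and all numbers below are illustrative assumptions.

```python
# Sketch of reorder-point replenishment: order when on-hand stock
# drops below forecast demand over the lead time plus safety stock.
# All parameter names and values are illustrative assumptions.
def reorder_decision(on_hand, daily_forecast, lead_time_days,
                     safety_stock, target_days):
    """Return the order quantity (0 if no order is needed)."""
    reorder_point = daily_forecast * lead_time_days + safety_stock
    if on_hand >= reorder_point:
        return 0
    # Order up to the target coverage level.
    return daily_forecast * target_days + safety_stock - on_hand

# 120 units on hand, forecast of 40/day, 3-day lead time:
# reorder point = 40*3 + 30 = 150 > 120, so an order is placed.
qty = reorder_decision(on_hand=120, daily_forecast=40,
                       lead_time_days=3, safety_stock=30,
                       target_days=7)
```

In the EDI setting described above, a rule like this would fire the electronic order automatically, with the shipping mode then chosen from the inventory and demand picture.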
3.
RETAIL SUPPLY CHAIN COMPONENTS
Retail supply chains are different from other industry models. Many of the components of the supply chain are the same: product sourcing, inbound transportation, processing, location and storage of inventory, outbound transportation, company operations, and information. However, retailers are at the end of the chain, just before the products touch the consumer. As a result, the retailer is at the end of the cumulative efficiencies and deficiencies of all the chain partners. It may be that retail supply chains are just a bit more complex. Imagine the thousands of vendors, each with their own ideas and operations, all moving to meet a thousand different retailers' sets of unique requirements, and multiply this by the 90,000 different stock keeping units (SKUs) in the typical large discount store.
product. There is a need to make certain that products coming in match what was ordered. Discrepancies with merchandise not caught here will cause problems as the merchandise flows through the retail organization. The second significant function of the warehouse system is put-away and replenishment. Inventory that will not be cross-docked has to be stored and available for retrieval. Merchandise that has been stored will need to be found and shipped out to stores that need it. In modern warehouses, this function is highly computerized, with automated picking tools and belts. In less sophisticated facilities, a lot is still done slowly and by hand. Finally, the warehouse system must pack the items and make certain accurate paperwork and order consolidation occur. Given the complexity of the warehouse, it is easy to see how systems and technologies can be of considerable benefit. An effective warehouse-management system saves labor costs, improves inventory management by reducing inaccuracies, speeds delivery of merchandise to store or consumer, and enhances cross-docking management.
3.5.
Distribution
Distribution is the set of activities involved in storing and transporting goods and services. The goal has been to achieve the fastest throughput of merchandise at a distribution center with minimal cost. These efforts are measured by internal standards with very little involvement by manufacturers/vendors and consumers. The service standards are typically set by pursuing efficiencies, not necessarily by making sure that consumers have what they want, when and how they want it.
3.6.
Outbound Transportation
Moving the correct quantity and type of merchandise to stores or direct to the consumer can create great economies, flexibility, and control. Inefficient transportation can be disastrous. Systems that do
not get the correct merchandise to the proper stores during a selling season or when an advertising campaign is scheduled (e.g., winter goods arriving three weeks after the consumer starts shopping for winter clothing) face very poor sales. The effect is more than simply the store not selling a particular piece of merchandise. Each time the consumer does not find what he or she wants, the customer's chance of shopping at a competitor increases. The diversity of merchandise and the number of stores create several important challenges for retailers. How do you transport merchandise to reach all the stores efficiently at the lowest cost and time? How do you fill the trucks to maximize the use of their space? Differences in demand and geographic concerns make empty trucks very expensive. Every day, merchandise moves into the city from a variety of vendors to a variety of stores. Yet most stores have not purchased full truckloads. Third-party distributors that combine loads from different vendors to different merchants allow economies of scale to develop, but nothing like the advantage of a large store that receives and buys full truckloads many times a week. Delivery expense is a significant cost for retailers. It cuts into profit and makes prices for the smaller retailer less competitive. Getting the goods from point A to point B is neither simple nor pedestrian. Tomorrow's business leaders will see this as part of the seamless experience leading to short lead times and reliable service at the lowest price. It has recently been estimated that the cost of transporting goods is 50% of total supply chain costs. Significant improvements in the movement of goods have been advanced by improvements in software that aid in planning transportation, vehicle routing and scheduling, delivery tracking and execution, and managing the enterprise. Cross-docking of merchandise occurs when product delivered to the distribution center is loaded directly into the trucks heading for the stores. For cross-docking to work well, suppliers and retailers must have fully integrated information systems. For example, Federal Express manages the complex task of getting all the component parts to Dell at the same time so that a particular Dell computer order can be manufactured without Dell having to hold inventories on hand.
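The load-consolidation problem described above, combining shipments from many vendors into as few full trucks as possible, can be approximated with a simple first-fit-decreasing heuristic. The following is only an illustrative sketch; the shipment weights and truck capacity are invented, and real consolidation systems also handle routing, compatibility, and timing constraints:

```python
# First-fit-decreasing heuristic for consolidating vendor shipments
# into trucks of a fixed capacity (illustrative numbers, not from the text).

def consolidate(shipments, capacity):
    """Pack shipment weights into trucks; returns a list of loads per truck."""
    trucks = []  # each entry is a list of shipment weights in one truck
    for w in sorted(shipments, reverse=True):  # place largest shipments first
        for load in trucks:
            if sum(load) + w <= capacity:  # fits in an existing truck
                load.append(w)
                break
        else:
            trucks.append([w])  # no room anywhere: open a new truck
    return trucks

if __name__ == "__main__":
    weights = [9, 8, 2, 2, 5, 4]  # tons, from several vendors
    loads = consolidate(weights, capacity=10)
    print(len(loads), [sum(l) for l in loads])  # → 4 [9, 10, 9, 2]
```

Sorting the shipments in decreasing order first is what distinguishes first-fit-decreasing from plain first-fit; it generally yields fewer trucks because large shipments are placed while many trucks still have room.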
3.7.
Outsourcing
Many retailers make use of third-party logistics in the form of warehouse management, shipment consolidation, information systems, and fleet management. Outsourcing allows retail management to focus on the core competencies of purchasing, merchandising, and selling.
3.8.
Storage
Consumers buy merchandise that is available on the shelf, not merchandise in some backroom storage area. Storage costs money (which must be added to the price that consumers pay). Not every retailer is Wal-Mart, which claims to be able to restock products in any of its 2600 stores within 24 hours. As a result, Wal-Mart stores can have minimal storage areas and maximal selling areas with shelves that always have the merchandise. Just-in-time, quick response, and vendor-managed inventory hold the promise that less merchandise will have to be stored at the local level because systems will dictate that manufacturers or central storage facilities will fill shelves with product as stock starts to decline. Most retailers must still carry inventory so that they may achieve a responsible level of service for the consumer. As a result, compared to Wal-Mart, they have fewer products to sell, more out-of-stock positions, greater consumer frustration, and a greater cost structure because they have more nonproductive space used for storing inventory. The Internet is increasing the pressure on retailers to have better in-stock positions. The only real sustainable competitive advantage that retailers have right now over the Internet is their ability to get product into consumers' hands immediately. If the consumer cannot find what he or she wants in the stores, and it takes just as long for the store to order the merchandise and get it to the customer, the customer might just take advantage of the Internet. Because information and pricing are more transparent on the Internet, if stores do not have the product, the competitive advantage goes to the Internet. This assumes that the Internet has the same product available at a lower price. The consumer can use a shopping agent like My Simon (www.mysimon.com) to compare all the prices for a particular product on the Internet and choose the lowest price. As this chapter was being written (December 1999), Toys R Us reported very significant problems in getting Internet orders delivered to consumers. Not only will this impact their bottom line in cancelled orders (all customers whose promised delivery could not be met were offered cancellation), but all affected customers received a certificate worth $100 toward their next Toys R Us purchase. This will cost Toys R Us millions of dollars that would have gone directly to their bottom line. More importantly, we cannot estimate the number of customers who will never go to a Toys R Us website again. Internet retailers understand this, and independent or third-party businesses are offering almost immediate delivery in major metropolitan areas. If this is successful, Internet retailers will be removing one of the only remaining barriers and will be competing head-on with store retailers. This suggests
vicing and repairing products), and customer contact management (problem resolut
ion, contact management, training). For example, a large Japanese consumer elect
ronics company believed its products to be of superior quality. As a result, the
y saw no need to inventory parts in the United States. When product repairs were
needed, they had to ship the required parts to the United States. Not only was
this expensive, but consumer satisfaction was low with the repair process and th
e electronics manufacturer. The low satisfaction had an impact on repurchase of
the product in that product category and other product categories carrying that
brand name.
4. STRATEGIC OBJECTIVES FOR RETAIL SUPPLY CHAIN MANAGEMENT
4.1. Improved Forecasting Ability
Collaborative forecasting and replenishment is the term used to represent the joint partnership between retailers and vendors/manufacturers to exchange information and synchronize demand and supply. With improved forecasting, the amount of material and inventory that vendors, manufacturers, and retailers must have on hand is reduced. Most forecasting, if it is done at all (much more likely in larger retailers), is done separately. It is only within the past five years that retailers have discovered the value of, or implemented, any joint forecasting partnerships (e.g., Procter & Gamble and Wal-Mart; P&G maintains an office in Bentonville, Arkansas, where Wal-Mart's world headquarters is located, to serve this important partnership).
Supply chain management advances rest firmly on the flow of information. Overwhelming paper flow, separate computer systems and databases, and nonnetworked computer systems are unacceptable in the supply chain era. Linking computers on a single platform of information, with fast access to all information needed to make decisions, fosters the enterprise-wide solutions that make up supply chain management. The success of the Procter & Gamble/Wal-Mart relationship has been looked to as the model for what will be happening in retailing over the next 10 years. It has also served as the model that P&G will be using with its business partners to grow and prosper. Since supply chain management procedures were implemented, P&G's market share went from 24.5% in 1992 to 28% in 1997, while net margin increased from 6.4% to 9.5% (Drayer 1999). Typically it is the retailer that develops a forecast and hands it to the vendor/manufacturer. Trusting, mutually beneficial relationships are not the operational reality between retailers and vendors/manufacturers (see Figure 3). Even greater efficiency will be achieved when multiple retailers in a geographic region combine their information seamlessly with multiple vendors/manufacturers to allow the most efficient delivery and manufacturing of product. Why should Procter & Gamble plan and ship with Wal-Mart, then do it again with Target, with K-Mart, and with the hundreds of smaller retailers in the area, when if they did it together the costs would be lower for all?
4.2.
Faster and More Accurate Replenishment
Direct product replenishment promises to increase the efficiency and effectiveness of retail operations. According to a Coopers & Lybrand study (1996), direct replenishment improves retail operations by allowing:

- Integration of suppliers with the mission and function of the store
- More reliable operations
- Production synchronized with need
- Cross-docking and its associated savings and efficiencies
- Continuous replenishment and fewer out-of-stock positions
- Automated store ordering, with more attention and money available for more important areas of the business

Retailers would need fewer people to be experts in a product area. Fewer mistakes would be made. Markdowns (the need to sell inventory that did not or would not sell) would be minimized, making margin enhancement possible. Goods would reach the consumer faster. Less inventory storage would be needed. There would be lower distribution expenses, since much of the inventory would be stored at the manufacturer until needed. Since inventory is a retailer's number one asset, making that asset more productive feeds the bottom line directly. Continuous and time-minimal replenishment schemes are easier with basic everyday items that move through the store quickly. Grocery stores have widely implemented electronic data system techniques to maximize the efficient replenishment of fast-moving merchandise.
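The core calculation that a continuous-replenishment system automates for such fast-moving items is the reorder point. A minimal sketch using the standard textbook formula (demand over lead time plus safety stock); the demand figures are invented, and the formula is a generic one, not one the chapter prescribes:

```python
# Reorder-point calculation of the kind an automated store-ordering system
# runs per item. Standard formula: ROP = d * L + z * sigma * sqrt(L).
import math

def reorder_point(daily_demand, lead_time_days, demand_sd, z=1.65):
    """z = 1.65 targets roughly a 95% in-stock probability during lead time."""
    cycle_stock = daily_demand * lead_time_days          # expected usage
    safety_stock = z * demand_sd * math.sqrt(lead_time_days)  # demand buffer
    return cycle_stock + safety_stock

if __name__ == "__main__":
    # Reorder when shelf plus backroom stock falls to this level:
    rop = reorder_point(daily_demand=40, lead_time_days=4, demand_sd=10)
    print(round(rop))  # → 193: 160 units of cycle stock plus 33 of safety stock
```

Shortening the lead time (the effect quick response aims for) shrinks both terms, which is why faster replenishment lets stores hold less stock for the same service level.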
[Figure 3: The retailer's point of view of the retailer-vendor relationship, ranging from the vendor holding the power (retailer is hostage) to retailer and vendor as partners (a very effective relationship).]
t retailers can spend less of their time and effort in accomplishing this portion of their operation, freeing time, energy, people, and financial resources for developing and maintaining customer relationships. According to Lewis (1995), these partnerships allow:

- Ongoing cost reductions
- Quality improvements
- Faster design cycle times
- Increased operating flexibility
- More value to the customer's customer
- More powerful competitive strategies

A partnership starts with a discussion based on "You are important to us and we are important to you. Now how can we do this better so we can both make more money?" To be successful in this new paradigm will require skills that have not yet been well developed.
5.2.
Integrated Forecasting, Planning, and Execution
Multiple forecasts for the same line of business within the organization are common (if any planning is done at all). The gap between what is planned and what actually happens represents lost profits and lost opportunities. The new paradigm for retail supply chain management begins with an accurate view of customer demand. That demand drives planning for inventory, production, and distribution within some understood error parameters. Consumers will never be completely predictable. At the same time, prediction is bounded by limitations in our statistical and modeling sciences. We can
predict only as well as our tools allow us. Computer advances will allow easier and more affordable management of millions of data points so that demand can be measured. Effective demand management represents an untapped and significant opportunity for most if not all retailers. Effective demand modeling allows greater forecast accuracy, increases supply chain effectiveness, reduces costs, and improves service levels, all of which reflect greater profit. The result: lower inventories, higher service levels, better product availability, reduced costs, satisfied customers, and more time available to respond to other areas of the organization.
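One way to make the "understood error parameters" above concrete is to track forecast accuracy explicitly. A minimal sketch using mean absolute percentage error (MAPE), a common accuracy measure; the demand and forecast figures are made up for illustration:

```python
# Mean absolute percentage error (MAPE): a simple summary of how far
# plans strayed from actual demand over past periods (sample data only).

def mape(actual, forecast):
    """Average of per-period |actual - forecast| / actual, as a percentage."""
    errors = [abs(a - f) / a for a, f in zip(actual, forecast)]
    return 100.0 * sum(errors) / len(errors)

if __name__ == "__main__":
    actual = [100, 120, 90, 110]    # units actually sold per period
    forecast = [110, 100, 95, 100]  # what was planned
    print(f"MAPE: {mape(actual, forecast):.1f}%")  # prints "MAPE: 10.3%"
```

Tracked period over period, a measure like this tells a planner whether demand modeling is actually improving and how much safety stock the remaining error still requires.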
5.3.
Statistical Techniques
Understandable, usable, and affordable software to accommodate millions and millions of individual consumer purchases has only recently become available. But that is only part of the equation. It is still unusual to find company executives who understand the capability and use of these sophisticated software packages. Of equal importance is the need to have a uniform database representing the forecasting target. Typically, the different data needed to accomplish accurate forecasting reside in isolated silos of databases that cannot talk to each other. In addition to decisions regarding the supply chain, these tools would:

- Provide the CEO with an important strategic view of the business
- Provide opportunities for cost savings
- Aid in developing long-range strategic plans
- Assist in resource allocation
- Identify inadequate areas in existing management and operations
5.4.
The Role of Senior Management
Logistics, warehouse, and other supply chain management efficiencies can result from a piecemeal decision-making process. But true partnerships must flow from a strategic alignment with the mission and values of the organization. Wal-Mart's industry-leading supply chain operation did not result from chance but from Sam Walton's understanding of the business and a strategic view that saw that investments in state-of-the-art technology and systems would pay off handsomely in the future. For most companies, the appointment of a supply chain czar is needed to push supply chain management through the organization.
5.5.
Information Technology
No advances in supply chain management can occur without a layer of middleware. Information and its collection, management, availability, and use serve as the foundation for all advances in supply chain management. Sustained success for retailers comes from an understanding of how information technology (IT) creates competitive advantage. Prepackaged enterprise-wide IT solutions are available, but they make it difficult to be different. Unique and strategic enterprise-wide IT solutions are difficult to integrate and implement. The IT system should allow new and better customer-vendor relations, provide new insights into a company's consumers and markets, maximally exploit efficiencies in the supply chain, transform product development and delivery, increase the capability to make real-time decisions, increase planning ability, manage increasingly complex national and global markets
customer write his or her own order, there are fewer errors made than with placing the order by calling the 800 number. The site tells customers whether the product is in stock at a local store that will deliver it the same day (or overnight) without the overnight shipping fee. The cost of selling a product is a fraction of what the costs are in other areas of Grainger's business, allowing greater discounts and greater profits. To encourage sales, account managers encourage their accounts to order on the Internet, and the salespeople still get commission on their accounts, even when products are ordered from the website. One of the biggest problems that retailers face in Internet retailing is the ability of manufacturers to go directly to the consumer. Why pay for a GE microwave from an appliance store on the Web when you can buy it from GE's website (probably for less money)? What is particularly interesting about Grainger is that by selling a wide variety of manufactured products, it gives the customer a side-by-side comparison of all available brands when the customer is searching. The customer can buy all the brands he or she wants conveniently and easily instead of having to go to each manufacturer's website and buy separately.
6.2.2.
The Internet Mindset
Yes, the Internet will spur many direct-to-consumer businesses. Yes, the Internet will allow existing retailers to expand their markets and provide a new channel for their existing companies. And yes, the Internet will allow retailers to develop efficiencies of time and money with their vendors and manufacturers. But more importantly, the Internet will force retail leaders to develop an Internet frame of mind that will allow challenges to the way business is being done at all levels, in all ways. The advances that will come from this change of mindset cannot be predicted, but they can be expected.
6.3.
One to One Marketing
A company's success at individualized product offerings means greater market penetration, greater market share, greater share of the consumer's wallet, and improved satisfaction as customers get exactly what they want instead of settling for what the retailer has. Mass customization / one to one marketing is the real final frontier for retailers. Responsive manufacturing, IT, and efficient replenishment systems allow Levi's to offer the consumer a pair of jeans made to his or her exact specifications. Here we have a vision of the ultimate in supply chain management. Customers get exactly what they want. A long-term relationship is established so that the customer has no reason to desert to a competitor. Because a customer is getting a unique product made just for him or her, a competitor cannot steal that customer simply by offering a price inducement. The jeans are not manufactured until they are needed, and a third party delivers the product. The very issues that supply chain management has been concerned with are reduced or eliminated by one to one marketing.
6.4.
The Product Comes Back: Reverse Logistics
If the supply chain is relatively invisible, the reverse supply chain (getting products that need repair or replacement back to the manufacturer) is practically ethereal. However, the costs of getting products back through these reverse channels make process and management issues extremely important to retailers. According to the National Retail Federation, the average return rate for products bought in specialty stores is 10%, and in department stores, 12%. Catalogs have a return rate three times higher. Early indications are that Internet purchases are twice that. Reverse logistics is the maximizing of value from returned products and materials. Frequently, the products and material put into the reverse supply chain can be refurbished or remanufactured. Xerox reports that recovering and remanufacturing saves over $200 million annually, which can then be passed along to customers in lower prices or to shareholders in higher earnings and profits. A product returned for repair can either be replaced (and the product broken down and its parts used in other manufacturing) or repaired. Repaired or returned products that cannot be sold as new can be resold, frequently as refurbished, using parts from other returned products. Unfortunately, retailers and manufacturers have made poor use of the strategic information available in returned products to change and improve design. Mostly the repairs are made or the broken product sits in a warehouse; the information about the product and the company sits unused. A growing area of concern is how to conscientiously dispose of products or their components. Black & Decker avoided significant landfill costs and made money by selling recyclable commodities. When there are no alternatives except disposal, the product or material needs to be appropriately scrapped. In an era of environmental concern (which will continue to grow and drive marketplace decisions), companies should take proactive action to develop environmentally sound landfill or incineration programs. Not only will this allow favorable consumer reactions, but it may forestall government regulation and control.
7.
THE FUTURE FOR SUPPLY CHAIN MANAGEMENT
The logic of supply chain management is compelling. Its benefits are well understood. The trend toward supply chain management is clear. Companies are excelling in parts of the equation. As retailing moves into the 21st century, more retailers will be adopting the following lessons from supply chain management as we see it today:

- Advances in and use of IT to drive supply chain decisions. Large integrated stock-replenishment systems will control the storage and movement of large numbers of different goods into the stores.
- A restructuring of existing distribution facilities, and strategic placement and development of new distribution facilities, to reduce inventory and move it more efficiently.
- Greater and more effective adoption of quick response. More frequent delivery of less.
- Greater use of cross-docking, so that merchandise hits the docks of the distribution centers and is immediately loaded into destination trucks. EDI and POS electronics will track what is selling and transmit it directly to the distribution center to plan shipment.

Technology and the Web will be driving the most innovative changes and applications in retail supply chain management in the next 10 years, and this will drive retail performance and excellence.
REFERENCES
Anderson, D., and Lee, H. (1999), "Synchronized Supply Chains: The New Frontier," Achieving Supply Chain Excellence Through Technology Project, www.ascet.com/ascet, Understanding the New Frontier.
Coopers & Lybrand (1996), European Value Chain Analysis Study, Final Report, ECR Europe, Utrecht.
Drayer, R. (1999), "Procter & Gamble: A Case Study," Achieving Supply Chain Excellence Through Technology Project, www.ascet.com/ascet, Creating Shareholder Value.
Fernie, J. (1990), Retail Distribution Management, Kogan Page, London.
Fernie, J., and Sparks, L. (1999), Logistics and Retail Management: Insights into Current Practice and Trends from Leading Experts, CRC Press, Boca Raton.
Gattorna, J., Ed. (1998), Strategic Supply Chain Management, Gower, Hampshire.
Kuglin, F. (1998), Customer-Centered Supply Chain Management: A Link-by-Link Guide, AMACOM, New York.
Lewis, J. (1995), The Connected Corporation, Free Press, New York.
Quinn, F. (1999), "The Payoff Potential in Supply Chain Management," Achieving Supply Chain Excellence Through Technology Project, www.ascet.com/ascet, Driving Successful Change.
Riggs, D., and Robbins, S. (1998), The Executive's Guide to Supply Chain Strategies: Building Supply Chain Thinking into All Business Processes, AMACOM, New York.
10. DRIVER SCHEDULING
  10.1. Tractor-Trailer-Driver Schedules
  10.2. Driver-Scheduling Problem
    10.2.1. Notation
  10.3. Set-Partitioning Formulation with Side Constraints
  10.4. Set-Covering Formulation with Soft Constraints
  10.5. Column-Generation Methodology
  10.6. Iterative Process for Solving the Driver-Scheduling Problem with Column Generation
  10.7. Integrated Iterative Process for Solving the Driver-Scheduling Problem
  10.8. Generation of Schedules
  10.9. Beyond Algorithms
11. QUALITY IN TRANSPORTATION
12. TECHNOLOGY
  12.1. Vehicle Routing
  12.2. Information Gathering and Shipment Tracking
  12.3. New Trends: Intelligent Transportation Systems (ITS)
REFERENCES
1.
OVERVIEW
Transportation and distribution play a critical role in the successful planning and implementation of today's supply chains. Although many view the transportation of goods as a non-value-added activity, effective transportation planning and execution will not only enhance a company's productivity but will also increase customer satisfaction and quality. In this chapter, we will explore the factors that impact the transportation of goods and the different tools and techniques that the industrial engineer can apply in the development of effective transportation networks and systems to reduce or minimize costs, improve cycle time, and reduce service failures. A similar but inherently different aspect of transportation is the transporting of people. Although this chapter is concerned with the transportation of goods, the industrial engineer also plays an important role in designing those types of systems. Today's logistics activities are concerned with the movement of goods, funds, and information. Information technology has now become an integral component of any transportation system. Technology is being used for scheduling and creating complex delivery and pickup routes and also for providing customers with up-to-the-minute information on the status of their shipments. The industrial engineer will not only aid in the development of efficient delivery routes and schedules, but will also help in the design of state-of-the-art transportation information systems.
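The route-building task mentioned above can be illustrated with the simplest of heuristics: nearest-neighbor ordering, which always drives to the closest unvisited stop. This is only a sketch with hypothetical coordinates; the routing software discussed later in the chapter uses far more sophisticated methods and handles constraints such as time windows and capacities:

```python
# Nearest-neighbor heuristic for ordering delivery stops (toy coordinates).
import math

def route(depot, stops):
    """Return the stops ordered by repeatedly visiting the closest one left."""
    order, current, remaining = [], depot, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        order.append(nxt)       # drive to the nearest unvisited stop
        remaining.remove(nxt)
        current = nxt
    return order

if __name__ == "__main__":
    depot = (0, 0)
    stops = [(5, 5), (1, 0), (2, 3), (6, 1)]
    print(route(depot, stops))  # → [(1, 0), (2, 3), (5, 5), (6, 1)]
```

Nearest-neighbor is greedy and can produce routes well above optimal, which is precisely why real dispatch systems layer improvement procedures and exact formulations on top of such starting solutions.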
2.
INTRODUCTION
Transport is the process of transferring or conveying something from one place to another. Transportation is the process of transporting. The transportation of people, goods, funds, and information plays a key role in any economy, and the industrial engineer can play a key role in properly balancing the parameters and constraints that affect the effectiveness and efficiency of transportation systems. This chapter will cover certain applications of industrial engineering in transportation. The emphasis is placed on the movement of goods. However, the methodologies described in the chapter can be applied to a variety of transportation problems. Transportation plays a critical role in today's increasingly small world. Economies, businesses, and personal travel are, in one word, global. Information on the status of the items or persons being moved is as crucial as the movement itself. Confirmation of delivery in a timely, electronic form is often as important as on-time, damage-free, value-priced arrival. The industrial engineer can apply a variety of mathematical and engineering tools and techniques in the planning and management of effective transportation networks and systems in order to reduce or minimize costs, improve cycle time, reduce service failures, and so on. The industrial engineer plays a critical role in the development of efficient delivery routes, schedules, and plans and also helps in the design and implementation of transportation information systems.
3.
TRANSPORTATION AND INDUSTRIAL ENGINEERING
3.1.
Transportation as a System
Designing a transportation system means, in most cases, the design of multiple integrated systems: systems to move goods or people, systems to move information about goods or people, and systems
ter the costs and quality of service, the greater the demand and, therefore, the greater the need to alter the service territory and the frequency of service. Transportation planning, execution, and measurement are the fundamental functions associated with transportation and are integral to the success of the transportation system. Planning is needed across all areas of the transportation system. Planning the overall distribution network, regional (e.g., certain geography) planning, and site (e.g., local) planning are crucial. Asset planning, including buildings and facilities, vehicles, aircraft, trailers, and materials-handling equipment, is also required. Demand variations drive decisions on asset quantity and capacity. The use of owned vehicles supplemented by leased vehicles during peak (high-demand) times is an example of the decisions that need to be made during the planning process. The impact of demand on scheduling, vehicle routing, and dispatching drives labor and asset needs. Labor can often be the largest component of cost in the transportation industry. The transportation planning activity is charged with developing plans that, while minimizing costs, meet all service requirements.
The execution of the plan is as important as the planning of the transportation
process itself. An excellent plan properly executed results in lower costs and h
igher quality of service. This in turn drives demand and therefore the need for
new plans. Finally, the proper measurement of the effectiveness of the transport
ation system, in real time, offers a mechanism by which transportation managers,
supply chain specialists, and industrial engineers can constantly reduce costs
and improve service. With new technologies such as wireless communication, globa
l positioning systems (GPS), and performance-monitoring technology, measurement
systems allow the users to improve transportation processes as needed.
5. THE ROLE OF THE INDUSTRIAL ENGINEER IN TRANSPORTATION PLANNING AND TRANSPORTATION OPERATIONS
The role of the industrial engineer in the transportation industry is primarily
to aid the organization in providing a high level of service at a competitive pr
ice. The industrial engineer has the skills necessary to assist in many areas th
at impact the effectiveness of the transportation system:
- Work measurement and methods analysis: labor is a large component of the total cost of service
As indicated by this list, the industrial engineer adds great value to the trans
portation industry. Industrial engineering principles are highly applicable in t
his complex industry. Industrial engineers continue to serve as key members of t
he teams responsible for the design and integration of systems to facilitate the
movement of goods, people, funds, and information in this increasingly competit
ive industry.
6.
TRANSPORTATION AND THE SUPPLY CHAIN
Supply chain management is a comprehensive concept capturing the objectives of functional integration and strategic deployment as a single managerial process. Figure 1 depicts the supply chain structure. This structure has been in place for decades. Even when the manufacturing activity takes place in minutes, the final delivery of a product may take days, weeks, or months, depending on the efficiency of the supply chain. The operating objectives of a supply chain are to maximize response, minimize variance, minimize inventory, maximize consolidation, maintain high levels of quality, and provide life-cycle support. Transportation is part of physical distribution. The physical distribution components include transportation, warehousing, order processing, facility structure, and inventory. The major change in
minimized directly or not, the model may include time constraints (e.g., time windows, total time of a driver's route), or the time element may be incorporated in the input data of the model (e.g., included in the underlying network). There is a tradeoff between transportation cost and transportation time. In recent decades, shorter product cycles in manufacturing and notions like just-in-time inventory have resulted in an increasing demand for shorter transportation times, even at higher prices and higher transportation cost. The small-package transportation industry has responded with many premium services that guarantee short transportation times. Even when the transportation time is not guaranteed, it represents a primary element of quality of service, along with other considerations such as minimization of lost and damaged goods, which are often handled indirectly or are external to the cost optimization models.
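The cost-time tradeoff above can be made concrete as a constrained choice: among the available service options, pick the cheapest one whose transit time meets the shipper's deadline. In this sketch the service names, transit times, and prices are all invented for illustration:

```python
# Choose the cheapest transportation service that meets a transit-time limit
# (all service names, times, and costs below are hypothetical).

services = [
    {"name": "next-day air", "days": 1, "cost": 42.00},
    {"name": "two-day air",  "days": 2, "cost": 24.00},
    {"name": "ground",       "days": 5, "cost": 9.50},
]

def cheapest_within(deadline_days):
    """Lowest-cost service with transit time <= deadline, or None."""
    feasible = [s for s in services if s["days"] <= deadline_days]
    return min(feasible, key=lambda s: s["cost"]) if feasible else None

if __name__ == "__main__":
    print(cheapest_within(3)["name"])  # prints "two-day air": cheapest in time
    print(cheapest_within(7)["name"])  # prints "ground": slack time, lowest cost
```

The tradeoff appears directly in the output: relaxing the deadline from three days to seven cuts the cost from $24.00 to $9.50, while tightening it forces the premium service.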
7.2.
Integrating Customer Needs in Transportation Planning
Although we often try to minimize cost in transportation planning, what we really want to achieve is maximization of profit (revenue minus cost). Cost minimization assumes that demand is external to the model and unaffected by the solution obtained. Demand remains at assumed levels only if the obtained solution satisfies customer requirements and customer needs are integrated into the transportation planning process. Excluding price and transportation time, many customer needs are not easily incorporated into a mathematical model. They may be included in the input data (e.g., different types of service offered) or considered when alternative solutions obtained by optimization are evaluated. Flexibility, a good set of transportation options, and good communication are of primary importance to customers, permitting them to effectively plan their own operations, pickups, and deliveries.
7.3.
Forecasting in Transportation Planning
The development of effective transportation plans is highly dependent on our ability to forecast demand. Demand, in the case of the transportation of goods, refers to the expected number of shipments, the number of packages associated with a shipment, and the frequency with which such shipments occur. When developing transportation routes and driver schedules, demand levels are used as input and therefore need to be forecasted. Changes in demand can occur randomly or can follow seasonal patterns. In either case, if an accurate forecast is not produced, the transportation planning effort will yield less accurate results. These results have implications in the design of facilities (e.g., capacity), the acquisition of assets (e.g., delivery vehicles), and the development of staffing plans (e.g., labor requirements). It is important to note that several factors affect demand in the transportation industry. Business cycles, business models, economic growth, the performance of the shipper's business, competition, advertising, sales, quality, cost, and reputation all have a direct impact on the demand for transportation services. When developing a forecast, the planner must address some basic questions:
1. Does a relationship exist between the past and the future?
2. What will the forecast be used for?
3. What system is the forecast going to be applied to?
4. What is the size of the problem being addressed?
5. What are the units of measure?
6. Is the forecast for short-range, medium-range, or long-range planning purposes?
Once these questions have been addressed, the steps shown in Figure 3 guide the planner towards the development of a forecasting model that can be used in generating the forecast to support the transportation planning process. Depending on the type of transportation problem being solved (e.g., local pickup and delivery operations, large-scale network planning), the user may select different forecasting techniques and planning horizons. For long-range network planning in which the planner is determining the location of future distribution centers, long-range forecasts are required. Sales studies, demographic changes, and economic forecasts aid in the development of such forecasts. When developing aggregate plans in order to determine future staffing needs and potential facility-expansion requirements, the planner develops medium-range forecasts covering planning horizons that range from one or two quarters to a year. Known techniques such as time series analysis and regression are usually applied to historical demand information to develop the forecast. Finally, in order to develop weekly and daily schedules and routes to satisfy demand in local pickup and delivery operations, the planner may develop daily, weekly, or monthly forecasts (short-range forecasts). Because demand fluctuations can have a direct impact on the effectiveness of pickup and delivery routes, the use of up-to-date information from shippers is critical. Techniques such as exponential smoothing and trend extrapolation facilitate this type of forecasting.
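As an illustrative sketch of the short-range techniques mentioned above (the demand figures are invented, and the smoothing constant alpha is a tuning choice), simple exponential smoothing produces a next-period forecast from a demand history:

```python
def exp_smooth(history, alpha=0.3):
    """Simple exponential smoothing: the next-period forecast blends
    the latest observation with the previous forecast; alpha in (0, 1]
    controls how quickly older demand is discounted."""
    forecast = history[0]              # initialize with first observation
    for demand in history[1:]:
        forecast = alpha * demand + (1 - alpha) * forecast
    return forecast

# One week of daily package counts for a route (illustrative numbers)
daily_packages = [120, 132, 101, 134, 90, 110, 128]
print(round(exp_smooth(daily_packages), 1))   # 116.6
```

A larger alpha reacts faster to recent swings in demand; a smaller alpha damps random day-to-day fluctuations, which is usually what a pickup-and-delivery planner wants.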
794
TECHNOLOGY
These windows are usually one-sided: a delivery must occur before a given time and a pickup after a given time. Some windows, though, may be wide open (e.g., a delivery must occur by the end of the workday) and other windows may be two-sided (a pickup must occur before an early closing time). Vehicle capacities are not a limitation in this problem; therefore, the fleet of vehicles may be considered homogeneous if the difference in type of vehicle does not have a significant effect on cost. Maximum on-road time for the workday is an important limitation, where on-road time is defined as the total elapsed time from the moment a vehicle leaves the depot until it returns to the depot. The importance of on-road time derives from the fact that drivers are often paid overtime for hours of work beyond a certain length (e.g., 8 hours), and work rules prohibit workdays beyond a certain length (e.g., 10 hours). Efficient distribution implies good asset utilization. The primary assets in this problem are vehicles and drivers. Minimization of the number of drivers and of the total cost of the routes are frequently used objectives. Efficiency needs to be achieved while level of service is maintained; that is, service is provided to all customers in such a way as to satisfy all time-window constraints. This problem has a very short-term planning horizon. Since customer stops may differ from day to day and all the stops are known only a few hours before the actual pickup-and-delivery operation, the problem needs to be solved a few hours before implementation.
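The time-window and on-road-time restrictions described above amount to a feasibility test on any proposed route. A minimal sketch of such a check (the names, data structures, and numbers here are illustrative, not from the text):

```python
def route_feasible(route, travel, windows, service, start, max_on_road):
    """Check one route (depot, stop, ..., depot) against time windows
    and the on-road-time cap. travel[(a, b)] is the drive time from a
    to b in minutes; windows[s] = (earliest, latest) allowed service
    start at s; a vehicle that arrives early waits for the window."""
    t = start
    for a, b in zip(route, route[1:]):
        t += travel[(a, b)]
        if b in windows:
            earliest, latest = windows[b]
            if t > latest:
                return False              # time window missed
            t = max(t, earliest)          # wait for a one-sided window
        t += service.get(b, 0)
    return t - start <= max_on_road

travel = {('D', 'A'): 30, ('A', 'B'): 20, ('B', 'D'): 25}
windows = {'A': (0, 100), 'B': (60, 120)}
service = {'A': 10, 'B': 10}
print(route_feasible(('D', 'A', 'B', 'D'), travel, windows, service, 0, 480))   # True
```

Tightening the window at B or the on-road cap turns the same route infeasible, which is exactly the distinction the routing algorithms below must respect at every insertion.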
8.2.
Modeling
The pickup-and-delivery problem can be modeled as a variation of the vehicle routing problem with time windows (VRPTW) and a single depot. Inputs for the VRPTW include matrices that specify the distance and travel time between every pair of customers (including the depot); the service time and time window for each customer; the maximum (or specified) number of drivers; the starting time of the workday; and the maximum on-road time of the workday. The maximum on-road time of the workday can be implemented as a time window on the depot, so that the pickup-and-delivery problem described above can be considered as a traveling salesman problem with time windows and multiple routes (m-TSPTW). However, the term VRPTW will be used in this section for this uncapacitated pickup-and-delivery problem. Distances and travel times can be derived from the longitude and latitude of the customers and depot, assuming specific speed functions. Alternatively, distances and travel times can be obtained from an actual street network; this latter method provides greater accuracy. Longitude and latitude values are relatively easy to obtain; it is often considerably more difficult to acquire data for a street network. The objective in the VRPTW is to minimize the number of drivers and/or minimize the total cost of the routes while satisfying all constraints. Cost is a function of total distance and on-road time. The output is a set of routes, each of which specifies a sequence of customers and starts and ends at the depot. Because of the short-term planning horizon, the drivers need to be notified for work before the problem is solved. In many cases, therefore, the number of drivers may be estimated and assumed given; the objective then becomes to minimize total cost. The number of drivers is unlikely to be changed daily. However, the transportation of small packages is characterized by seasonality of demand. In the United States, demand increases by 40 to 50% during the months before Christmas. A pickup-and-delivery model can be used to obtain the minimum number of additional drivers that must be used to maintain level of service when demand increases.

The VRPTW is an extension of the traveling salesman problem with time windows (TSPTW), which in turn is an extension of the classical traveling salesman problem (TSP). The TSP and its variants have been studied extensively, and many algorithms have been developed based on different methodologies (Lawler et al. 1985). The TSP is NP-complete (Garey and Johnson 1979) and therefore is presumably intractable. For this reason, it is prohibitively expensive to solve large instances of the TSP optimally. Because the TSPTW (and the VRPTW with a maximum number of available drivers) are extensions of the TSP, these problems are NP-complete as well. In fact, for the TSPTW and VRPTW, not only is the problem of finding an optimal solution NP-complete; so is the problem of even finding a feasible solution (a solution that satisfies all time-window constraints) (Savelsbergh 1985). For more on time-constrained routing and scheduling problems, see Desrosiers et al. (1995). Existing exact algorithms for the VRPTW have been reported to solve problems with up to 100 stops. However, the problems encountered in parcel delivery are often substantially larger than this. Moreover, these problems need to be solved quickly because of the short-term planning horizon. For these reasons, much of the work in this area has focused on the development of heuristic algorithms, that is, algorithms that attempt to find good solutions instead of optimal solutions. Heuristic algorithms for the TSPTW and VRPTW are divided into two general categories: route-construction heuristics and route-improvement heuristics. The first type of heuristic constructs a set of routes for a given set of customers. The second type starts from an existing feasible solution (set of routes) and attempts to improve this solution. Composite procedures employ both approaches.
There are two general strategies that a route-construction algorithm for the VRPTW can adopt. The first strategy is cluster first, route second, which first assigns stops to drivers and then constructs a sequence for the stops assigned to each driver. The second strategy carries out the clustering and sequencing in parallel. Because clustering is based primarily on purely spatial criteria, the cluster-first, route-second strategy is often more appropriate for the vehicle routing problem (VRP), where time windows are not present, than for the VRPTW. However, cluster-first strategies can play a useful role in some instances of the VRPTW, as will be discussed later. For now, we will confine our attention to algorithms that carry out clustering and sequencing in parallel. Within this general class of algorithms, there is a further distinction between sequential heuristics, which construct one route at a time until all customers are routed, and parallel heuristics, which construct multiple routes simultaneously. In each case, one proceeds by inserting one stop at a time into the emerging route or routes; the choice of which stop to insert and where to insert it is based on heuristic cost measures. Recent work suggests that parallel heuristics are more successful (Potvin and Rousseau 1993), in large part because they are less myopic than sequential approaches in deciding customer-routing assignments. In the following discussion, we focus on heuristic insertion algorithms for the VRPTW. Solomon (1987) was the first to generalize a number of VRP route-construction heuristics to the VRPTW, and the heuristic insertion framework he developed has been adopted by a number of other researchers. Solomon himself used this framework as the basis for a sequential algorithm. In what follows, we briefly describe Solomon's sequential algorithm and then turn to extensions of the generic framework to parallel algorithms.
8.4.
A Sequential Route-Construction Heuristic
This heuristic builds routes one at a time. As presented by Solomon (1987), sequential route construction proceeds as follows:
1. Select a seed for a new route.
2. If not all stops have been routed, select an unrouted stop and a position on the current route that have the best insertion cost. If a feasible insertion exists, make the insertion; else go to step 1.
In step 1, the heuristic selects the first stop of a new route (the seed). There are various strategies for selecting a seed. One approach is to select the stop that is farthest from the depot; another is to select the stop that is most urgent in the sense that its time window has the earliest upper bound. In step 2, the heuristic determines the unrouted stop to be inserted next and the position at which it is to be inserted on the partially constructed route. In order to make this determination, it is necessary to compute, for each unrouted stop, the cost of every possible insertion point on the route. Solomon introduced the following framework for defining insertion cost metrics. Let (s0, s1, . . . , sm) be the current partial route, with s0 and sm representing the depot. For an unrouted stop u, c1(si, u, sj) represents the cost of inserting u between consecutive stops si and sj. If the insertion is not feasible, the cost is infinite. For a given unrouted stop u, the best insertion point (i(u), j(u)) is the one for which

c1(i(u), u, j(u)) = min_{p=1,...,m} [c1(s_{p-1}, u, s_p)]

The next stop u* to be inserted is the one for which

c1(i(u*), u*, j(u*)) = min_u [c1(i(u), u, j(u))]

and u* is then inserted in the route between i(u*) and j(u*). Possible definitions for the cost function c1 are considered later.
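The two steps above can be sketched in code. The fragment below is a simplified illustration, not Solomon's full algorithm: it seeds each route with the farthest unrouted stop, uses the pure distance increase as the insertion cost c1, and replaces the time-window feasibility test with a cap on route length:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def route_len(route):
    return sum(dist(a, b) for a, b in zip(route, route[1:]))

def sequential_routes(depot, stops, max_len):
    """Build routes one at a time. Step 1 seeds a new route with the
    unrouted stop farthest from the depot; step 2 repeatedly makes the
    cheapest feasible insertion. Here 'feasible' means the route stays
    under a length cap, a stand-in for the time-window checks."""
    unrouted = set(stops)
    routes = []
    while unrouted:
        seed = max(unrouted, key=lambda s: dist(depot, s))    # step 1
        route = [depot, seed, depot]
        unrouted.discard(seed)
        while True:                                           # step 2
            best = None                       # (cost, stop, position)
            for u in unrouted:
                for p in range(1, len(route)):
                    delta = (dist(route[p - 1], u) + dist(u, route[p])
                             - dist(route[p - 1], route[p]))
                    if (route_len(route) + delta <= max_len
                            and (best is None or delta < best[0])):
                        best = (delta, u, p)
            if best is None:
                break
            route.insert(best[2], best[1])
            unrouted.discard(best[1])
        routes.append(route)
    return routes
```

With a generous length cap every stop fits on one route; tightening the cap forces the heuristic to return to step 1 and open additional routes.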
8.5.
A Parallel Route-Construction Heuristic
A parallel route-construction heuristic that extends Solomon's sequential algorithm is presented next; the presentation is based on ideas presented by Potvin and Rousseau (1993, 1995) and Russell (1995).
1. Run the sequential algorithm to obtain an estimate of the number of routes k. We can also retain the k seeds obtained by the sequential algorithm. Alternatively, once one has obtained an estimate of the number of routes, one can use some other heuristic to generate the seeds. A representative method is the seed-point-generation procedure of Fisher and Jaikumar (1981). The result of this process is a set of k routes that start from the depot, visit their respective seeds, and return to the depot.
2. If there are no unrouted stops, the procedure terminates. Otherwise, for each unrouted stop u and each partially constructed route r = (s0, . . . , sm), find the optimal insertion point (p_r(u), q_r(u)) for which

c1(p_r(u), u, q_r(u)) = min_{j=1,...,m} [c1(s_{j-1}, u, s_j)]

for some cost function c1. For each unrouted u, its optimal insertion point (p(u), q(u)) is taken to be the best of its route-specific insertion points:

c1(p(u), u, q(u)) = min_r [c1(p_r(u), u, q_r(u))]

Select for actual insertion the stop u* for which

c2(p(u*), u*, q(u*)) = optimal_u [c2(p(u), u, q(u))]

for some cost function c2 that may (but need not) be identical to c1. If a feasible insertion exists, insert the stop u* at its optimal insertion point and repeat step 2. Otherwise, go to step 3.
3. Increase the number of routes by one and go to step 2.

Possible definitions for c1 and c2 are presented below; again, the presentation here follows that of Potvin and Rousseau (1993).
We can measure the desirability of inserting u between i and j as a weighted combination of the increase in travel time and cost that would result from this insertion. Thus, let

d(i, u, j) = d_iu + d_uj − d_ij = the increase in travel cost from stop i to stop j
t(i, u, j) = e_{u,j} − e_j = the delay in service start at j, where e_j is the current service start time at j and e_{u,j} is the new service start time at j, given that u is now on the route

Then a possible measure of the cost associated with the proposed insertion is given by

c1(i, u, j) = α1 d(i, u, j) + α2 t(i, u, j)    (1)

where α1 and α2 are constants satisfying α1 + α2 = 1, α1, α2 ≥ 0.
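Formula (1) is trivial to compute once the distance and service-start quantities are known. A direct transcription, with illustrative argument names and the Greek weights written as a1 and a2:

```python
def c1(d_iu, d_uj, d_ij, e_j_new, e_j_cur, a1=0.5, a2=0.5):
    """Insertion cost of formula (1): a1 * d(i,u,j) + a2 * t(i,u,j),
    with d(i,u,j) = d_iu + d_uj - d_ij the extra travel and
    t(i,u,j) = e_j_new - e_j_cur the delay of service start at j.
    The weights must satisfy a1 + a2 = 1 and a1, a2 >= 0."""
    assert abs(a1 + a2 - 1) < 1e-9 and a1 >= 0 and a2 >= 0
    return a1 * (d_iu + d_uj - d_ij) + a2 * (e_j_new - e_j_cur)

# Equal weights: 3 units of extra travel and a 5-minute delay at j
print(c1(5, 4, 6, 20, 15))   # 4.0
```

Setting a1 = 0 and a2 = 1 makes the measure purely time-based, the choice used in the worked example later in this section.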
One way of defining the measure c2 is simply by setting c2 = c1. In this case, the optimal c2-value is a minimum. An alternative approach is to define c2 as a maximum regret measure. A regret measure is a kind of look-ahead heuristic: it helps select the next move in a search procedure by heuristically quantifying the negative consequences that would ensue if a given move were not selected. In the context of vehicle routing, a simple regret measure for selecting the next customer to be routed might proceed as follows. For each unrouted customer, the regret is the difference between the cost of the best feasible insertion point and the cost of the second-best insertion point; the next customer to be added to a route is one whose regret measure is maximal. But this is still a relatively shortsighted approach. One obtains better results by summing the differences between the best alternative and all other alternatives (Potvin and Rousseau 1993; Kontoravdis and Bard 1995). The regret measure that results from this idea has the form

c2(u) = Σ_{r ≠ r*} [c1(p_r(u), u, q_r(u)) − c1(p_{r*}(u), u, q_{r*}(u))]    (2)

where r* denotes the route containing the best insertion point for u. Underlying the use of this regret measure is a notion of urgency: the regret measure for a stop u is likely to be high if u has relatively few feasible or low-cost insertion points available. We would like
routed node, the best feasible insertion cost for each of the three current routes is computed from formula (1), where α1 = 0, α2 = 1, and time is used as the cost measure. This computation obtains for node 1 the best insertion costs (55, 63, 53) for the three routes. Hence, the best feasible insertion of node 1 is into route 3 (r* = 3 in formula (2)). The regret measure for node 1 using formula (2) is calculated as follows:

c2(1) = (55 − 53) + (63 − 53) = 12

The regret measures for the remaining unrouted nodes are computed similarly. The regret values for each node are shown in the following array, where x indicates a node that has already been included in a route:

Regrets = (12, 16, 52, x, 100, x, 3, 23, 96, 29, 86, 21, 31, 1, 24, 50, 48, x)

At each iteration of the procedure, the next stop to be inserted into a route is a stop with the maximal regret measure. In the present case, node 5 has the maximal regret. Hence, the procedure inserts node 5 into its best feasible insertion point (first stop after the depot on route 1). After node 5 is inserted, the three routes under construction become (0, 5, 18, 0), (0, 6, 0), and (0, 4, 0). The algorithm repeats this process until all nodes have been inserted. The three routes returned by the procedure are shown in Table 3 and Figure 4.
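The regret computation in this example is easy to reproduce. Given the vector of best feasible insertion costs for a node (one per route), formula (2) reduces to summing each cost's gap to the overall best:

```python
def regret(best_costs):
    """Regret of formula (2) for one unrouted node: the sum, over all
    routes other than the best one, of the gap between that route's
    best feasible insertion cost and the overall best. Routes with no
    feasible insertion (infinite cost) are assumed filtered out."""
    best = min(best_costs)
    return sum(c - best for c in best_costs)

# Node 1 of the worked example: best insertion costs 55, 63, 53
print(regret([55, 63, 53]))   # (55 - 53) + (63 - 53) = 12
```

The best route contributes a zero term, so summing over all routes and summing over r ≠ r* give the same value.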
8.7.
Infeasible Problems
One of the inputs to a typical instance of the VRPTW is the maximum number of drivers available. In the case where the time windows are relatively tight, an algorithm may not be able to find a feasible solution (i.e., a solution in which all time windows are satisfied). The way one proceeds in this case depends on the nature of the application. In some applications, where there is a service guarantee and the same penalty applies no matter how close to meeting the service commitment a late delivery happens to be, we wish to minimize the number of missed time windows. In other applications, where the penalty is proportional to the lateness of a delivery, we wish to minimize,
TABLE 3 Solution Routes

Route 1: 0, 5, 18, 16, 13, 15, 0
Route 2: 0, 12, 11, 6, 9, 17, 3, 0
Route 3: 0, 2, 14, 7, 1, 8, 10, 4, 0

(The table also gives the time window and arrival time at each node.)
startTime_c = startTime_a + svcTime_a + svcTime_b + t_ac
Recall that svcTime_b is just the duration of the break. This is the basic picture, but there is at least one complication that we have to take into account. Suppose, as before, that we wish to insert break node b between a and c. Suppose also that the travel time from a to c is 30 minutes, that service is completed at a at 10:45, and that the time window for b is [11:00, 1:00]. Then, according to the simple scheme just described, we would have to wait for 15 minutes at location a before starting lunch at a! To avoid this sort of awkward behavior, we would actually insert b at the first point on segment (a, c) such that there is no waiting time at b. If there is no such point, then we conclude that no lunch break can be inserted between stops a and c. For example, in the scenario just described, we would insert b halfway between a and c. When a break has been fully represented this way as a node with time window, location, and service time, it is treated just like any other node within the heuristic insertion framework.
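The placement rule just described can be stated as a small calculation. Assuming travel time accrues linearly along the segment (an assumption of this sketch, not stated in the text), the break node goes at the following fraction of the leg from a to c:

```python
def break_position(depart_a, window_open, travel_ac):
    """Fraction of the way along segment (a, c) at which to place the
    break node b so that the vehicle arrives at b exactly when (or
    after) b's window opens, i.e., with no waiting. Times are minutes;
    travel time is assumed to accrue linearly along the segment.
    Returns None when no such point exists between a and c."""
    if depart_a >= window_open:
        return 0.0                 # the break can start right at a
    if depart_a + travel_ac < window_open:
        return None                # even reaching c is too early
    return (window_open - depart_a) / travel_ac

# The text's scenario: leave a at 10:45, 30-minute leg, lunch window
# opens at 11:00 -> the break node goes halfway along (a, c).
print(break_position(10 * 60 + 45, 11 * 60, 30))   # 0.5
```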
8.9.
Route-Improvement Heuristics
Heuristic route-improvement procedures play a fundamental role in vehicle routing algorithms. Such procedures take as input a feasible solution consisting of a route or set of routes and seek to transform this initial solution into a lower-cost feasible solution. One of the most successful route-improvement strategies has involved the use of edge-exchange heuristics. The edge-exchange technique was introduced by Lin and Kernighan (1973) as a local search strategy in the context of the conventional traveling salesman problem, but it has since been applied to a variety of related problems, including the vehicle routing problem. For an individual route, a k-exchange involves replacing k edges currently on the route by k other edges. For example, a 2-exchange involves replacing two edges (say (i, i+1) and (j, j+1)) by two other edges ((i, j) and (i+1, j+1)). Usually, all the available k-exchanges are examined and the best one is implemented. This is repeated as long as an improved solution is obtained. Since there are (n choose k) subsets of k edges in a cycle of n edges, the computational complexity of the edge-exchange method increases rapidly with k. Even one k-exchange requires O(n^k) time, so attention is usually confined to the cases k = 2 and k = 3. The idea of an edge exchange can be extended in a straightforward way to the case of pairs of routes. In this case, one exchanges entire paths between routes, where a path consists of a sequence of one or more nodes. For example, let routeA = (a1, a2, . . . , ak) and routeB = (b1, b2, . . . , bn). Then, after exchanging paths between the two routes, the route that results from routeA would be of the form (a1, . . . , a_i, b_{j+1}, . . . , b_{j+r}, a_m, . . . , a_k), and similarly for routeB. As mentioned, the goal of a route-improvement heuristic is to reduce routing costs; typically this means that we wish to minimize route duration. However, when we are dealing with time windows, we must also verify that any proposed exchange retains time-window feasibility. Techniques for efficient incorporation of time-window constraints into edge-exchange improvement methods were developed by Savelsbergh (1985, 1990, 1992); see also Solomon et al. (1988).

The most recent work on route improvement has focused on the development of metaheuristics, which are heuristic strategies for guiding the behavior of more conventional search techniques, with a view towards avoiding local optima and thereby achieving higher-quality solutions. Metaheuristic techniques include tabu search, genetic algorithms, simulated annealing, and greedy randomized adaptive search (Glover 1989, 1990; Kontoravdis and Bard 1995; Potvin et al. 1996; Rochat and Taillard 1995; Taillard et al. 1997; Thangiah et al. 1995). To illustrate the pertinent concepts, we focus here on tabu search. Tabu search helps overcome the problem of local optimality, which often arises when traditional deterministic optimization heuristics are applied. The problem of local optimality arises in the following way. A wide range of optimization techniques consists of a sequence of moves that lead from one trial solution to another. For example, in a vehicle-routing algorithm, a trial solution consists of a route for each vehicle, and a move, in the route-improvement phase, might consist of some sort of interroute exchange. A deterministic algorithm of this general type selects a move that will most improve the current solution. Thus, such a procedure climbs a hill through the space of solutions until it cannot find a move that will improve the current solution any further. Unfortunately, while a solution discovered with this sort of hill-climbing approach cannot be improved through any local move, it may not represent a global optimum. Tabu search provides a technique for exploring the solution space beyond points where traditional approaches become trapped at a local optimum. Tabu search does not supplant these traditional approaches. Instead, it is designed as a higher-level strategy that guides their application. In its most basic form, the tabu method involves classifying certain moves as forbidden (or tabu); in particular, a move to any of the most recently generated solutions is classified as tabu.
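The interplay between edge exchanges and the tabu mechanism can be sketched on a single route. The fragment below is a deliberately minimal illustration (fixed iteration count, 2-exchange segment-reversal moves, a recency-based tabu list with a simple aspiration rule), not any of the published algorithms cited above:

```python
import itertools

def tour_len(tour, d):
    return sum(d[a][b] for a, b in zip(tour, tour[1:] + tour[:1]))

def tabu_2opt(tour, d, iters=50, tenure=5):
    """Tabu search guiding 2-exchange (segment reversal) moves on one
    tour. The best non-tabu move is applied even when it worsens the
    tour, which is what lets the search leave a local optimum; a move
    stays tabu for `tenure` iterations unless it would improve on the
    best solution found so far (a simple aspiration criterion)."""
    best, best_cost = tour[:], tour_len(tour, d)
    cur = tour[:]
    tabu = {}                 # move -> iteration until which it is tabu
    for it in range(iters):
        move, move_cost = None, float("inf")
        for i, j in itertools.combinations(range(1, len(cur)), 2):
            cand = cur[:i] + cur[i:j][::-1] + cur[j:]
            cost = tour_len(cand, d)
            if tabu.get((i, j), -1) >= it and cost >= best_cost:
                continue      # tabu, and aspiration does not apply
            if cost < move_cost:
                move, move_cost = (i, j), cost
        if move is None:
            break
        i, j = move
        cur = cur[:i] + cur[i:j][::-1] + cur[j:]
        tabu[move] = it + tenure
        if move_cost < best_cost:
            best, best_cost = cur[:], move_cost
    return best, best_cost

# A crossing 4-stop tour on the unit square is uncrossed to length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
d = [[((px - qx) ** 2 + (py - qy) ** 2) ** 0.5 for qx, qy in pts]
     for px, py in pts]
tour, cost = tabu_2opt([0, 2, 1, 3], d)
print(cost)   # 4.0
```

In a VRPTW setting each candidate move would additionally be screened by a time-window feasibility check of the kind sketched earlier.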
such a way that all potential customers are assigned to some driver. Note that a set of a priori tours of this sort does provide the kind of regularity of service described above. At first glance, the task of calculating E[L_τ] for a given a priori tour τ may appear problematic because the summation is over all 2^n subsets of V. However, Jaillet derived an efficient closed-form expression for E[L_τ], which requires only O(n^2) time to compute (see discussion in Jaillet 1988). Of course, being able to calculate E[L_τ] efficiently for a given a priori tour is very different from actually finding a tour that minimizes E[L_τ]. Because the PTSP is at least as hard as the TSP, there is little hope of developing exact optimization methods that could solve more than modestly sized instances of the problem. Consequently, one must employ heuristics in order to develop practically viable PTSP algorithms. However, it has proven difficult to develop effective heuristic approaches to the PTSP, at least in part because the class of optimal solutions to the PTSP has properties very different from those associated with the conventional (Euclidean) TSP. For example, one of the fundamental properties of the Euclidean TSP is that an optimal solution cannot intersect itself; this property follows directly from the triangle inequality. In contrast, an optimal solution for the PTSP can intersect itself, even when the triangle inequality is satisfied (systematic treatments of the differences between the TSP and PTSP are presented by Jaillet [1988] and Jaillet and Odoni [1988]). One consequence of the fact that the PTSP has features very different from the TSP is that, in general, we cannot expect heuristic approaches designed specifically for the TSP to be extended successfully to the PTSP. In general, therefore, entirely new solution approaches are needed. One of the few promising approaches to the PTSP that has been discussed in the literature is based on the idea of space-filling curves (Bartholdi and Platzman 1988). But the probabilistic versions of the TSP and VRP remain very much under the heading of research topics. The problem becomes even more difficult when we turn to probabilistic versions of the VRPTW. For this problem, presumably, the a priori strategy would involve multiple objectives; in addition to minimizing the expected length of the a priori tour, we would also want to minimize the expected number of missed time windows. But it is difficult even to formulate this problem properly, much less solve it. For this reason, when time windows are involved, we try in actual applications to capture some subset of the key features of the problem. In this case, instead of trying to construct a priori tours of the kind just described, we simply try to construct a solution in which a large percentage of repeat customers is handled by the same driver and each driver is relatively familiar with most of the territory to which he or she is assigned. One way of achieving such a solution is to partition the service area into geographical regions and then always assign the same driver to the same region. In operational terms, a solution of this sort is one in which drivers have well-defined territories: a driver travels from the depot to his or her service area, carries out his or her work there, and then returns to the depot. This picture is to be contrasted with the one that emerges from the heuristic insertion framework, which usually results in routes that form a petal structure: in general, it creates routes in the form of intersecting loops, in which stops appear in a roughly uniform way.

To construct a solution in which drivers are assigned to territories in this way, it is appropriate to adopt the cluster-first, route-second strategy mentioned earlier. Thus, we proceed by first assigning stops to drivers, thereby partitioning the service area. We then construct a sequence for the stops assigned to each driver. This strategy represents a departure from the heuristic insertion framework, according to which clustering and routing proceed in parallel. One way of implementing a cluster-first strategy is as follows. Suppose that, on a given day, we wish to construct k routes. We can proceed by (heuristically) solving a k-median problem (Daskin 1995); we then carry out the initial clustering by assigning each stop to its closest median. Having created the clusters, we then construct a route for each cluster individually by solving the resulting TSPTW. Unfortunately, it is often impossible to construct a feasible route for each cluster. When feasible routes cannot be constructed, we have to shift stops between clusters in such a way as to achieve feasibility while maintaining well-defined territories as much as possible. If we begin by solving a new k-median problem each day, then we may succeed in producing well-defined territories for drivers, but we are unlikely to achieve consistency from one day to the next in assigning customers to drivers. A natural way of extending this approach to achieve consistency is to take historical information into account. Thus, we can solve a weighted k-median problem over the set of all locations that have required service over a specified number of past days, where the weight of a location is proportional to the number of times it has actually required service. We would then consistently use these medians as the basis for clustering. There may well be days in which the initial clusters thus constructed are significantly unbalanced with respect to numbers of customers per driver; in this case, we have to shift stops between clusters, as described above. Whether the procedure just outlined can be applied successfully in the presence of time windows depends in part on the nature of those time windows. For some applications, time windows are one-sided, and there are only a few different service levels: for example, delivery by 9:00 am for express packages, delivery by 11:00 am for priority packages, and so on. For such applications, it is plausible
During the pickup-and-delivery operation of a parcel delivery company, packages are picked up and brought to a local terminal, as discussed in Section 8. These packages are sometimes transported to their destination terminal directly and delivered to the customers. In most cases, however, packages go through a series of transfer terminals where they get sorted and consolidated. Transportation between terminals is achieved using several modes of transportation. The most prevalent modes on the ground are highway and rail, using a fleet of tractors and trailers of different types. A fleet of aircraft is used for transporting by air. A transfer terminal often shares the same building with one or more local terminals. Several sorting operations may be run daily in a transfer terminal and are repeated every day of the work week at the same times. A large transfer terminal may have four sorting operations at different times of the day or night. Packages get unloaded, sorted, and reloaded on trailers. Several types of trailers may be used, with different capacities and other characteristics. One or more trailers are attached to a tractor and depart for the next terminal on their route. This may be another transfer terminal or their destination terminal, or a rail yard for the trailers to be loaded and transported by rail, or an air terminal, if the packages have to be transported by air. In this section, we will examine a model that determines the routes of packages from their origin to their destination terminals and the equipment used to transport them, only for packages that travel on the ground using two modes, highway or rail.
Package handling and transporting between terminals represents a large part of the operating costs for a package transportation company. A good operating plan that minimizes cost and/or improves service is crucial for efficient operations. In recent years, many new products and services have been introduced in response to customer demand. These changes, as well as the changes in package volume, require additions and modifications to the underlying transportation network to maintain its efficiency. A planning system that can evaluate alternatives in terms of their handling and transporting costs and their impact on current operations is very important. There is a tradeoff between the number of sorting operations a package goes through and the time needed to reach its destination. More sorting operations increase the handling cost and decrease the transportation cost because they consolidate loads better, but they also increase the total time needed until a package is delivered to the customer. All these factors need to be taken into account in the design of a transportation network for routing packages.
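The tradeoff just described can be made concrete with a toy calculation. All numbers and the two-hour-per-sort assumption below are invented for illustration; a real planning system would price each leg and sort from operational data:

```python
def path_cost(legs, handling_per_sort, max_transit_hours):
    """Toy cost of routing a package over a sequence of ground legs.
    legs = [(transport_cost, hours), ...]; one sorting operation is
    assumed between consecutive legs, each adding handling cost and
    (by assumption here) 2 hours of dwell time. Returns the total
    cost, or None when the service deadline would be violated."""
    n_sorts = max(len(legs) - 1, 0)
    hours = sum(h for _, h in legs) + 2 * n_sorts
    if hours > max_transit_hours:
        return None
    return sum(c for c, _ in legs) + handling_per_sort * n_sorts

# Direct haul vs. one intermediate sort on cheaper consolidated legs:
print(path_cost([(100, 10)], 30, 24))          # 100
print(path_cost([(40, 8), (35, 9)], 30, 24))   # 105
```

Here the extra sort is not worth its handling cost, but with a lower handling charge or cheaper consolidated legs the comparison reverses; shrinking the deadline rules the multi-sort path out entirely.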
9.2.
A Network-Optimization Problem
The network-optimization problem for transporting packages on the ground can be defined as follows. Determine the routes of packages, the mode of transportation (highway or rail), and the types of equipment to minimize handling and transportation costs while the following constraints are met: service requirements are respected, the capacities of sorting operations are not surpassed, no sorting operation is underutilized, the number of doors of each facility available for loading trailers is considered, trailers balance by building, and all the packages to the same destination are routed along the same path.

To maintain the level of service, packages must be transported between particular origin/destination (OD) pairs in a given time. The capacity of a sorting operation cannot be surpassed, but also, if the number of packages through a sorting operation falls below a given value, the sorting operation needs to be closed down. The number of loading doors of each facility represents the maximum number of trailers that can be loaded simultaneously during a sorting operation. Note that two or more of the trailers loaded simultaneously may be going to the same terminal next on their route. Trailers need to balance by building daily. This means that the trailers of each type that go into a building must also leave the building during the daily cycle. Balancing is achieved by introducing empty trailers into the system. Several types of trailers are used, with different capacities. Some types can be used both on highway and rail. Trailers can also be rented from the railroads, and these trailers may not need to balance. Each tractor can pull one, two, and, on rare occasions, three trailers, depending on the trailer types and highway restrictions. At a terminal, tractor-trailer combinations are broken up, the trailers unloaded, the packages sorted, the trailers loaded again, and the tractors and trailers reassembled to form new combinations. There is no constraint on the number of trailers of any type that are used.

An important operational constraint is that all the packages to the same destination must be routed along the same path. This implies that all packages with the same destination terminal follow the same path from the terminal where they first meet to their destination. This constraint results from the fact that sorting is done by destination. If sorting became totally automated, this constraint might be modified.

A network-optimization model can be used as a long-term planning tool. It can evaluate alternatives for locating new building facilities, opening or closing sorting operations, and changing an operating plan when new products are introduced or the number of packages in the system changes. Because any such changes require retraining of the people who manually sort the packages, the routing network in parcel delivery is not changed often or extensively. For a network-optimization model to be used as an intermediate-term planning tool to modify the actual routing network, the model needs to obtain incremental changes, that is, to obtain solutions as close as possible to the current operations.
9.3.
Modeling
The network-optimization problem is formulated on a network consisting of nodes that represent sorting operations and directed links that represent movement of packages from one sorting operation to another. Both the highway and rail modes are represented by one link if the same time is needed to traverse the link by either mode. If a different time is needed by each mode, two different links are used, each representing one mode. In the latter case, all the packages need to travel by only one mode, even if both modes are feasible, so that all the packages that start together arrive at their destination at the same time. It is assumed that every day is repeated unchanged, without any distortion of the pattern because of weekends. Since a sorting operation is repeated every day at the same time, one node represents a sorting operation for as many days as necessary for a complete cycle of operations to occur in the system. To avoid any confusion, in the description of the network design problem we will use the terms origin and destination or OD pair to refer to the origin and destination nodes of a package; we will use the terms initial and final nodes to refer to the origin and destination nodes of a directed link.
The cost dikl is estimated as the average cost of a package through sorting operation i ∈ N multiplied by the number of packages of OD pair (k,l) ∈ A. The cost cijm is estimated as a function of the distance of link (i, j) ∈ E, fuel prices, and driver wages, as well as the depreciation and cost of the trailer-combination type m ∈ Mij. It is assumed that all the packages processed through a sorting operation are available at the beginning and leave at the end of the sorting operation. The time hij of link (i, j) ∈ E is the difference between the starting time of sorting operation j and the ending time of sorting operation i, with the appropriate time difference included for the packages to be transported on link (i, j) and be available at j on time. The time hij as defined above makes it unnecessary to consider the time windows of the sorting operations explicitly in the following formulation. The time hij is also equal to the travel time on link (i, j) plus the difference in time between the beginning of the sorting operation at j and the arrival of the packages at j, which corresponds to the wait time until sorting operation j starts. The capacity of a sorting operation is represented by an upper bound that cannot be exceeded. A lower bound may also be used to force sorting operations to be closed if they are underutilized.
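The two equivalent readings of hij (difference between sort times, or travel time plus wait time at j) can be sketched as follows; the function, the hour-of-day representation, and the 24-hour cycle default are illustrative assumptions, not part of the chapter.

```python
def link_time(end_i, start_j, travel, cycle=24.0):
    """Sketch of the h_ij computation: hours from the end of sorting
    operation i to the start of sorting operation j, wrapped forward over
    the daily cycle so the link can actually be traversed in that window."""
    h = (start_j - end_i) % cycle      # next occurrence of sort j after sort i ends
    while h < travel:                  # not reachable within that day: wait a full cycle
        h += cycle
    wait = h - travel                  # idle time at j before its sort starts
    return h, wait
```

For example, if sort i ends at 22:00, sort j starts at 04:00, and driving takes 5 hours, the sketch gives h = 6 hours with a 1-hour wait at j.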
9.4.
Network Design Formulation
Min Z(x,w,y,z,t) = Σj∈N Σ(k,l)∈A djkl Σi∈N yijkl + Σ(i,j)∈E Σm∈Mij cijm xijm   (3)

subject to

Σj∈N yijkl − Σp∈N ypikl = b,   (k,l) ∈ A, i ∈ N,
where b = 1 for i = k; b = −1 for i = l; b = 0 otherwise   (4)

tjkl ≥ tikl + hij + rj + INF(yijkl − 1),   i ∈ N, j ∈ N, (k,l) ∈ A   (5)

tkkl = skl,   (k,l) ∈ A   (6)

tlkl ≤ skl + τkl + rl,   (k,l) ∈ A   (7)

Σi∈N Σ(k,l)∈A vkl yijkl ≤ uj,   j ∈ N   (8)

Σi∈N Σ(k,l)∈A vkl yijkl ≥ gj zj,   j ∈ N   (9)

yijkl ≤ zj,   i ∈ N, j ∈ N, (k,l) ∈ A   (10)

(11) [door constraints: during each sorting operation, the trailers loaded simultaneously cannot require more loading doors than the facility has]

Σj∈Bk Σi∈N Σm∈Mij qm xijm − Σj∈Bk Σp∈N Σm∈Mjp qm xjpm = 0,   Bk ∈ B, q ∈ F   (12)

Σq∈Q kq wijq ≥ Σ(k,l)∈A vkl yijkl,   (i, j) ∈ E   (13)

Σm∈Mij qm xijm ≥ wijq,   (i, j) ∈ E, q ∈ Q   (14)

(15) [split-path constraints: all packages to the same destination follow a single path from any node where they meet]

xijm ≥ 0 and integer,   (i, j) ∈ E, m ∈ Mij   (16)
wijq ≥ 0 and integer,   (i, j) ∈ E, q ∈ Q   (17)
yijkl = 0 or 1,   i ∈ N, j ∈ N, (k,l) ∈ A   (18)
zi = 0 or 1,   i ∈ N   (19)
tjkl ≥ 0,   j ∈ N, (k,l) ∈ A   (20)
The objective function (3) minimizes the total cost of handling and transporting the packages of all the OD pairs. Constraints (4) are balancing constraints ensuring that the packages of an OD pair start from their origin, end at their destination, and, if they enter an intermediate node, also exit the node. Each constraint (5) computes the departure time of the packages of OD pair (k,l) from node j, using the departure time of the preceding node i. If link (i, j) ∈ E is used to transport the packages of OD pair (k,l) ∈ A (yijkl = 1), the constraint becomes tjkl ≥ tikl + hij + rj. If link (i, j) ∈ E is not used (yijkl = 0), the constraint is not binding. The starting time at each origin is set with constraints (6).
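The way the large constant INF switches constraint (5) on and off can be illustrated with a small sketch; the function name and the sample INF value are assumptions for illustration only.

```python
INF = 1e6  # a large constant (big-M); in practice sized to the planning horizon

def departure_bound(t_i, h_ij, r_j, y_ijkl):
    """Lower bound that constraint (5) places on the departure time t_jkl.
    With y_ijkl = 1 the bound is t_i + h_ij + r_j; with y_ijkl = 0 it is a
    hugely negative number, i.e., the constraint is not binding."""
    return t_i + h_ij + r_j + INF * (y_ijkl - 1)
```

So a solver is free to choose tjkl when the link is unused, but must respect the time propagation whenever the link carries the OD pair.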
The package-routing problem determines the routes of packages on the G(N,E) network for each OD pair so that the service requirements are met, the capacities of the sorting operations are not exceeded, the sorting operations are not underutilized, the number of loading doors of the buildings is not exceeded, packages to a common destination are routed along the same path, trailers are not underutilized, and total cost is minimized. The package-routing problem does not determine the trailer combinations that are used, nor does it balance trailer types. In the following, we present a simplified formulation where we do not indicate mode, although we continue to use one link for both modes if the travel times are the same for both modes on the link and two different links if the travel times are different. We extract from formulation (3)–(20) constraints (4)–(8), (18), and (20), which involve only the yijkl and tjkl variables, putting aside for the moment the constraints on the number of loading doors, underutilized sorting operations, split paths to a common destination, and maximization of trailer utilization. Because of their very large number, constraints (11) are not included. Because the objective function (3) involves variables other than yijkl and tjkl, we replace it with a new objective function that uses an estimated cost parameter. First, we select a representative trailer-combination type for each mode and estimate the cost per package, per mode for each link by dividing
the combination cost on the link by the combination capacity. For links that share both modes, the smaller cost of the two modes is used. Using the cost per package, the following cost parameter is computed: dijkl = cost of transporting all the packages of OD pair (k,l) ∈ A on link (i, j) ∈ E. Using this new cost, the objective function (3) is replaced by the following approximation:

Min Z(y,t) = Σj∈N Σ(k,l)∈A djkl Σi∈N yijkl + Σ(i,j)∈E Σ(k,l)∈A dijkl yijkl   (21)
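The estimate of dijkl described above can be sketched as follows; the dictionary layout and the function name are assumptions, while the logic (cheapest per-package cost among the representative combinations of the modes sharing the link, times the OD-pair volume) follows the text.

```python
def d_ijkl(rep_combinations, v_kl):
    """Sketch of the estimated cost parameter d_ijkl for one link and one
    OD pair. rep_combinations maps each mode available on the link to a
    (combination_cost, combination_capacity) pair for its representative
    trailer-combination type; v_kl is the package volume of the OD pair."""
    per_package = min(cost / cap for cost, cap in rep_combinations.values())
    return per_package * v_kl
```

For a link shared by both modes, the cheaper mode's per-package cost is used, as in the text.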
subject to

Σj∈N yijkl − Σp∈N ypikl = b,   (k,l) ∈ A, i ∈ N,
where b = 1 for i = k; b = −1 for i = l; b = 0 otherwise   (24)

tjkl ≥ tikl + hij + rj + INF(yijkl − 1),   i ∈ N, j ∈ N, (k,l) ∈ A   (25)

tkkl = skl,   (k,l) ∈ A   (26)

tlkl ≤ skl + τkl + rl,   (k,l) ∈ A   (27)

yijkl = 0 or 1,   i ∈ N, j ∈ N, (k,l) ∈ A   (28)

tjkl ≥ 0,   j ∈ N, (k,l) ∈ A   (29)
raints valid. In step 4, if the solution can be improved and more Lagrangian iterations are permitted, new Lagrange multipliers are computed using subgradient optimization (Ahuja et al. 1993; Crowder 1976) or some other methodology. These are used to obtain new costs c̄ijkl, and a new Lagrangian iteration starts at step 2. No more details are given here about the Lagrangian relaxation or the subgradient optimization algorithm. A Lagrangian relaxation approach is used to solve the trailer-assignment problem, and a more detailed description of this optimization procedure is given in that section. When the Lagrangian iterations are completed, the solution satisfies the split-paths, capacity, and door constraints. If at any point the heuristic cannot obtain a solution that satisfies a constraint, it stops and indicates that it cannot find a feasible solution. In step 5, links are found that carry too few packages for a complete load, and the packages are rerouted so that the capacity, split-paths, and door constraints continue to be satisfied.
These constraints are included to improve the results of the trailer-assignment problem, which is solved after the package-routing problem and which represents the true cost of a network design solution. Finally, in step 6, underutilized sorting operations are examined to enforce constraints (9). The packages of underutilized sorting operations are rerouted, and the underutilized sorting operation for which the total cost of the solution decreases most is eliminated. The node representing the eliminated sorting operation is disabled, and the algorithm starts over from step 2. The routing decisions obtained by the package-routing heuristic algorithm outlined above are used as input to the trailer-assignment problem, which is described next.
9.8.
Trailer-Assignment Problem
Given the number of packages on each link obtained by solving the package-routing problem, the trailer-assignment problem determines the number and type of trailer combinations on each link of the network G(N,E) that have enough combined capacity to transport all the packages on the link, balance by trailer type for each building, and have the least cost. The trailer-assignment problem is described next. To solve the problem more efficiently, the network G(N,E) can be modified into the network Ḡ(N̄,Ē) as follows. Each node i ∈ N̄ represents a building, that is, all the nodes representing sorting operations in the same building are collapsed into one node. All the links that carry packages in the solution of the package-routing problem are included. Among the links in G(N,E) that do not carry packages, only those that may be used to carry empty combinations for balancing are included, so that Ē ⊆ E. In particular, among parallel links between buildings that do not carry packages, only one is retained in Ḡ. Still, the network Ḡ generally has several parallel links between each pair of nodes, in which case link (i, j) is not unique. To avoid complicating the formulation, and because it is easy to extend it to include parallel links, we will not include indexing to indicate parallel links, exactly as we did in formulation (3)–(20). The trailer-assignment problem is formulated below on the network Ḡ(N̄,Ē), using constraints (12), (13), and (16) as well as the objective function (3), with some modifications.
9.9.
Trailer-Assignment Formulation
Min Z(x) = Σ(i,j)∈Ē Σm∈Mij cijm xijm   (31)

subject to

Σi∈N̄ Σm∈Mij qm xijm − Σp∈N̄ Σm∈Mjp qm xjpm = 0,   j ∈ N̄, q ∈ F   (32)

Σm∈Mij km xijm ≥ vij,   (i, j) ∈ Ē   (33)

xijm ≥ 0 and integer,   (i, j) ∈ Ē, m ∈ Mij   (34)
Objective function (31) is the same as objective function (3). It does not include the first component because it is a constant, since the values of the yijkl variables are known from the solution of the package-routing problem. Constraints (32) are the same as constraints (12) except that the summation over buildings is now unnecessary because a node represents a building. Constraints (33) are similar to constraints (13) where, as in the objective function, the number of packages, vij, for all OD pairs on link (i, j) is now a constant. The variables wijq are not needed anymore and only the variables xijm are used, representing combinations with both full and empty trailers. So the trailer capacities kq for q ∈ Q ar
where c̄ijm = cijm + Σq∈F qm (λjq − λiq) is the modified cost of moving trailer-combination type m ∈ Mij on link (i, j) ∈ Ē, given the multiplier vector λ. Problem (35)–(37) decomposes into |Ē| subproblems, one for each link (i, j) ∈ Ē. Each one of the subproblems is a kind of integer reverse knapsack problem, similar to the integer knapsack problem (Martello and Toth 1990; Nemhauser and Wolsey 1988; Chvátal 1983), and can be solved by similar algorithms. Each subproblem is very small, having |Mij| variables for link (i, j) ∈ Ē. The solution obtained by solving the Lagrangian dual (i.e., all the reverse knapsack problems) does not necessarily balance trailer combinations at each node, even for optimal λ, and is not generally feasible for the original problem (31)–(34). A feasible solution is obtained heuristically by solving sequentially one minimum-cost-flow problem (Ahuja et al. 1993) for each trailer type that needs to balance. Balancing is achieved by adding empty trailers or replacing one trailer type with another one of larger capacity that is not yet balanced or does not need to balance. A Lagrangian relaxation heuristic algorithm that solves the Lagrangian dual problem (35)–(37) is presented next. It uses subgradient optimization to compute the Lagrange multipliers λ.
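A minimal sketch of one per-link covering ("reverse") knapsack subproblem follows: choose integer counts of trailer combinations, each with a cost and a package capacity, whose combined capacity covers the link volume at minimum cost. The dynamic program below is an assumption on my part; the chapter only says that algorithms similar to those for the integer knapsack problem can be used.

```python
def min_cost_cover(combos, demand):
    """One link subproblem as an integer covering knapsack.
    combos: list of (cost, capacity) pairs, one per trailer-combination type;
    demand: package volume on the link that must be covered.
    Returns the minimum total cost of a covering selection."""
    INF = float("inf")
    best = [0.0] + [INF] * demand       # best[v] = min cost to cover volume v
    for v in range(1, demand + 1):
        for cost, cap in combos:
            prev = max(0, v - cap)      # excess capacity beyond v is allowed (covering)
            if best[prev] + cost < best[v]:
                best[v] = best[prev] + cost
    return best[demand]
```

With a 1-package combination costing 10 and a 2-package combination costing 18, covering a volume of 3 costs 28 (one of each), illustrating why the cheaper-per-unit combination is not always used exclusively.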
9.10.
Lagrangian Relaxation Algorithm for the Trailer-Assignment Problem
1. Set the Lagrange multipliers λ to zero in the Lagrangian dual problem and initialize Z̄ (upper bound, best known feasible solution of the original problem) to a high value.
2. Solve the Lagrangian dual problem with the latest values of λ (by solving a set of integer reverse knapsack problems) to obtain the optimal objective function value, Z*(x*, λ), for the given λ.
3. Apply a heuristic approach to obtain a feasible solution of the problem and update Z̄.
4. If the gap between Z̄ and Z*(x*, λ) is small or some other criterion is satisfied (e.g., a set number of iterations is reached or no more improvement is expected), stop.
5. Compute new values of the Lagrange multipliers λ using subgradient optimization and go to step 2.

Step 3 above may be implemented only occasionally instead of at every iteration. Improvement heuristics can also be used to modify the solution in several ways. They can be used to combine single trailers into more efficient trailer combinations. They can also be used to find any cycles of trailer types that are either empty or can be replaced by trailer types that do not need to balance. If a whole cycle of trailers that results in improved cost is modified, balancing of trailers is maintained. Improvement heuristics can also be used to replace whole cycles of trailers of excess capacity with trailers of smaller capacity if the total cost is decreased. A subgradient optimization algorithm (Ahuja et al. 1993; Crowder 1976) is used in step 5 to compute an improved Lagrange multiplier vector λ and is described below.
9.11.
Subgradient Optimization Algorithm
Given an initial Lagrange multiplier vector λ0, the subgradient optimization algorithm generates a sequence of vectors λ0, λ1, λ2, . . . . If λk is the Lagrange multiplier vector already obtained, λk+1 is generated by the rule

tk = αk (Z̄ − Z*(x*, λk)) / Σj∈N̄ Σq∈F (Σi∈N̄ Σm∈Mij qm xkijm − Σp∈N̄ Σm∈Mjp qm xkjpm)²   (38)

λk+1jq = λkjq + tk (Σi∈N̄ Σm∈Mij qm xkijm − Σp∈N̄ Σm∈Mjp qm xkjpm),   j ∈ N̄, q ∈ F   (39)

where tk is a positive scalar called the step size and αk is a scalar that satisfies the condition 0 < αk ≤ 2. The denominator of equation (38) is the square of the Euclidean norm of the subgradient vector corresponding to the optimal solution vector xk of the relaxed problem at step k. Often a good rule for determining the sequence αk is to set α0 = 2 initially and then halve αk whenever Z*(x*, λk) has not increased in a specific number of iterations. The costs c̄ijm are calculated with the new values of λ. If any negative costs are obtained, the value of tk is halved and the values of λ recomputed until all the costs are nonnegative or tk is too small to continue iterating.

A feasible solution is obtained in step 3, using a minimum-cost-flow algorithm (Ahuja et al. 1993) sequentially for each trailer type that needs to balance. First, for each building, the excess or deficit of trailers of each type that has to balance is computed. Then a minimum-cost-flow algorithm is applied for each trailer type that obtains the optimal movements of trailers from each node of excess to each node of deficit. This may be the movement of a single empty trailer, the movement of an empty trailer that gets attached to an already used trailer to make up a permitted combination, or the movement of an already used trailer replacing another trailer type of smaller or equal capacity that is not yet balanced or does not need to balance.

Lagrangian relaxation is often used within a branch-and-bound procedure. The exact branch-and-bound algorithm has large computational cost; also, the trailer-assignment problem is only part of a heuristic algorithm for the network design problem. For these reasons, the Lagrangian relaxation algorithm is applied only once, at the root of the branch-and-bound tree, to find a heuristic solution to the trailer-assignment problem.
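One multiplier update of rules (38)–(39) might look like the following sketch, with the per-building, per-trailer-type trailer imbalance playing the role of the subgradient; the dictionary layout keyed by (building, trailer_type) is an assumption for illustration.

```python
def subgradient_step(lam, imbalance, Z_ub, Z_lb, alpha):
    """One subgradient update of the Lagrange multipliers.
    lam: current multipliers, keyed by (building, trailer_type);
    imbalance: inflow minus outflow of each trailer type at each building
    for the current relaxed solution (the subgradient);
    Z_ub: best known feasible value; Z_lb: current dual value; alpha: scalar
    in (0, 2]. Returns the updated multipliers and the step size used."""
    norm_sq = sum(g * g for g in imbalance.values())
    t = alpha * (Z_ub - Z_lb) / norm_sq
    new_lam = {key: lam[key] + t * g for key, g in imbalance.items()}
    return new_lam, t
```

Halving alpha when the dual value stalls, as the text suggests, would be layered on top of this single step.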
9.12.
Extensions of the Network-Design Problem
If the results of the network-design problem are intended not only for long-term planning but to actually modify the routing network, the solution must represent an incremental change from the currently used solution. Otherwise, any savings from an improved solution may be lost in modifying the sorting operations to accommodate the solution changes. To this end, both the package-routing and the trailer-assignment problems can be modified relatively easily to handle presetting some of the variables. For the package-routing problem, this means presetting whole or partial paths of specific OD pairs. A solution is obtained by preprocessing the input data to eliminate or modify OD pairs that are completely or partially preset and updating the input data. Similarly, for the trailer-assignment problem, particular combinations on links may be preset. After the appropriate variables are fixed, the Lagrangian relaxation algorithm optimizes the remaining variables. The network-design problem presented considers only packages moving on the ground and chooses only one transportation mode when travel times differ between sorting operations. The network-design problem can be extended to include all modes of transportation and all types of products with different levels of service requirements. Different modes may have different costs and travel times between sorting operations, permitting parallel use of modes with different travel times along the same routes. This extension complicates considerably an already difficult problem and is not described here any further.
10.
DRIVER SCHEDULING
10.1.
Tractor-Trailer-Driver Schedules
This section examines one approach for solving the tractor-trailer-driver-scheduling problem for a package-transportation company. Tractor-trailer combinations transport packages between terminals of a package-transportation company, as discussed in Section 9. This involves movement of both equipment and drivers. While balancing is the only constraint for equipment routing, the movement of drivers is more complex. The daily schedule of a tractor-trailer driver starts at his or her base location (domicile), to which he or she returns at the end of the workday. A driver schedule consists of one or more legs. A leg is the smallest piece of work for a driver and consists of driving a tractor-trailer combination from one terminal to another or repositioning trailers inside a terminal. Each leg is characterized by an origin terminal and a destination terminal, which may coincide. There is an earliest availability time at the origin terminal, a latest required arrival time at the destination terminal, and a travel time (or repositioning time) associated with a leg. A tractor-trailer combination that is driven from the origin terminal to the destination terminal of a leg must start no earlier than the earliest availability time at the origin and arrive at the destination no later than the latest required arrival time. At the destination terminal of a leg, a driver may drop his or her current tractor-trailer combination and pick up a new one to continue work on the next leg of his or her daily schedule, take a break, or finish work for the day.

In this section, we assume that the legs are already determined and given. We want to generate driver schedules by combining legs. An acceptable schedule must meet specific work rules, which specify the minimum number of hours that a driver must be paid for a day's work (usually 8 hours) and the maximum length of a workday (usually 10 hours). In addition, a driver must return to his or her domicile every day, and a workday must incorporate breaks of specified duration at specified times. For example, a lunch break must last 1 hour and be scheduled between 11:00 am and 2:00 pm.

Another consideration in the generation of driver schedules is the availability of packages for sorting within a terminal. During a sorting operation, loads should arrive at such a rate that the facility is not kept idle. If packages arrive late, the facility will be underutilized during the early stages of sorting, while the facility capacity may be surpassed later. For this reason, the duration of each sorting operation is divided into equal time intervals, say, half-hour intervals. Driver schedules need to be generated in such a way that volume availability is satisfied, that is, a minimum number of packages arrives at each sorting facility by the end of each time interval. The driver-scheduling problem has a short or intermediate planning horizon. It is usually solved regularly, once or twice a year, and the resulting schedules are bid on by the drivers, based on seniority.
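The work rules above can be captured in a small feasibility check; the leg representation as (origin, destination, start hour, end hour) tuples and the default rule values (10-hour day, 1-hour lunch between 11:00 and 14:00) are illustrative assumptions, since actual values vary by contract.

```python
def feasible_workday(legs, domicile, lunch_start, max_day=10.0, lunch_len=1.0):
    """Check a candidate driver schedule against the work rules sketched in
    the text: start and end at the domicile, workday no longer than the
    maximum, lunch break inside the 11:00-14:00 window, legs non-overlapping.
    legs is a time-ordered list of (origin, destination, start_h, end_h)."""
    if not legs or legs[0][0] != domicile or legs[-1][1] != domicile:
        return False                     # must start and end at the domicile
    if legs[-1][3] - legs[0][2] > max_day:
        return False                     # workday too long
    if not (11.0 <= lunch_start and lunch_start + lunch_len <= 14.0):
        return False                     # lunch outside the allowed window
    return all(a[3] <= b[2] for a, b in zip(legs, legs[1:]))
```

The minimum-paid-hours rule affects cost rather than feasibility, so it is left to the objective function.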
xj = 0 or 1   (45)
The objective function (40) minimizes the total cost of schedules. Constraints (41) are the set-partitioning constraints ensuring that each leg (row) is used by exactly one schedule (column). Constraints (42) and (43) are the domicile constraints and ensure that the lower and upper bounds for the selection of domiciles are met. According to work rules, each particular terminal must be used as a domicile a number of times within a given range. Constraints (44) are the volume-availability constraints and ensure that the required volume of packages is available for each sort interval. Constraints (45) are the binary constraints. The presented formulation is a set-partitioning problem with additional side constraints. For a freight transportation company with 10,000 tractors and 15,000 drivers in the continental United States, the problem formulated above is too large to be solved for the whole country. Often, however,
driver-scheduling decisions are made at the local level, and the problem is broken naturally into smaller problems that are solved locally by each region. It is sometimes difficult to obtain even a feasible solution of the IP problems formulated in (40)–(45). If the feasible region contains few feasible solutions or if there are errors in the data that make it difficult or impossible to find a feasible solution, the set-partitioning formulation (40)–(45) can be changed into a set-covering formulation (Nemhauser and Wolsey 1988; Bradley et al. 1977) with soft domicile and volume-availability constraints to help guide the cleaning of data and the solution process. Slack and surplus (auxiliary) variables are added to the equations and inequalities (41)–(45) and incorporated into the objective function (40) with very high costs. The following additional notation is introduced: di = very high cost for the auxiliary variable of row i; si− = surplus variable for row i (if positive, it indicates the number of units that the original constraint is below its right-hand side); si+ = slack variable for row i (if positive, it indicates the number of units that the original constraint is above its right-hand side).
10.4.
Set-Covering Formulation with Soft Constraints
Min Σj∈J cj xj + Σi∈Ileg di si− + Σi∈Idom di si− + Σi∈Idom di si+ + Σi∈Isort di si−   (46)

subject to

Σj∈J aij xj + si− ≥ 1,   leg i ∈ Ileg   (set-covering constraints)   (47)

Σj∈J bij xj + si− ≥ li,   domicile i ∈ Idom   (soft lower-domicile constraints)   (48)

Σj∈J bij xj − si+ ≤ ui,   domicile i ∈ Idom   (soft upper-domicile constraints)   (49)

Σj∈J bij xj + si− ≥ vi,   sort interval i ∈ Isort   (soft volume-availability constraints)   (50)

xj = 0 or 1,   schedule j ∈ J   (51)
si− ≥ 0   (52)
si+ ≥ 0   (53)
the columns included. Instead, a standard decomposition method for large LPs called column generation is used (Bradley et al. 1977; Chvátal 1983). When the binary constraints (51) are replaced with the bounding constraints

0 ≤ xj ≤ 1,   schedule j ∈ J

a column-generation approach refers to the resulting LP problem that includes all the possible columns as the master problem. The corresponding LP problem that includes only a subset of the columns is called the restricted master problem. Solving the LP involves pricing out each column using the dual variables, or shadow prices, associated with the restricted master problem. Sometimes this pricing-out operation can be formulated as a known problem (e.g., a shortest-path problem) and is called a subproblem. The solution of the subproblem produces a column, not yet included in the restricted master problem, that prices out best. For a minimization problem, if the best new column obtained has negative reduced cost, its inclusion in the restricted master problem will improve the solution. Column generation iterates solving the subproblem, adding a new column with negative reduced cost to the restricted master problem, and solving the new restricted master problem until no more columns can be obtained with negative reduced cost.
d add them to the LP. If the number of columns exceeds a given maximum, remove columns with the worst reduced costs. Go to step 2.
4. Select a small number of columns with the highest fractional values, say 8. Using their legs as seeds, generate a small number of additional schedules, say 300, add them to the LP, and solve it.
5. Select a small number of the highest fractional schedules, say 8. Restrict them to be integers and solve the resulting mixed-integer-programming (MIP) problem. If all variables in the solution of the current restricted master problem are integral, stop. Otherwise, set the selected columns to their integer solution values permanently, update the formulation, and go to step 2.
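The pricing-out step that drives this loop can be sketched as follows; the function names and the row-keyed representation of the dual prices are assumptions for illustration.

```python
def reduced_cost(schedule_cost, covered_rows, duals):
    """Reduced cost of a candidate schedule column in the restricted master
    problem: its cost minus the dual prices (shadow prices) of the rows
    (legs, domiciles, sort intervals) it touches. A negative value means
    adding the column can improve the LP solution."""
    return schedule_cost - sum(duals[i] for i in covered_rows)

def best_new_column(candidates, duals):
    """Sketch of the pricing step: among candidate columns given as
    (cost, rows) pairs, return the one that prices out best."""
    return min(candidates, key=lambda c: reduced_cost(c[0], c[1], duals))
```

In practice, as the text notes, columns with small positive reduced cost may also be kept, since many columns are added per iteration.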
Steps 1, 2, and 3 are the same as before. In step 4, a preset number of columns with the highest fractional values are selected. The actual number used is set by experimentation. The legs making up these columns are used as seeds to generate a small number of additional columns that are added to the LP. The LP is solved, and a small number of columns with the highest fractional values are selected and restricted to be integral. The resulting MIP problem is solved to optimality using an MIP solver. If all the columns of the restricted master problem have integral values in the solution, the algorithm stops. If some fractional values are still present in the solution, the columns selected to be integral are set permanently to their integer solution values and eliminated from the formulation, and the column-generation phase starts again. In applications, up to about 40,000 columns were included in the restricted master problem during the iterative solution process.
10.8.
Generation of Schedules
Approaches for generating new schedules are discussed in this section. Several techniques can be used to obtain new schedules explicitly, guided by the LP shadow prices. Each leg is associated with a shadow price in the LP solution, which indicates how expensive it is to schedule this leg. The generation of schedules focuses on the expensive legs to provide more alternative schedules that include them and drive the LP cost down. New schedules improve the LP solution if they have negative reduced costs. Usually the cutoff value is set higher than zero to include schedules with low positive costs that may combine well with the rest of them, because columns are not added one at a time to the LP.

New schedules are generated probabilistically. Each available leg is assigned a probability of selection proportional to its shadow price. The list of legs is then shuffled using the assigned probabilities of selection as weights.* This means that legs with high shadow prices are more likely to be at the top of the shuffled list. Each leg in the shuffled list is used as a seed sequentially to generate up to a given number of legs. Starting with a seed, schedules are generated using three different approaches: one based on depth-first search (Aho et al. 1983), a second approach that generates a given number of schedules in parallel, and a third method that recombines existing schedules to produce new and better ones. The schedules are generated by limited complete enumeration, that is, all the schedules that can be generated starting with a particular seed are generated, limited by an upper bound when too many combinations exist. Tractor-only movements for repositioning drivers are also used in the generation of feasible schedules. A partial schedule is accepted only if a complete feasible schedule can be generated from it, including appropriate work breaks. When a maximum total number of columns is obtained, the process stops.

The depth-first-search approach starts with a seed and adds legs sequentially until a complete schedule is obtained. Then a new schedule is started using either the same seed or the next seed in the list. All the feasible successors of a leg are shuffled based on their shadow prices as weights and used to obtain the next leg of a partial schedule. The parallel approach generates several schedules simultaneously by adding each one of its feasible successors to the current partial schedule. The recombination approach identifies feasible schedules that are generated by removing one or more consecutive legs from one feasible schedule and replacing them with one or more legs from another schedule. Cycles of leg exchanges are then identified that produce new schedules of lower costs. In actual applications, the recombination approach tends to produce columns that drive the iterative solution process much more quickly toward a good overall result than when only columns obtained by the other two approaches are included.
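The weighted shuffle can be implemented, for example, with random keys of the form u**(1/w), which is equivalent in distribution to repeatedly drawing legs without replacement with probability proportional to their weights; the chapter does not prescribe a specific method, so this particular technique is my own choice.

```python
import random

def weighted_shuffle(legs, shadow_prices, rng=None):
    """Order legs so that those with higher shadow prices tend to come
    first. Each leg gets a random key u**(1/w), where u is uniform on
    (0, 1) and w is its (positive) weight; sorting the keys in descending
    order yields a weighted random permutation."""
    rng = rng or random.Random()
    keys = [rng.random() ** (1.0 / w) for w in shadow_prices]
    return [leg for _, leg in sorted(zip(keys, legs), reverse=True)]
```

The output is always a permutation of the input; only the ordering is random, biased toward expensive legs.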
10.9.
Beyond Algorithms
Good algorithms that capture well the complexities of real-world problems are a big step towards achieving efficiency using optimization techniques. They are rarely, however, sufficient by themselves. The perfect algorithm is useless if it is not actually used, if it is not used properly, or if the solution is not implemented. This is particularly important in the transportation of goods, which is a labor-intensive industry, and where the implementation of an optimization system may involve and affect a large number of people. Often, a big challenge, beyond the development of good algorithms, is defining the correct problem to solve, finding the necessary data, setting up tools to extract the needed data, correcting and validating the data, having the model used correctly, and obtaining acceptance of the model and its results. A successful, decentralized application needs the involvement of the users, who must be able and willing to use it correctly and apply the results.
* This is exactly like regular shuffling except that the probabilities of selection are not uniform. An ordered list of legs is obtained by randomly selecting one leg at a time from an original list of legs, based on the weights.
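A minimal sketch of this weighted (non-uniform) shuffle, assuming the weights are the shadow-price-based values mentioned above:

```python
import random

def weighted_shuffle(legs, weights):
    """Order legs by repeatedly drawing one at random, without replacement,
    with probability proportional to its weight (a non-uniform shuffle)."""
    legs, weights = list(legs), list(weights)
    ordered = []
    while legs:
        # draw one index, weighted; then remove that leg and its weight
        i = random.choices(range(len(legs)), weights=weights, k=1)[0]
        ordered.append(legs.pop(i))
        weights.pop(i)
    return ordered
```

Legs with larger weights tend to appear earlier in the resulting order, which biases the depth-first search toward promising continuations without making it deterministic.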
If the decision maker is not rewarded directly for implementing a solution that minimizes cost, the solution results are unlikely to be used. If a model minimizes the number of drivers but the decision maker is not rewarded for using fewer drivers, he is unlikely to jeopardize the goodwill of the people working with him by making a big effort to change the status quo.
11. QUALITY IN TRANSPORTATION
Companies that specialize in the transportation of goods must manage their costs, growth, and quality in order to remain competitive. However, without the appropriate measures and the systems to support performance measurement, it is practically impossible to manage any transportation system. In measuring quality in the freight-transportation industry, several questions arise. First, of course, is the definition of quality itself. In this industry, quality is viewed differently at different steps in the transportation process. Shippers have different requirements from those of the receivers. Different internal processes have different views of quality and its measurement. However, we can summarize these requirements into five categories:
1. Damages: Was the shipment damaged in the process?
2. On-time performance: Were all service guarantees met?
3. Accuracy: Was the shipment delivered to the correct destination?
4. Shipment integrity: Were all the items in a shipment delivered together?
5. Information integrity: Was the information associated with a shipment available at all times?
The primary objective of the transportation-planning activity is to design processes that maintain high levels of performance in all five categories. Let's explore these requirements further:
Damages: Of all the quality factors discussed, damages are perhaps one of the most important indicators of quality in both the receiver's and the shipper's view. When the freight-transportation company damages the merchandise being moved, the shipper, the receiver, and the transportation company itself are affected. Costs associated with insurance, returns, product replacement, and lost productivity are all a result of damaged goods. All transportation processes must be designed to prevent damaging the goods. Usually, every step in the transportation process has procedures to measure the number of damages created in any period of time. These procedures are used to establish accountability practices and to identify process-improvement opportunities.

On-time performance: Freight-transportation companies compete on the basis of service performance and cost. In order to support the needs of the complex supply chains that exist today, high levels of on-time delivery reliability are expected from the transportation company. Several external and internal factors can have a direct impact on on-time delivery performance. External factors such as weather, traffic conditions, and subcontractor labor relations can have a direct impact on the ability of the transportation company to meet service commitments. With proper planning, transportation companies manage to minimize the impact of some of these factors. On the other hand, the one internal factor that will always have a negative impact on on-time delivery is lack of planning or, quite simply, poor planning. If the organization is not prepared to handle seasonal variations in pickup and delivery volumes or does not have contingency plans to deal with unexpected events, on-time delivery performance will be affected.

Accuracy: Delivering to the correct destination is expected every time for every shipment. However, there are instances in which the transportation system fails to satisfy this basic requirement. Two major causes contribute to this type of service failure: missing or incorrect information and inadequate planning. For example, when the wrong address is attached to a shipment, the probability of making delivery mistakes increases substantially. Today, transportation companies offer a variety of services aimed at providing continuous shipment tracking and improved information quality. As indicated before, labor is usually the highest cost variable in the profitability equation of a transportation company. Labor is also the primary driver of quality in these organizations. When companies fail to develop staffing and training plans properly, delivery accuracy and reliability will be impacted.

Shipment integrity: Receivers expect to receive all the items (e.g., packages) in a shipment on the same day and at the same time. It is the responsibility of the freight-transportation company to maintain the integrity of all shipments. The transportation system must be capable of using shipment information in its different processes to ensure the integrity of every shipment.
Information integrity: As indicated earlier in this chapter, the information ab
TABLE 4 Routing Software Survey

(The survey compares each product's solvable problem size (number of stops, vehicles, and terminals), routing capabilities (real-time routing, daily routing, route planning), GIS product interfaces, and special features. Products listed: GeoRoute (Kositzky & Associates, Inc.), GeoRoute 5 (GIRO Enterprises, Inc.), Load Manager (Roadnet Technologies, Inc.), LoadExpress Plus (Information Software, Inc.), Manugistics Routing & Scheduling (Manugistics, Inc.), OVERS (Bender Management Consultants), RIMMS (Lightstone Group, Inc.), ROADSHOW, RouteSmart (RouteSmart Technologies), Routronics 2000 (Carrier Logistics), SHIPCONS II (Insight, Inc.), Taylor II (F&H Simulations, Inc.), Territory Planner (Roadnet Technologies), TESYS (Inform Software Corporation), and TransCAD (Caliper Corporation).)
deployment of more mature products. Freight-transportation companies face new constraints and challenges not only in meeting service commitments but in remaining competitive and cost effective while meeting governmental regulations. The use of ITS offers new opportunities to use information in the development of routes and schedules. The ITS program is sponsored by the DOT through the ITS Joint Program Office (JPO), the Federal Highway Administration (FHWA), and the Federal Transit Administration (FTA). ITS, formerly known as the intelligent vehicle highway systems (IVHS), were created after the Intermodal Surface Transportation Efficiency Act (ISTEA) of 1991 was established. ISTEA helped authorize larger spending for transit improvement. In January 1996, then Secretary of Transportation Federico Peña launched Operation TimeSaver, which seeks to install a metropolitan intelligent transportation infrastructure in 75 major U.S. cities by 2005 to electronically link the individual intelligent transportation systems, sharing data so that better travel decisions can be made. A projected $400 billion will be invested in ITS by the year 2011. Approximately 80% of that investment will come from the private sector in the form of consumer products and services. The DOT has defined the following as the components of the ITS infrastructure:
Transit fleet management: enables more efficient transit operations, using enhanced passenger information, automated data and fare collection, vehicle diagnostic systems, and vehicle positioning systems

Traveler information: linked information network of comprehensive transportation data that directly receives transit and roadway monitoring and detection information from a variety of sources

Electronic fare payment: uses multiuse traveler debit or credit cards that eliminate the need for customers to provide exact fare (change) or any cash during a transaction

Traffic signal control: monitors traffic volume and automatically adjusts the signal patterns to optimize traffic flow, including signal coordination and prioritization

Freeway management: provides transportation managers the capability to monitor traffic and environmental conditions on the freeway system, identify flow impediments, implement control and management strategies, and disseminate critical information to travelers

Incident management: quickly identifies and responds to incidents (crashes, breakdowns, cargo spills) that occur on area freeways or major arteries

Electronic toll collection: uses driver-payment cards or vehicle tags to decrease delays and increase roadway throughput

Highway-rail intersection safety systems: coordinates train movements with traffic signals at railroad grade crossings and alerts drivers with in-vehicle warning systems of approaching trains

Emergency response: focuses on safety, including giving emergency response providers the ability to pinpoint quickly the exact location of an incident, locating the nearest emergency vehicle, providing exact routing to the scene, and communicating from the scene to the hospital
The use of information from all of these system components will enhance the planner's ability to design efficient transportation networks and delivery routes. In addition, as this information is communicated to the drivers, they will also have the capability of making better decisions that will enhance customer satisfaction and reduce overall costs. For additional information, visit the DOT's website on ITS: http://www.its.dot.gov/.
Acknowledgments
The authors wish to thank Professor Mark S. Daskin of Northwestern University an
d Professor Ching-Chung Kuo of Pennsylvania State University for their construct
ive comments. They also thank their colleagues Ranga Nuggehalli, Doug Mohr, Hla
Hla Sein, Mark Davidson, and Tai Kim for their assistance. They also thank Dr. G
erald Nadler of the University of Southern California for all his support.
2. DEVELOPING EFFICIENT WORK ENVIRONMENTS FOR HOTELS AND RESTAURANTS
2.1. Overview
One area in which the application of industrial engineering techniques and principles has become increasingly important is improving worker efficiency. Between 1988 and 1997, productivity, in output per hour, in food service kitchens decreased at an average rate of 0.6% per year (U.S. Bureau of the Census 1999). Operators have tried many things in their attempts to slow and reverse this trend. Many operations have turned to technology and increased automation as a possible solution. While the incorporation of technology in the kitchen is important, Clark and Kirk (1997) failed to find a positive relationship between implementation of technology and productivity. Operations have also sought to improve productivity by purchasing labor in the form of convenience foods and increasing utilization of self-serve systems (Schechter 1997). Still other operations have sought to improve productivity through kitchen design. A study on trends in kitchen size found that kitchens have been steadily decreasing in size (Ghiselli et al. 1998). These smaller kitchens have reduced the distances workers must walk to prepare meals. If workers are walking less, they can become more productive (Liberson 1995). To develop smaller, more efficient kitchens, designers have used a variety of methods. One of the better ones is design by consensus (Avery 1985).
2.2. Design by Consensus

2.2.1. Overview
This technique uses the same standard relationship charts and diagrams as recomm
ended for traditional design methods by some of the leading experts on food serv
ice layout and design (Almanza et al. 2000; Avery 1985; Kazarian 1989). However,
there are major differences between design by consensus and some more tradition
al design methods. Traditional design methods have centered on management provid
ing operational information to a kitchen design professional. The design profess
ional takes that information and, based on experience and training, develops a l
ayout for the facility. Design by consensus recognizes that workers who will be
utilizing the facility have valuable information that can be used to improve the
design. Design by consensus does not eliminate the need for the professional designer. Rather, it provides the designer with additional information that can lead to a more efficient, user-friendly design. Information is collected from workers by the use of relationship charts.
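If each worker's chart is recorded as a mapping from a pair of work centers to a 0-4 rating, one plausible consolidation rule, an assumption here since the chapter does not spell out the arithmetic, is a simple average across workers:

```python
def consolidate(worker_charts):
    """Combine relationship charts from several workers into one consolidated
    chart by averaging, per pair of work centers, the 0-4 importance ratings.
    (Averaging is one plausible consolidation rule; pairs a worker did not
    rate are simply skipped for that worker.)"""
    totals, counts = {}, {}
    for chart in worker_charts:
        for pair, rating in chart.items():
            totals[pair] = totals.get(pair, 0) + rating
            counts[pair] = counts.get(pair, 0) + 1
    return {pair: totals[pair] / counts[pair] for pair in totals}
```

The averaged values need not be whole numbers, which is why the ranking step described later converts them back to whole-number importance ranks.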
2.2.2. Relationship Charts
The relationship chart lists the work centers in the facility and asks the workers to rate how important it is for each pair of work centers to be located close together. Figure 1 shows such a chart for the work centers within the facility: the receiving, dry storage, cold storage, preparation, cooking, serving, dining room, dishwashing, and kitchen office areas.
Figure 1 Relationship Chart for Work Centers within the Facility. (The ratings a
re anchored at 4, very important to be located close together, and 0, not import
ant to be located close together.)
Figure 3 Relationship Chart for Equipment within the Preparation Work Center: refrigerator, preparation table 1, preparation sink, mixer, vertical chopper, preparation table 2, cold storage area, cooking area, and serving area. (The ratings are anchored at 4, very important to be located close together, and 0, not important to be located close together.)
Using the information from Figure 3, the designer can easily see that the preparation area should be arranged so that preparation table 2 is located nearest the cooking area and the refrigerator is located nearest the cold storage area. The fact that there are no important links between the serving area and any of the equipment in the preparation area does not mean that the link between the two areas is not important, just that movement between the two areas is likely to be to or from any of the pieces of equipment in the work center. While the consolidated relationship charts for the work centers within the facility and the equipment within the work centers provide valuable information, it is sometimes difficult to visualize the optimum layout from the numbers in the chart. This is particularly true if the relationship chart has a large number of work centers or pieces of equipment listed on it. Therefore, it is helpful to display the information in the form of a relationship diagram that highlights the most important links. To prepare the information for use in the relationship diagrams, the numbers in the consolidated charts must be converted to whole numbers (Avery 1991). This is done by organizing the ratings for the individual links between work centers / equipment in descending order of importance. Only those links receiving a composite rating of 3.0 or higher are included in the ranking. The inclusion of less important links would clutter the relationship diagram without providing the designer any additional information. Once the rank order of the importance of the links has been established, they are split into three or four groups based on the breaks in the data. Those links in the highest grouping are assigned an importance rank of four, those in the next highest grouping are assigned an importance rank of three, and so forth until the top four groupings have been assigned importance ranks. Table 1 shows the importance ranking for the work center links from Figure 2. The designer is now ready to prepare the relationship diagrams.
TABLE 1 Importance Rankings for the Top Links from Figure 2

Link                            Consolidated Score   Importance Rank
Receiving and cold storage      4.0                  4
Preparation and cooking         4.0                  4
Cooking and serving             3.9                  4
Cold storage and preparation    3.7                  3
Serving and dining room         3.7                  3
Preparation and serving         3.5                  2
Receiving and dry storage       3.2                  1
Dry storage and cooking         3.2                  1
Cold storage and cooking        3.2                  1
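The ranking procedure above can be sketched in code. The `gap` threshold used to detect a "break in the data" is an illustrative assumption; with a gap of 0.2, the sketch reproduces the ranks shown in Table 1:

```python
def importance_ranks(link_scores, cutoff=3.0, gap=0.2, top_rank=4):
    """Assign importance ranks to links for a relationship diagram.

    Links scoring below `cutoff` are dropped.  The rest are sorted by
    descending consolidated score; a drop of `gap` or more between
    consecutive scores starts a new group (a "break in the data"), and
    each successive group gets the next-lower rank, starting at `top_rank`.
    """
    ranked = sorted(((score, link) for link, score in link_scores.items()
                     if score >= cutoff), reverse=True)
    ranks, rank, prev = {}, top_rank, None
    for score, link in ranked:
        # round the difference to sidestep floating-point noise (e.g. 3.9 - 3.7)
        if prev is not None and round(prev - score, 6) >= gap:
            rank -= 1  # a break in the data: start the next group
        ranks[link] = rank
        prev = score
    return ranks
```

A real design team would pick the breaks by eye; the fixed gap here simply makes the grouping reproducible.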
Figure 5 Layout-Type Relationship Diagram Developed from Figure 4.
and 2. Therefore, it is up to the designer to ensure that the kitchen office is properly placed to provide adequate supervision of the kitchen. The placement of the office is further complicated by other functions for which the office is used. Birchfield (1988) describes the need for the office to be accessible to customers needing to talk to management without having those customers walk through the kitchen to reach the office. Further, managers must also monitor the movement of food and supplies in and out of the building and storage areas. This places the office near the receiving area, which would cause problems for customers accessing the office. The final location of the office is often a compromise and depends on which function management views as its primary one.
2.2.5. Designing for Efficient Utility Use
Another important consideration when arranging the kitchen is efficient use of utilities. As with the office, the information provided by the relationship charts and diagrams will not adequately address this issue. Frequently, designing for efficient utility use conflicts with designing for maximum worker efficiency. Kazarian (1989) recommends that heating and cooling equipment be separated. While this is important for controlling energy use, it does not always provide for efficient production. Frequently, food is moved directly from a refrigerator to a piece of cooking equipment. Designing for efficient energy use would place the pieces of equipment some distance apart, while designing for efficient production would place them adjacent to each other. In this case, the designer must weigh the need for energy conservation against the need for worker efficiency and develop an arrangement that minimizes total production costs. There are several other arrangement considerations that can help reduce energy consumption. One is locating steam- and hot-water-generating equipment near the equipment that uses steam or hot water. Shortening the distance between the generation and use of steam or hot water will reduce energy loss from the pipes connecting the equipment. Another way to help reduce energy consumption is to consolidate as many pieces of heat-generating equipment under one exhaust hood as possible. This will reduce costs by reducing ventilation requirements. Finally, Avery (1985) recommends that compressors and condensers for large cooling equipment such as walk-in refrigerators and freezers be located so that the heat they generate can be easily exhausted, thereby reducing the cooling requirements for the kitchen. As with the example in the previous paragraph, arranging the kitchen to take advantage of these energy conservation techniques may conflict with arranging for maximum worker efficiency. It is up to the designer to develop a compromise arrangement that will minimize the total cost of operating the facility.
2.3. Evaluating the Efficiency of a Layout

2.3.1. Overview
Kazarian (1989) describes a cross-charting technique that uses distance, move, and travel charts to compare two or more layouts to determine which is the most efficient. This technique recognizes that if the time spent walking between equipment can be reduced, then the worker can spend more time being productive. Avery (1985) took the use of cross-charting one step further. He recognized that not all production workers are paid the same wage. It is possible to have an arrangement that is the most efficient based on total man-hours, but not the most efficient in total labor cost. Because of different training and skill requirements, it is possible for the highest-paid workers to make twice what the lowest-paid workers make. If the travel charts are adjusted for wages, then the comparison of arrangements is based on cost efficiency and not man-hour efficiency.
their wages. Finally, a single sum is calculated for all the charts. That number can now be used to compare two or more arrangements. The arrangement that yields the smallest number is the most efficient in terms of labor costs. Table 4 shows the travel charts for this example. The weighted sum for the prep cook's chart is 24,850 and the weighted sum of the dishwasher's chart is 9,240. Combining the two, the weighted sum for the operation is 34,090. The units are in currency times distance. The exact units used for currency and distances are not important, as long as they are consistent for each layout being evaluated.
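The wage-weighted comparison can be sketched as a short calculation; the function name and all chart numbers below are illustrative, not the chapter's actual data:

```python
def weighted_travel_sum(moves, distances, wage_rate):
    """Wage-weighted travel score for one worker: for each pair of equipment,
    multiply the number of moves by the distance between the pair, sum the
    products, and weight the total by the worker's wage rate.
    Units are currency times distance; smaller is more cost-efficient."""
    return wage_rate * sum(n * distances[pair] for pair, n in moves.items())

# Illustrative comparison for two workers at different wage rates:
prep_moves = {("prep table", "sink"): 80, ("sink", "oven"): 30}
dish_moves = {("dishwasher", "prep table"): 60}
dist = {("prep table", "sink"): 3.0, ("sink", "oven"): 2.0,
        ("dishwasher", "prep table"): 5.0}

total = (weighted_travel_sum(prep_moves, dist, wage_rate=15.0)
         + weighted_travel_sum(dish_moves, dist, wage_rate=9.0))
```

The layout whose combined total is smallest is the most labor-cost-efficient, which is exactly the comparison the text describes for the 24,850 and 9,240 sums.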
2.3.5. Evaluating the Charts
The final step is to evaluate the charts to identify ways to improve the arrangement of equipment. There are basically two different ways to do this. One is to modify the arrangement, and the other is to modify the production procedures. For an efficient arrangement, the majority of the movement will be between adjacent equipment. Since the distance chart was set up so that adjacent equipment in the arrangement appeared in adjacent cells, the move chart can be used to identify problems with the arrangement. If the work center is properly arranged, the numbers in the cells along the diagonal line should be large and get progressively smaller the farther they are from the line. Rearranging the layout so that the largest numbers are adjacent to the line will improve the efficiency of the operation. In the prep cook's chart, one possible change in the arrangement that may reduce the total distance traveled is to move the walk-in closer to the sink than the oven is. The total number of moves between the sink and the walk-in is 65 (45 from the walk-in to the sink and 20 from the sink to the walk-in), while the total number of moves between the sink and the oven is only 20 (10 from the oven to the sink and 10 from the sink to the oven). Because moving one piece of equipment affects other pieces of equipment, the only way to be sure that the new arrangement is indeed better is to chart the new arrangement and compare the total weighted sums for both arrangements. When there is more than one travel chart because of different wage rates, both charts must be prepared for the new arrangement.

Be careful when making adjustments based on the move chart for the lowest-paid worker. While those adjustments may reduce the total man-hours needed to perform the task, they may actually increase the total labor cost for the operation. For example, from the dishwasher's chart in Table 3, one possible change would be to move the dishwashing machine closer to the prep table. While this would reduce the travel of the lowest-paid worker, it may increase the travel of the highest-paid worker. Therefore, moving the dishwashing machine may increase labor dollars, making this an unwise adjustment to the arrangement.

Identifying procedural changes using these charts is not as straightforward as identifying arrangement changes. Basically, these charts can only be used to identify problems with procedures requiring further investigation. In the example in Table 3, there is a total of 100 trips to and from the dishwasher made by the prep cook and another 120 trips made by the dishwasher. The dishwasher's trips are all between the dishwasher and the prep table. This is an area requiring further investigation to see whether procedural changes are needed. The designer should seek to uncover the reason for the trips. It could be that the prep cook is bringing dirty dishes to the dishwasher and returning empty-handed and that the dishwasher is returning clean dishes to the prep table and returning empty-handed. If
TABLE 4 Travel Charts for Prep Cook and Dishwasher

(One chart per worker, giving the to/from travel values among the prep table, sink, oven, walk-in, and dishwasher.)
ces the bottom at approximately 69 cm (27 in.) for women and 74 cm (29 in.) for men. If the sink is to be used to soak pans, the bottom can be 15 cm (6 in.) lower, provided small items are not placed in the sink. Avery (1985) also presents information on the importance of height for stacked items such as an oven. He recommends that deck ovens be stacked not more than two high and that the bottom oven be at least 51 cm (20 in.) off the ground. If three ovens are stacked and the first oven is at least 51 cm (20 in.) off the ground, there is a greater risk that workers will be burned trying to use the top oven. If the top oven is at an acceptable height, the bottom oven will be below 51 cm (20 in.), increasing the risk that a worker will be burned using that oven. Further, the worker will have to expend more energy stooping to place things in and remove them from the bottom oven. Stacking of other types of ovens is also not recommended. Conventional and convection ovens are too tall to stack without the risk of serious burns to those using them. It is far safer to put the oven on a stand so the opening is between the worker's waist and shoulders. Restaurants often employ several different types of storage devices, from refrigeration and warming cabinets to plate and tray dollies to standard shelves. The proper height for storage of frequently used items is between the worker's waist and shoulders (Avery 1985). Placing items too high can lead to items being improperly placed on shelves, which can lead to falling objects and the related potential for damage and personal injury. Placing items too low forces workers to stoop, causing them to expend additional energy and putting a strain on their back muscles. In the process of designing small, efficient kitchens, designers have turned to undercounter storage units, such as
TECHNOLOGY
refrigerators. If the use of undercounter storage units is necessary, the design
ers should consider using drawers instead of reach-ins because they require less
bending and stooping to access products. Finally, if plate and tray dollies are
used, they should be self-leveling, meaning that as one plate or tray is remove
d, the load on the springs at the bottom of the stack decreases, decreasing the
compression of the spring and raising the stack. This keeps the items within eas
y reach of the workers. With top-loading pieces of equipment such as steam-jacke
ted kettles, pots and pans, and mixers, it is important for the user to be able
to reach comfortably and safely over the rim. This helps prevent burns, spills,
and similar accidents when workers are adding ingredients or stirring the mixtur
e. Avery (1985) recommends a rim height of no greater than 97 cm (38 in.) for st
eam-jacketed kettles, whenever possible. Operations that use large kettles must
exceed that height to ensure that the draw-off is high enough to allow a pan to
be placed beneath it. The same height recommendations for steam-jacketed kettles also apply to other top-loading equipment, such as stock pots and mixers.
2.4.4. Workstation Dimensions
In addition to maintaining proper surface height, the length and width of the workstations must also be addressed to help workers be productive. For a standing worker, the workstation should be arranged so that the majority of the tasks can be performed in a 46-cm (18-in.) arc centered on the worker (Avery 1985). Supplies needed to perform the tasks can be stored just outside that arc. Since people tend to spread out to fill their environment, it is important that table size be restricted to ensure efficient production. Almanza et al. (2000) recommend limiting the table size for a single worker to 61-76 cm (24-30 in.) wide and 1.22-1.83 m (4-6 ft) long. The width of the table can be increased to 91 cm (36 in.) if the back of the table will be used for storage and to 107 cm (42 in.) if two workers will be using opposite sides of the table. If the two workers will be working side by side, then the table length should be 2.44-3.05 m (8-10 ft).
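The width limits above can be encoded as a small rule-of-thumb check. The helper below is a hypothetical illustration of our own: it merely restates the limits attributed to Almanza et al. (2000); the function name and interface are not from any published tool.

```python
# Hypothetical helper encoding the prep-table width limits quoted above
# from Almanza et al. (2000). Values are centimetres; the interface is
# our own illustration.

def max_table_width_cm(back_used_for_storage: bool = False,
                       two_sided: bool = False) -> int:
    """Upper limit on work-table width for efficient production."""
    if two_sided:
        return 107  # 42 in.: two workers on opposite sides of the table
    if back_used_for_storage:
        return 91   # 36 in.: rear strip of the table holds supplies
    return 76       # 30 in.: top of the 61-76 cm band for a single worker
```

For example, `max_table_width_cm(two_sided=True)` returns 107, matching the 42-in. recommendation for facing workers.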
3. CONTROLLING CAPITAL COSTS

3.1. Overview
As with any industry, controlling capital costs is an important ingredient in developing and maintaining a successful operation. Earlier sections presented information on controlling production costs by increasing worker productivity. Equally important is the need to control capital costs. While capital costs for restaurants and hotels may not be as large as for heavy industry, they are significant when compared to the revenue generated by the respective operations. Therefore, it is important that operations not waste money on capital expenditures. By using value engineering and life-cycle costing, operators are able to control costs by making better capital-expenditure decisions.
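Life-cycle costing reduces to a short present-value calculation: compare alternatives by purchase price plus the discounted stream of operating costs, rather than by sticker price alone. The sketch below is a generic illustration, not a method given in the chapter; all prices, operating costs, and the discount rate are assumed numbers.

```python
# Generic life-cycle costing sketch (assumed figures throughout):
# purchase price plus the present value of annual operating costs
# over the equipment's service life.

def life_cycle_cost(purchase, annual_operating, years, rate):
    """Purchase price plus discounted annual operating costs."""
    pv_factor = sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))
    return purchase + annual_operating * pv_factor

# A cheaper oven with high energy/maintenance costs vs. a costlier,
# more efficient one (illustrative numbers, 10-year life, 8% discount rate):
economy = life_cycle_cost(purchase=4000, annual_operating=1200, years=10, rate=0.08)
premium = life_cycle_cost(purchase=6500, annual_operating=700,  years=10, rate=0.08)
# Here the premium unit is the better buy once operating costs are
# discounted to present dollars, despite its higher purchase price.
```

The point of the exercise is that the ranking can flip relative to purchase price alone, which is what value engineering and life-cycle costing are meant to expose.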
3.2. Value Engineering
In its simplest terms, value engineering is the process of reviewing purchase de
cisions to determine whether they are cost effective. Value engineering seeks to
determine whether the value added to the operation by the purchase provides the
SUMMARY
The industrial engineering techniques discussed in this chapter are but a few of the techniques currently being used by hospitality operations. Successful foodservice operations of the future will find ways to change the negative productivity trend that the industry has been experiencing for the last
20 years. They will become more productive by making better use of existing resources and exploiting the benefits of new technology. The prudent application of the principles and techniques of industrial engineering will help make a positive productivity trend a reality.
REFERENCES

Almanza, B. A., Kotchevar, L. H., and Terrell, M. E. (2000), Foodservice Planning: Layout, Design, and Equipment, 4th Ed., Prentice Hall, Upper Saddle River, NJ.
Avery, A. C. (1985), A Modern Guide to Foodservice Equipment, Rev. Ed., Waveland Press, Prospect Heights, IL.
Birchfield, J. C. (1988), Design and Layout of Foodservice Facilities, Van Nostrand Reinhold, New York.
Borsenik, F. D., and Stutts, A. S. (1997), The Management of Maintenance and Engineering Systems in the Hospitality Industry, 4th Ed., John Wiley & Sons, New York.
Clark, J., and Kirk, D. (1997), "Relationships Between Labor Productivity and Factors of Production in Hospital and Hotel Foodservice Departments: Empirical Evidence of a Topology of Food Production Systems," Journal of Foodservice Systems, Vol. 10, No. 1, pp. 23-39.
Foster, F., Jr. (1998), "Operations and Equipment: Reducing the Pain of Value Engineering," Nation's Restaurant News, Vol. 20, No. 4, pp. 16-26.
Ghiselli, R., Almanza, B. A., and Ozaki, S. (1998), "Foodservice Design: Trends, Space Allocation, and Factors That Influence Kitchen Size," Journal of Foodservice Systems, Vol. 10, No. 2, pp. 89-105.
Katsigris, C., and Thomas, C. (1999), Design and Equipment for Restaurants and Foodservice: A Management View, John Wiley & Sons, New York.
Kazarian, E. D. (1989), Foodservice Facilities Planning, Van Nostrand Reinhold, New York.
Liberson, J. (1995), "Food and Beverage: Cooking Up New Kitchens: Technology and Outsourcing Have Created Smaller, More Efficient Kitchens," Lodging, Vol. 21, No. 2, pp. 69-72.
Nebel, D. C., III (1991), Managing Hotels Effectively: Lessons From Outstanding General Managers, Van Nostrand Reinhold, New York.
Newman, D. G. (1980), Engineering Economic Analysis, Engineering Press, San Jose, CA.
Schechter, M. (1997), "The Great Productivity Quest," Food Management, Vol. 32, No. 1, pp. 46, 48, 52, 54.
U.S. Bureau of the Census (1999), Statistical Abstract of the United States: 1999, Washington, DC.
III.A
Organization and Work Design
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
PERFORMANCE IMPROVEMENT MANAGEMENT
1. INTRODUCTION

Most definitions of leadership reflect the assumption that it involves a social influence process whereby intentional influence is exerted by one person over other people to structure the activities and relationships in a group or organization (Yukl 1998). Leadership occurs when one group member modifies the motivation and competencies of others in the group (Bass 1990). Work motivation is the willingness of an individual to invest energy in productive activity. Thus, leadership and motivation are interwoven, inseparable concepts; a core outcome of effective leadership is a higher willingness on the part of the employees to invest energy in performing their tasks. A new genre of leadership and motivation theories has been shown to affect organizational effectiveness in ways that are quantitatively greater than, and qualitatively different from, the effects specified by previous theories (House and Shamir 1993; for a meta-analytic review see Lowe et al. 1996). These theories have led to new applications in human resource management. As shown in Figure 1, we first review the early vs. the most recent paradigms of leadership, namely the shift from transactional to transformational, charismatic, or visionary leadership. This shift is manifested by changes in the bases for work motivation from an emphasis on calculative, individualistic, extrinsic, short-term motivators toward more expressive, collectivistic, intrinsic, long-term motivators. Taken together, these new approaches have been shown to positively impact organizational outcomes, including performance and employee development. The impact of the new leadership and motivation paradigm on organizational outcomes has implications for strategic human resource management, specifically recruiting, performance management, training and development, and compensation.
2. LEADERSHIP AND MOTIVATION

2.1. The Classic Paradigm of Leadership: A Transactional Approach
I was his loyal friend, but no more. . . . I was so hurt when I discovered that in spite of my loyalty, he always preferred Arie Dery to me. I asked him: "Bibi, why?" and he answered: "He brings me 10 seats in the parliament while you don't have such a bulk of voters behind you."
(Itzhak Levi, former Minister of Education, on Benjamin Netanyahu, former Prime Minister of Israel)
Many of the classic approaches to leadership concentrated on how to maintain or achieve results as expected or contracted between the leader and the employees. These transactional theories and practices viewed leadership in terms of contingent reinforcement, that is, as an exchange process in which employees are rewarded or avoid punishment for enhancing the accomplishment of agreed-upon objectives. Figure 2 shows the process by which transactional leaders affect their employees' motivation and performance. Transactional leaders help employees recognize what the role and task requirements are to reach a desired outcome. The transactional leader helps clarify those requirements for employees, resulting in increased confidence that a certain level of effort will result in desired performance. By recognizing the needs of employees and clarifying how those needs can be met, the transactional leader enhances the employee's motivational level. In parallel, the transactional leader recognizes what the employee needs and clarifies for the employee how these needs will be fulfilled in exchange for the employee's satisfactory effort and performance. This makes the designated outcome of sufficient value to the employee to result in his or her effort to attain the outcome. The model is built upon the assumption that the employee has the capability to perform as required. Thus, the expected effort is translated into the expected performance (Bass 1985). There is support in the literature for the effectiveness of transactional leadership. Using contingent reinforcement, leaders have been shown to increase employee performance and job satisfaction and to reduce job role uncertainty (Avolio and Bass 1988). For example, Bass and Avolio (1993) report results collected from 17 independent organizations indicating that the correlations between the contingent reward leadership style and employees' effectiveness and satisfaction typically ranged from 0.4 to 0.6, depending on whether promises or actual rewards were examined. Similarly, Lowe et al. (1996), in their meta-analysis of 47 studies, report that the mean corrected correlation between contingent reward and effectiveness was 0.41. Although potentially useful, transactional leadership has several serious limitations. First, contingent rewarding, despite its popularity in organizations, appears to be underutilized. Time pressure, poor performance-appraisal systems, doubts about the fairness of the organizational reward system, or lack of managerial training cause employees not to see a direct relationship between how hard they work and the rewards they receive. Furthermore, reward is often given in the form of feedback from the superior, feedback that may also be counterproductive. What managers view as valued feedback is not always perceived as relevant by the employees and may be weighted less than feedback received from the job or coworkers. In addition, managers appear to avoid giving negative feedback to employees. They distort such feedback to protect employees from the truth. Second, transactional leadership may encourage a short-term approach toward attaining organizational objectives.
[Figure 2 diagram: The leader (L) recognizes what the follower (F) must do to attain designated outcomes and clarifies F's role, building F's confidence in meeting role requirements (subjective probability of success). In parallel, L recognizes what F needs and clarifies how F's need fulfillment will be exchanged for enacting the role, establishing the value of the designated outcomes for F (need-fulfilling value). Both paths feed F's motivation to attain the desired outcomes (expected effort).]

Figure 2  Transactional Leadership and Follower Motivation. (Adapted with the permission of The Free Press, a Division of Simon & Schuster, from Leadership and Performance beyond Expectations, by Bernard M. Bass. Copyright 1985 by The Free Press.)
of her credentials include winning the first-ever Olympic gold medal in U.S. women's basketball history; six national championships; an .814 winning percentage, fifth among all coaches in college basketball history; and so many trips to basketball's Final Four that her record in all likelihood will never be equaled. Her 1997-1998 University of Tennessee team finished the regular season 30-0 and won its third consecutive national title. Beyond coaching records, however, what is Summitt like as a person? Among other things, when she walks into a room, her carriage is erect, her smile confident, her manner of speaking direct, and her gaze piercing. Her calendar documents her pressured schedule. But she is also a deeply caring person, appropriate for a farm daughter who grew up imitating her mother's selfless generosity in visiting the sick or taking a home-cooked dinner to anyone in need. Summitt clearly has many extraordinary qualities as a coach, but perhaps most striking is the nature of the relationship she develops with her players (Hughes et al. 1998). Pat Summitt does not rely only on her players' contracts to gain their highest levels of motivation and performance. The best transactional, contract-based leadership cannot assure such high performance on the part of the players. Summitt clearly has the additional qualities of what is referred to as transformational leadership. Indeed, the past 15 years have seen some degree of convergence among organizational behavior scholars concerning a new genre of leadership theory, alternatively referred to as transformational, charismatic, and visionary. Transformational leaders do more with colleagues and followers than set up simple transactions or agreements (Avolio 1999). Transformational leadership theory explains the unique connection between the leader and his or her followers that accounts for extraordinary performance and accomplishments for the larger group, unit, and organization (Yammarino and Du-
l payoffs to subordinates for work-goal attainment and making the path to these payoffs easier to travel by clarifying it, reducing roadblocks and pitfalls, and increasing the opportunities for personal satisfaction en route (House 1971, p. 324). The common denominator of the behavioral and contingency approaches to leadership is that the individual is assumed to be a rational maximizer of personal utility. They all explain work motivation in terms of a rational choice process in which a person decides how much effort to devote to the job at a given point of time. Written from a calculative-instrumental perspective, discussions and applications of work motivation have traditionally emphasized the reward structure, goal structure, and task design as the key factors in work motivation and deemphasized other sources of motivation. Classic theories assume that supervisors, managers, leaders, and their followers are able to calculate correctly or learn expected outcomes associated with the exercise of theoretically specified behaviors. These theories, then, make strong implicit rationality assumptions despite substantial empirical evidence that humans are subject to a myriad of cognitive biases and that emotions can be strong determinants of behavior (House 1995). The new motivation and leadership paradigms recognize that not all of the relevant organizational behaviors can be explained on the basis of calculative considerations and that other considerations may enrich the explanation of human motivation (Shamir 1990). Transformational and charismatic leadership approaches assume that human beings are not only instrumental-calculative, pragmatic,
and goal-oriented but are also self-expressive of feelings, values, and self-concepts. We are motivated to do things because it makes sense to do them from a rational-instrumental point of view, but also because by doing so we can discharge moral obligations or because through such a contribution we can establish and affirm a cherished identity for ourselves. In other words, because it is useful, but also because it is right, or because it feels right. Making the assumption that humans are self-expressive enables us to account for behaviors that do not contribute to the individual self-interest, the most extreme of which is self-sacrifice (House and Shamir 1993; Shamir 1990; Shamir et al. 1993). Huy (1999), in discussing the emotional dynamics of organizational change, refers to the emotional role of charismatic and transformational leaders. According to Huy, at the organizational level, the emotional dynamic of encouragement refers to the organization's ability to instill hope among its members. Organizational hope can be defined as the wish that our future work situation will be better than the present one. Transformational leaders emotionally inspire followers through communication of vivid images that give flesh to a captivating vision so as to motivate them to pursue ambitious goals. The most important work for top managers is managing ideology, not strategy making. Transformational leaders can shape an ideological setting that encourages enthusiasm, nurtures courage, reveals opportunities, and therefore brings new hope and life into their organizations. Thus, classic leadership theories addressed the instrumental aspects of motivation, whereas the new paradigm emphasizes expressive aspects as well. Such emotional appeals can be demonstrated by the words of Anita Roddick, founder and CEO of The Body Shop:

Most businesses focus all the time on profits, profits, profits . . . I have to say I think that is deeply boring. I want to create an electricity and passion that bond people to the company. You can educate people by their passions . . . You have to find ways to grab their imagination. You want them to feel that they are doing something important . . . I'd never get that kind of motivation if we were just selling shampoo and body lotion. (Conger and Kanungo 1998, pp. 173-174)
2.3.2. From an Individualistic-Oriented toward a Collectivistic-Oriented Motivation to Work
Nearly all classic models of motivation in organizational behavior, in addition to being calculative, are also hedonistic. These classic approaches regard most organizational behavior as hedonistic and treat altruistic, prosocial, or cooperative behavior as some kind of a deviation on the part of the organizational members. This trend may be traced in part to the influence of the neoclassical paradigm in economics and psychology, which is based on an individualistic model of humans. American psychology, from which our work motivation theories have stemmed, has strongly emphasized this individualistic perspective (Shamir 1990). The recent recognition that not all relevant work behaviors can be explained in terms of hedonistic considerations has led to the current interest in prosocial organizational behaviors, those that are performed with the intent of helping others or promoting others' welfare. However, between totally selfish work behavior and purely altruistic behaviors, many organizationally relevant actions are probably performed both for a person's own sake and for the sake of a collectivity such as a team, department, or organization. Individuals may approach social situations with a wide range of motivational orientations that are neither purely individualistic (concerned only with one's satisfaction) nor purely altruistic (concerned only with maximizing the other's satisfaction). Deutsch (1973, in Shamir 1990) uses the term collectivistic to refer to a motivational orientation that contains a concern with both one's own satisfaction and others' welfare. The importance of discussing the linkages between individual motivations and collective actions stems from the increasing recognition of the importance of cooperative behaviors for organizational effectiveness. During the 1990s, hundreds of American companies (e.g., Motorola, Cummins Engine, Ford Motor Co.) reorganized around teams to leverage the knowledge of all employees. Now it appears that the concept is going global, and recent research conducted in Western Europe has supported the wisdom of teams. For example, the Ritz-Carlton Hotel Co. has created self-directed work teams with the goal of improving quality and reducing costs. In the hotel's Tysons Corner, Virginia, facility, a pilot site for the company's program, the use of these teams has led to a decrease in turnover from 56% to 35%. At a cost of $4,000 to $5,000 to train each new employee, the savings were significant. At Air Products, a chemical manufacturer, one cross-functional team, working with suppliers, saved $2.5 million in one year. OshKosh B'Gosh has combined the use of work teams and advanced equipment. The company has been able to increase productivity, speed, and flexibility at its U.S. manufacturing locations, enabling it to maintain 13 of its 14 facilities in the United States and making it one of the few children's garment manufacturers able to do so (Gibson et al. 2000). This shift in organizational structure calls for higher cooperation in group situations. Cooperation, defined as the willful contribution of personal effort to the completion of interdependent jobs, is essential whenever people must coordinate activities among differentiated tasks. Individualism is the condition in which personal interests are accorded greater importance than are the needs of groups.
sic motivation and that intrinsic rewards can weaken extrinsic motivation. However, intrinsic job satisfaction cannot be bought with money. Regardless of how satisfactory the financial rewards may be, most individuals will still want intrinsic gratification. Money is an important motivator because it is instrumental for the gratification of many human needs. Basic existence needs, social needs, and growth needs can be satisfied with money or with things money can buy. However, once these needs are met, new growth needs are likely to emerge that cannot be gratified with money. Pay, of course, never becomes redundant; rather, intrinsic motivators that ensure growth-stimulating challenges must supplement it (Eden and Globerson 1992). Classic motivational and leadership theories have emphasized both extrinsic and intrinsic rewards. However, transformational leadership theory goes beyond the rewards-for-performance formula as the basis for work motivation and focuses on higher-order intrinsic motivators. The motivation for development and performance of employees working with a transformational leader is driven by internal and higher-order needs, in contrast to the external rewards that motivate employees of transactional leaders (Bass and Avolio 1990). The transformational leader gains heightened effort from employees as a consequence of their self-reinforcement from doing the task. To an extent, transformational leadership can be viewed as a special case of transactional leadership with respect to exchanging effort for rewards. In the case of transformational leadership, however, the rewarding is internal (Avolio and Bass 1988).
The intrinsic basis for motivation enhanced by transformational and charismatic leaders is emphasized in Shamir et al.'s (1993) self-concept theory. Self-concepts are composed, in part, of identities. The higher the salience of an identity within the self-concept, the greater its motivational significance. Charismatic leaders achieve their transformational effects through implicating the self-concept of followers by the following four core processes:

1. By increasing the intrinsic value of effort, that is, increasing followers' intrinsic motivation by emphasizing the symbolic and expressive aspects of the effort, the fact that the effort itself reflects important values.
2. By empowering the followers, not only by raising their specific self-efficacy perceptions but also by raising their generalized sense of self-esteem, self-worth, and collective efficacy.
3. By increasing the intrinsic value of goal accomplishment, that is, by presenting goals in terms of the values they represent. Doing so makes action oriented toward the accomplishment of these goals more meaningful to the follower in the sense of being consistent with his or her self-concept.
4. By increasing followers' personal or moral commitment. This kind of commitment is a motivational disposition to continue a relationship, role, or course of action and invest effort regardless of the balance of external costs and benefits and their immediate gratifying properties.
2.4. The Full Range Leadership Model: An Integrative Framework
Bass and Avolio (1994) propose an integrative framework that includes transformational, transactional, and nontransactional leadership. According to Bass and Avolio's full range leadership model, leadership behaviors form a continuum in terms of activity and effectiveness. Transformational leadership behaviors are at the higher end of the range and are described as more active, proactive, and effective than either transactional or nontransactional leadership. Transformational leadership includes four components. Charismatic leadership, or idealized influence, is defined with respect to both the leader's behavior and employee attributions about the leader. Idealized leaders consider the needs of others over their own personal needs, share risks with employees, are consistent and trustworthy rather than arbitrary, demonstrate high moral standards, avoid the use of power for personal gain, and set extremely challenging goals for themselves and their employees. Taken together, these behaviors make the leader a role model for his or her employees; the leader is admired, respected, trusted, and ultimately identified with over time. Jim Dawson, as the president of Zebco, the world's largest fishing tackle company, undertook a series of initiatives that would symbolically end the class differences between the workforce and management. Among other actions, he asked his management team to be role models in their own arrival times at work. One executive explained Dawson's thinking:

What we realized was that it wasn't just Jim but we managers who had to make sure that people could see there were no double standards. As a manager, I often work long hours. . . . Now the clerical and line staff don't see that we are here until 7 because they are here only until 4:45 p.m. As managers, we don't allow ourselves to use that as an excuse. . . . Then it appears to the workforce that you have a double standard. An hourly person cannot come in at 7:05 a.m. He has got to be here at 7:00 a.m. So we have to be here at the same time. (Conger and Kanungo 1998, pp. 136-137)

Similarly, Lee Iacocca reduced his salary to $1 in his first year at Chrysler because he believed that leadership means personal example. Inspirational motivation involves motivating employees by providing deeper meaning and challenges in their work. Such leaders energize their employees' desire to work cooperatively to contribute to the collective mission of their group. The motivational role of visio
[Figure 3 diagram: two panels, a suboptimal profile and an optimal profile, locating each leadership style (LF, MBE-P, MBE-A, CR, and the 4 I's) along axes of activity (passive to active), effectiveness (ineffective to effective), and frequency of display (a little to a lot).]

Figure 3  Full Range Leadership Model: Optimal and Suboptimal Profiles as Created by the Frequency of Displaying the Leadership Styles' Behaviors. (Adapted from B. J. Avolio, Full Leadership Development: Building the Vital Forces in Organizations, p. 53, copyright 1999 by Sage Publications, Inc., reprinted by permission of Sage Publications, Inc.)
have shown active transactional and proactive transformational leadership to be far more effective than other styles of leadership. The frequency dimension manifests the difference between the optimal and suboptimal profiles. An optimal managerial profile is created by more frequent use of transformational and active transactional leadership styles along with less frequent use of passive transactional and laissez-faire leadership. The opposite frequencies create a suboptimal profile.
3. LEADERSHIP OUTCOMES
My message was, and will always be, "There are no limits to the maximum."
(T., an exemplary infantry platoon commander in the Israel Defense Forces)
Transformational leaders encourage followers "to both develop and perform at levels above what they may have felt was possible, or beyond their own personal expectations" (Bass and Avolio 1990, p. 234, emphasis in original). Thus, certain levels of performance as well as development become targeted outcomes. The linkages between transformational leadership, employee development, and performance are presented in Figure 4.
[Figure 4 diagram: Transformational Leadership leads to Employee Development (personal development, development of attitudes toward the leader, and group development), which in turn leads to Performance.]

Figure 4  Linkages among Transformational Leadership, Employee Development, and Performance.
[Figure 3 legend: LF = Laissez-faire; MBE-P = Management by Exception (Passive); MBE-A = Management by Exception (Active); CR = Contingent Reward; 4 I's = Idealized Influence, Inspirational Motivation, Intellectual Stimulation, and Individualized Consideration.]
Transformational leadership was also found to be positively related to financial measures and productivity among MBA students engaged in a complex, semester-long competitive business simulation. Bryant (1990, in Bass and Avolio 1993) confirms that nursing supervisors who were rated by their followers as more transformational managed units with lower turnover rates. Howell and Avolio (1993) found positive relationships between transformational leadership and objective unit performance over a one-year interval among managers representing the top four levels of management in a large Canadian financial institution. German bank unit performance over longer vs. shorter time periods was higher in banks led by leaders who were rated by their employees as more transformational (Geyer and Steyrer 1998, in Avolio 1999). However, even when predictions regarding objective performance outcomes support the model, "we are still faced with plausible alternative cause-and-effect relationships" (Bass and Avolio 1993, p. 69). Therefore, to establish causal effects, several experiments (Barling et al. 1996; Crookall 1989; Dvir et al. in press; Howell and Frost 1989; Kirkpatrick and Locke 1996; Sosik et al. 1997) were conducted either in the field or in the laboratory. Overall, these experiments confirmed the causal impact of transformational or charismatic leadership on performance outcomes. Such an experimental design can confirm that the direction of causal flow is indeed from transformational leadership to the hypothesized performance outcomes, as opposed to instances where enhanced follower performance causes higher transformational leadership ratings. Howell and Frost (1989) found that experimentally induced charismatic leadership positively affected task performance, task adjustment, and adjustment to the leader and the group.

852
PERFORMANCE IMPROVEMENT MANAGEMENT

Kirkpatrick and Locke (1996) manipulated three core components common to charismatic and transformational leadership in a laboratory simulation among 282 students. They found that a vision of high quality significantly affected several attitudes, such as trust in the leader and congruence between participants' beliefs and values and those communicated through the vision. Vision implementation affected performance quality and quantity. Barling et al. (1996), in a field experiment among 20 branch managers in a large Canadian bank, confirmed the positive impact of transformational leadership training on employees' organizational commitment and on two aspects of branch-level financial performance. Dvir et al. (in press) conducted a longitudinal randomized field experiment among military leaders and their followers. They confirm the positive impact of transformational leadership, enhanced through a training intervention, on direct followers' personal development and on indirect followers' objective performance. Two meta-analytic studies have been conducted recently. Lowe et al. (1996) found that significantly higher relationships were observed between transformational scales and effectiveness than between transactional scales and effectiveness across 47 samples. This pattern held up across two levels of leadership and with both hard (number of units) and soft (performance appraisals) measures of performance (see Figure 5). Coleman et al. (1995) found that, across 27 studies, the transformational leadership styles were more strongly related to performance than the transactional styles. The average relationship across studies between the transformational leadership factors and performance ranged from 0.45 to 0.60; for transactional leadership it was 0.44; for management-by-exception (active), 0.22; for management-by-exception (passive), 0.13; and for laissez-faire, 0.28. To sum up, there is sufficient empirical evidence to conclude that transformational leadership has a positive impact on both perceived and objective performance and that this impact is stronger than the effects of transactional leadership.
3.2.
Employee Development
Rather than focusing solely on the exchange with an eye toward performance, transformational leaders concentrate on developing employees to their full potential. Indeed, one of the most prominent aspects of transformational leadership compared to transactional leadership concerns employee developmental processes (Avolio and Gibbons 1988). The transformational leader evaluates the potential of all employees in terms of their being able to fulfill current commitments and future positions with even greater responsibilities. As a result, employees are expected to be more prepared to take on the responsibilities of the leader's position and to be transformed, as Burns (1978) originally argued, from followers into leaders. In contrast, working with the transactional leader, employees are expected to achieve agreed-upon objectives but are not expected to undergo a developmental process whereby they will assume greater responsibility for developing and leading themselves and others.
Figure 5 The Relationships between the Full Range Leadership Model's Styles and Leader Effectiveness in Public and Private Organizations in Lowe et al.'s (1996) Meta-analysis of 47 Samples. (MLQ = Multifactor Leadership Questionnaire, which measures transformational, transactional, and nontransactional leadership; Relationship = correlations between leadership style and leader effectiveness; Follower measures = subordinate perceptions of leader effectiveness; Organizational measures = quasi-institutional measures of leader effectiveness, including both hard measures [e.g., profit] and soft measures [e.g., supervisory performance appraisals].)
… from transformational and charismatic leadership theories, and have called for its inclusion as part of the developmental process undergone by employees of transformational leaders. Bass (1996) came to agree with Burns that to be transformational, a leader must be morally uplifting. One of the difficulties in applying this part of the theory is that, according to Kohlberg, moving from one moral stage to the next may take years, a time span too long to wait for results to appear in typical leadership studies. Therefore, moral development in the short term can be represented by the employees' internalization of organizational moral values (Dvir et al. in press). The transition from a desire to satisfy solely personal interests to a desire to satisfy the broader collective interests is part of moral development, according to Kohlberg (1973). Bass (1985) places special emphasis on this aspect of moral development and suggests that transformational leaders get their employees to transcend their own self-interest for the sake of the team or organization. In Shamir's (1991) self-concept theory, the collectivistic orientation of the employees is one of the central transformational effects of charismatic leaders. Dvir et al. (in press) confirm the causal impact of transformational leaders on their direct followers' collectivistic orientation.

Empowerment

Transformational leadership theory, in contrast to early charismatic leadership theories (e.g., House 1977), never assumed that leadership relationships are based on regression or weakness on the part of the employees. On the contrary, transformational leadership theory has […]-level outcomes. Sivasubramaniam et al. (1997, in Avolio 1999) found that transformational leadership directly predicted the performance of work groups while also predicting performance indirectly through levels of group potency. In a longitudinal laboratory experiment, Sosik et al. (1997) largely confirm that transformational leadership affected group potency and group effectiveness more strongly than transactional leadership. Shamir et al. (1998) confirm that the leader's emphasis on the collective identity of the unit was associated with the strength of the unit culture, as expressed in the existence of unique symbols and artifacts, and with unit viability as reflected by its discipline, morale, and cooperation. More research is needed on the effects of transformational leaders on follower group development.
4.
IMPLICATIONS FOR STRATEGIC HUMAN RESOURCE MANAGEMENT
This section will introduce the central implications of implementing the new genre of leadership theories in organizations. Our goal is to focus on the implications of leadership research within the framework of the major functions of human resource management (HRM). As with other aspects of organizational behavior, leadership and motivation research has contributed to the development of sophisticated HRM tools and techniques that have been shown to improve organizational effectiveness (e.g., Jackson and Schuler 1995). The major functions of HRM, discussed below, are recruiting, performance appraisal, training, and compensation. For each function, we provide a background based on a strategic approach to HRM. We then describe the utility of the above research findings, regarding leadership and motivation, to strategic HRM and provide examples.
4.1.
A Strategic Approach to Human Resource Management
As traditional sources of competitive advantage, such as technology and economies of scale, provide less competitive leverage now than in the past, core capabilities and competence, derived from how people are managed, have become more important. A strategic perspective on human resource management is critical for creating and sustaining human resource-based competitive advantage (Sivasubramaniam and Ratnam 1998). A strategic approach to HRM implies a focus on "planned HRM deployments and activities intended to enable the firm to achieve its goals" (Wright and McMahan 1992, p. 298). Strategic HRM involves designing and implementing a set of internally consistent policies and practices that ensure that a firm's human capital (employees' collective knowledge, skills, and abilities) contributes to the achievement of its business objectives. Internal consistency is achieved by creating vertical and horizontal fit. Vertical fit involves the alignment of HRM practices and strategic management, and horizontal fit refers to the alignment of the different HRM functions (Wright and Snell 1998). Strategic HRM is concerned with recognizing the impact of the outside environment, competition, and labor markets; emphasizes choice and decision making; and has a long-term focus integrated into the overall corporate strategy. For instance, companies like General Motors that emphasize a retrenchment or cost-reduction strategy will have an HRM strategy of wage reduction and job redesign, while the growth corporate strategy of Intel requires that it aggressively recruit new employees and raise wages (Anthony et al. 1998). Similarly, cost-reduction policies may require that leaders emphasize equity and contingent reward, while a growth strategy should enhance transformational leadership. Cost-reduction and growth strategies, as well as other strategies, have clear implications for how companies should recruit employees, evaluate their performance, and design training and compensation systems. Considerable evidence has shown the performance implications of HRM. These studies indicate that high-performing systems are characterized by careful selection and a focus on a broad range of skills; they employ multisource appraisals focused on employee development, use teams as a fundamental building block of the organization's work systems, provide facilitative supervision, include human resource considerations in strategic decision making, and link rewards to performance in ways that focus employees on the long-term goals of the firm. These findings have been fairly consistent across industries and cultures (Sivasubramaniam and Ratnam 1998).
4.2.
Recruiting: From Hiring to Socialization
Recruiting, the process by which organizations locate and attract individuals to fill job vacancies, should be matched with company strategy and values as well as with external concerns, such as the state of the labor market. Recruiting includes both pre- and post-hiring goals. Among the pre-hiring goals is attracting large pools of well-qualified applicants. Post-hiring goals include socializing employees to be high performers and creating a climate that motivates them to stay with the organization (Fisher et al. 1996). Another major decision in the recruiting philosophy of a firm is whether to rely on hiring outside candidates or to promote from within the organization. These decisions and goals have implications for how organizations can benefit from recent research on leadership and motivation. Organizations that recruit from their internal labor market and accept candidates for long-term employment may seek candidates with leadership skills who can be potential managers. Such organizations may use mentoring programs as a way of enhancing employee socialization. Other organizations may benefit from using leadership skills as part of the selection procedures they employ, though the use of leadership characteristics as a criterion for selection is highly controversial (e.g., Lowery 1995).
4.2.1.
Implications for Selection
While good leadership qualities could be a criterion for selecting employees at all levels, if used at all, leadership is more commonly a criterion in managerial selection. Managerial selection is a particularly difficult task because there are many different ways to be a successful manager. Managing requires a wide range of skills, one of which could be leadership. Leadership effectiveness has been found to relate to common selection criteria, such as practical intelligence, reasoning, creativity, and achievement motivation (Mumford et al. 1993). However, many HRM professionals are reluctant to use leadership as a criterion for selection, citing reasons such as the lack of a definition, the complexity of the construct, and the difficulty of validating selection tools (Lowery 1995). These challenges pose a threat to the validity of the selection process. An invalid selection system may be illegal, as well as strategically harmful for the organization. Consequently, our discussion of leadership as a selection tool is mainly exploratory. The most common use of leadership criteria in selection is as part of an assessment center (e.g., Russell 1990). Assessment centers are work samples of the job for which candidates are selected (mostly candidates for managerial positions). They can last from one day to one week and have multiple means of assessment, multiple candidates, and multiple assessors. Assessment centers are […]
[…] worker serves as an adviser to the new employee, may help in the socialization process. Mentoring programs are especially necessary in environments of high turnover, such as high technology (Messmer 1998). Mentoring programs have both short- and long-term benefits. In addition to developing the skills, knowledge, and leadership abilities of new and seasoned professionals, these programs strengthen employee commitment (Messmer 1998). Mentoring programs have been successful in developing new talent, as well as in recruiting junior-level staff and bringing them up to speed on policies and procedures. Mentors provide more than company orientation; they become trusted advisors on issues ranging from career development to corporate culture. At a time when many executives are managing information instead of people, mentoring may offer a welcome opportunity to maintain the kinds of critical interpersonal skills that can further a manager's career. Although the mentors in most mentoring programs are not the supervisors of the mentees, the characteristics of good mentors are very similar to the characteristics of good leaders, especially transformational leaders (Bass 1985). Bass (1985) argues that effective transformational leaders emphasize individualized consideration, which involves the leader's service as a counselor for their proteges. Such leaders have more knowledge and experience and the required status to develop their proteges. Johnson (1980) found that two-thirds of 122 recently promoted employees had mentors. Bass (1985) claims that since mentors are seen as authorities in the system, their reassurance helps mentees be more ready, willing, and able to cooperate in joint efforts. Moreover, mentoring is an efficient method for leadership development because followers are more likely to model their leadership style on that of their mentor, as compared with a manager who is not seen as a mentor (Weiss 1978). To be effective, mentors need to be successful, competent, and considerate (Bass 1985). Companies like Bank of America and PricewaterhouseCoopers employ mentoring mostly to increase retention. In competitive labor markets, such as the high-technology industry, increasing retention is a major HR goal. At PricewaterhouseCoopers, mentoring has led to an increase in female retention (Messmer 1998). Clearly, transformational leadership, and individualized consideration in particular, can provide a strategic advantage to organizations that employ mentoring programs. The development of mentoring programs should include a review of needs for mentoring, buy-in from senior management, and constant updating of all managers involved with the mentee. Such programs should also include a selection of mentors based on the above principles, a careful match of participants, and company events to demonstrate the importance of the mentoring program for the organization (Messmer 1998). Top managers, such as the long-time CEO of the Baton Rouge Refinery of Exxon, are seen as more effective if they develop their followers. In the case of Exxon, these followers were later promoted more than their mentor. We chose to demonstrate the contribution of effective leadership to recruiting using the examples of selection and mentoring. However, leadership and motivation are important parts of other recruiting processes, such as interviewer skills and the assessment of candidates' motivation. We chose those aspects that we feel benefit especially from the new genre of leadership theories. We continue to use this approach with the HRM function of performance management.
4.3.
From Performance Appraisal to Performance Management
Performance appraisal is the process by which an employee's contribution to the organization during a specific period of time is assessed. Performance feedback provides employees with information on how well they have performed. If used inappropriately, performance appraisal can be disastrous and lead to a decrease in employee motivation and performance. Performance appraisal has a significant strategic role in HRM. It provides strategic advantage by allowing the organization to monitor both its individual employees and the extent to which organizational goals are met. From a strategic HRM perspective, performance appraisal involves more than assessment or measurement. Rather, it is a method for performance management, including defining performance, measuring it, and providing feedback and coaching to employees (Fisher et al. 1996). In sum, performance management has three major strategic functions. First, it signals to employees which behaviors are consistent with organizational strategy. Second, if used appropriately, it is a useful method for organizational assessment. Finally, it is a feedback mechanism that promotes employee development. The movement from a focus on appraisal to performance management allows for better application of the new genre of leadership and motivation theory. While traditional performance-appraisal systems emphasized uniform, control-oriented, narrow-focus appraisal procedures that involved supervisory input alone, strategic performance management emphasizes customization, multiple purposes, and multiple raters, focusing on development rather than past performance. Similarly, whereas traditional approaches to leadership and motivation (e.g., House 1971) emphasized control and exchange based on extrinsic motivators, more recent approaches emphasize development and influence through intrinsic motivators (e.g., Bass 1985; Shamir et al. 1993). Indeed, even during the 1980s and the beginning of the 1990s, employees and managers viewed performance appraisal negatively. Longnecker and Gioia (1996, in Vicere and Fulmer 1996) found that of 400 managers, only one quarter were satisfied with appraisal systems, and the higher a manager rose in the organization, the less likely he or she was to receive quality feedback. Nevertheless, they indicated that when used as a strategic tool for executives, systematic feedback was crucial. Many of them were also willing to be evaluated personally. According to Vicere and Fulmer (1996), performance-management processes are highly linked with leadership development. We chose to focus on the contributions of leadership research to two aspects of performance management: feedback from multiple raters (360-degree feedback) and the alignment of strategic goals.
4.3.1.
Implications for Feedback
As we mentioned above, strategic performance-management systems involve more than the traditional supervisory evaluation. Current appraisal systems often include peer ratings, follower ratings, and self-ratings. Each source seems to contribute differently, and their combination (a 360-degree approach) is used mainly for developmental reasons (Waldman et al. 1998). About 12% of American organizations are using full 360-degree programs for reasons such as management development, employee involvement, communication, and culture change (Waldman et al. 1998). Advocates of 360-degree systems believe that they contribute to leadership development by allowing managers to compare their self-perceptions with those of their employees (Yammarino and Atwater 1997).
With the rapid developments in technology and the changing nature of jobs, the provision of training and development has become a major strategic objective for the organization. Organizations provide training for many reasons, from orientation to the organization and improving performance on the job to preparation for future promotion. Like other HRM functions, training should be aligned with the strategic goals of the organization. Organizations that rely on a highly committed, stable workforce will have to invest more in individuals than will organizations that employ unskilled temporary employees (Fisher et al. 1996). When a company changes strategy, as Xerox did when it became "the Document Company," it begins intensive training to equip employees with the new skills. Xerox spent $7 million on developing a training center. In a world of networking and alliances between organizations, Xerox needs this center not only to train its employees but also to train suppliers, customers, and other constituencies. Xerox, like other employers, has a long-term perspective on its workforce and is using training to increase employee retention. More and more professionals seek jobs that will provide them with the opportunity for professional development. Many engineers who work in high-technology industries have multiple job opportunities and are constantly in the job market (Messmer 1998). Consequently, organizations that wish to decrease turnover move from emphasizing short-term training for specific purposes to long-term development. We believe that the shift to development calls for more leadership of the new genre type. While the short-term emphasis on training, where employees are trained to learn a new skill that they need for their current task or job, required mainly task-oriented transactional leadership, the move to long-term development entails transformational leadership. The need for more transformational or charismatic leaders in organizations is a consequence of several trends that reflect the shift from training to development. First, in addition to sending employees to off-the-job training workshops, more companies emphasize individual coaching. Coaching, like other types of on-the-job training, ensures maximal transfer between what was learned in the training and the actual job (Ford 1997). On-the-job training also allows for maximum trainee motivation because trainees see this type of training as highly relevant. Second, as managerial jobs become more complex, there is an increased need for managerial training. Most typologies of managerial tasks emphasize that managers need to have leadership skills. Specifically, skills such as providing feedback, communication, motivation, aligning individual goals with organizational goals, and maintaining good relationships with colleagues and followers should be the focus of development (Kraut et al. 1989). Finally, as discussed above regarding socializing employees, coaching and mentoring programs have become more popular, sometimes as a spontaneous mentor-protege relationship and sometimes as an organized mentoring program (Messmer 1998). These shifts in training, together with the emphasis on long-term development, highlight the importance of developing transformational leadership in the organization. Transformational leaders develop significant relationships with their followers and thus have a chance to be seen as coaches and to provide practical on-the-job training. These leaders are also intellectually stimulating and can constantly challenge employees and develop them. Personal development involves individualized consideration, another major style of transformational leaders. More specifically, individualized consideration consists of mentoring (Bass 1985). Furthermore, both intellectual stimulation and individualized consideration can be cultivated and nurtured not only at the individual level but also at the team and organizational levels (Avolio and Bass 1995). Organizations can create cultures that
encourage coaching and development while providing consideration and recognition of individual needs. Paul Galvin, a CEO of Motorola, created a culture, based on his leadership style, where risk taking is advocated and seen as the most secure approach (Avolio 1999). Employees are encouraged to be creative and develop new products. Intellectually stimulating leaders also empower their followers (Bass 1985). Empowerment is an effective method of training because employees learn more tasks and are responsible for problem solving. It is especially important in innovative environments, where employees have to learn to take risks. Organizations that employ empowering techniques try to applaud both successful and unsuccessful risk-taking behavior. For example, Lockheed Martin takes a team approach to empowerment. Employees who have knowledge regarding an aspect of a project are encouraged to speak up (Wolff 1997, in Anthony et al. 1998). In this way, employees get to experience managerial roles. Several studies using the transformational leadership paradigm have demonstrated the relationships between transformational leadership and empowerment. Masi (1994, in Avolio 1999) reported positive correlations between transformational leadership and empowering organizational culture norms. Moreover, transformational leadership at the top may filter down to lower levels of the organization. Bass and Avolio (1994) suggested the "falling dominoes effect," whereby leaders who report to transformational leaders tend to be transformational as well. Bryce (1988, in Avolio 1999) found support for this effect in a study of Japanese senior company leaders. Finally, Spreitzer and Jansaz (1998, in Avolio 1999) found that empowered managers are seen as more intellectually stimulating and charismatic than nonempowered managers. These findings demonstrate that by being transformational, executives can create a culture that facilitates professional development for their employees. Even when their managers are empowered by other managers, followers seem to attribute charisma and intellectual stimulation to the empowered leader. In addition to the contribution of transformational leaders to employee and managerial development, organizations can use transformational leadership as a model for leadership training. There seems to be agreement that leadership is to some extent innate but to some extent trainable (e.g., Avolio 1999). Among the models of the new genre of leadership, training using the full range leadership model has been the most extensive. Crookall (1989) compared the effectiveness of training Canadian prison shop supervisors in transformational leadership and in situational leadership, using a control group that did not receive any training. Crookall concluded that the transformational leadership workshop had a significant positive impact on two of the four performance measures subjectively evaluated by superiors, and the situational leadership training had a significant positive influence on one of the four subjective performance measures. There was no change in the perceived performance of the untrained group. In addition, a significant positive impact was found in both training groups on turnover, work habits, and managers' evaluations regarding the personal growth of prisoners. Significant improvements regarding respect for superiors, professional skills, and good citizenship, as evaluated by the manager, were found only for the transformational leadership workshop. A more recent study (Barling et al. 1996) demonstrated that bank branch managers who received transformational leadership training had a more positive impact on follower commitment and unit financial performance than did those who did not receive training. Finally, Dvir et al. (in press) […]
[…] employees who trust them and are committed to working for more than short-term goals. Transactional leaders, who rely on contingent reward and management by exception, teach employees that they should work only when rewarded or when they are closely monitored. Such leaders signal to their followers that they do not trust them. Consequently, these employees will be less motivated than employees who feel that their supervisors do trust them. Indeed, companies like Southwest Airlines recognize these issues and highlight managing through trust and respect rather than exceptional compensation packages (Bass 1985; Pfeffer 1998). Moreover, transformational leaders provide a personal example to employees. Executives who are paid over 80 times more than their employees do not signal to employees that pay is motivating. For example, Whole Foods Market pays nobody in the company more than 8 times the average company salary. Similarly, when the CEO of Southwest Airlines asked pilots to freeze their salaries for another year, he froze his own salary for four years. On the other hand, paying executives bonuses while employees are being laid off, as was done by GM in the 1980s, sends the wrong message to employees (Pfeffer 1998). Motivating employees through compensation has to be aligned with other organizational practices, including the example set by top managers. By providing an example, executives send a message that they share a common fate and that the organization emphasizes a culture of cooperation and teamwork rather than competition.
Compensation systems often highlight individual incentives such as commissions, bonuses, and merit pay (Fisher et al. 1996). Although these methods can increase employee satisfaction, they bear little relationship to organizational outcomes, which can be measured only at the unit or organizational level. Indeed, when organizations want to encourage teamwork, rewarding individuals may be extremely harmful: the organization ends up with a compensation system that undermines teamwork and focuses attention on short-term goals. Such compensation systems send a mixed message to employees regarding organizational goals. As the director of corporate industrial relations at Xerox put it, if managers seeking to improve performance or solve organizational problems use compensation as the only lever, they will get two results: nothing will happen, and they will spend a lot of money. That's because people want more out of their jobs than just money (Pfeffer 1998). Transformational leaders appeal to the collective needs of employees and help align their individual goals with organizational goals (Shamir et al. 1993). Organizations need to emphasize collective rewards, such as profit-sharing and gain-sharing plans, in which employees share organizational gains. These methods may motivate teamwork to a certain extent, but effective teamwork will still depend on leaders and the culture or environment that they create. Transformational leaders may be able to use compensation and its components to signal what is important and thus shape company culture. In summary, the traditional notion that compensation systems are effective motivators is based on a transactional, contractual approach to employee-management relations. More recent approaches to compensation emphasize rewarding at least at the team level, if not the organizational level, and posit that monetary rewards should accompany intrinsic rewards, such as trust and recognition. These recent approaches also emphasize decentralizing pay decisions (Gomez-Mejia et al. 1998). We argue here that these new compensation procedures can be better applied by transformational or charismatic leaders. Applying the full range leadership model, we argue that such leaders may also exhibit contingent reward and management by exception to support the basic compensation contract. When aligned with transformational leadership, however, the above compensation procedures can lead to maximum motivation.
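The gain-sharing idea mentioned above can be made concrete with a small illustrative calculation. The sketch below uses a Scanlon-style baseline-ratio formula; the function name, the figures, and the 50/50 split are hypothetical illustrations chosen for this example, not details taken from the chapter.

```python
def gainsharing_bonus_pool(sales_value, actual_labor_cost, baseline_ratio,
                           employee_share=0.5):
    """Compute the employee bonus pool under a Scanlon-style gain-sharing plan.

    baseline_ratio: historical ratio of labor cost to sales value of production.
    Savings occur when actual labor cost falls below the allowed cost
    (baseline_ratio * sales_value); a fixed share of the savings goes to
    the employee bonus pool, so employees share organizational gains.
    """
    allowed_cost = baseline_ratio * sales_value
    savings = max(0.0, allowed_cost - actual_labor_cost)
    return employee_share * savings

# Hypothetical month: $1,000,000 production value, 25% historical labor ratio,
# $230,000 actual labor cost -> $20,000 savings, half to the employee pool.
pool = gainsharing_bonus_pool(1_000_000, 230_000, 0.25)
print(pool)  # 10000.0
```

Because the payout depends on a collective result rather than individual output, a plan of this shape rewards at the unit level, which is the alignment the text argues for.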
4.6. Involvement-Transformational vs. Inducement-Transactional HRM Systems: An Integrative Framework
One common characteristic of most extant research is the use of a "parts" view of HRM: researchers typically identify dimensions of HRM in order to determine which dimensions are significantly related to performance. New approaches to HRM argue that to be considered strategic, a set of practices should form a system, interrelated, interdependent, and mutually reinforcing. HRM systems are best viewed as configurations, systems of mutually reinforcing and interdependent practices that provide the overarching framework for strategic actions. The configurational approach to HRM has gained support in recent empirical studies (reviewed in Sivasubramaniam and Ratnam 1998). Dyer and Holder (1988) proposed three clusters of human resource strategies, labeled inducement, investment, and involvement. The inducement human resource strategy is based on the concept of motivation through rewards and punishment. Companies following this philosophy tend to emphasize pay and benefits, perquisites, or assignment to nontraditional work environments as inducements to perform and remain with the firm. In addition, such firms demand high levels of reliable role behaviors, define jobs narrowly, hire on the basis of meeting minimum qualifications, and provide directive supervision. The inducement strategy is most closely linked to transactional leadership (Kroeck 1994). The investment human resource strategy is built around extensive training and development. Companies adhering to this strategy place a premium on the long-range education of employees and expect employees to exercise a fair amount of initiative and creativity in carrying out their tasks. Dominant corporate values are personal growth, respect, equity, justice, and security, not autonomy and empowerment. Because of the developmental aspects of this strategy, some (e.g., Kroeck 1994) link it to the individualized consideration style in the full range leadership model. However, because this strategy often takes on a paternalistic approach to management, and because of its lack of emphasis on autonomy and empowerment, core elements in the relationships between transformational leaders and their followers, we will refer to this strategy as paternalistic. The involvement human resource strategy is built around creating a very high level of employee commitment. Employees are motivated by the stimulus of autonomy, responsibility, and variety and by being able to see their contribution to the final product or service. The employees at these companies are expected to exercise considerable initiative and creativity as well as display a high degree of flexibility to adapt to rapid change. Team-based work systems are the building blocks of the organization, and supervision is facilitative and minimal. The involvement strategy calls for all four styles of transformational leadership (Kroeck 1994). In general, firms following an involvement strategy have been found to outperform firms pursuing an inducement strategy. For example, Sivasubramaniam et al. (1997, in Avolio 1999) conducted a study among 50 Indian firms and found higher correlations between involvement-transformational human [...] transformational leaders augment these rewards with intrinsic motivators that lead to building a highly committed workforce. As market conditions get tougher and competitors attempt to duplicate every single source of advantage, what may remain sustainable and inimitable by competition is the human resource-based advantage (Sivasubramaniam and Ratnam 1998). Therefore, an integrated, interdependent, and mutually reinforcing HRM system that represents the new leadership and motivational paradigms may contribute to organizational effectiveness.
REFERENCES
Anthony, W. P., Perrewe, P. L., and Kacmar, K. M. (1998), Human Resource Management: A Strategic Approach, Dryden, Fort Worth, TX.
Avolio, B. J. (1994), "The Natural: Some Antecedents to Transformational Leadership," International Journal of Public Administration, Vol. 17, pp. 1559-1580.
Avolio, B. J. (1999), Full Leadership Development: Building the Vital Forces in Organizations, Sage, Thousand Oaks, CA.
Avolio, B. J., and Bass, B. M. (1988), "Transformational Leadership, Charisma and Beyond," in Emerging Leadership Vistas, J. G. Hunt, H. R. Baliga, H. P. Dachler, and C. A. Schriesheim, Eds., Heath, Lexington, MA.
Avolio, B. J., and Bass, B. M. (1995), "Individual Consideration Viewed at Multiple Levels of Analysis: A Multi-Level Framework for Examining the Diffusion of Transformational Leadership," Leadership Quarterly, Vol. 6, pp. 199-218.
Avolio, B. J., and Gibbons, T. C. (1988), "Developing Transformational Leaders: A Life Span Approach," in Charismatic Leadership: The Elusive Factor in Organizational Effectiveness, J. A. Conger and R. N. Kanungo, Eds., Jossey-Bass, San Francisco.
Avolio, B. J., Waldman, D. A., and Einstein, W. O. (1988), "Transformational Leadership in a Management Simulation: Impacting the Bottom Line," Group and Organization Studies, Vol. 13, pp. 59-80.
Barling, J., Weber, T., and Kelloway, K. E. (1996), "Effects of Transformational Leadership Training on Attitudinal and Financial Outcomes: A Field Experiment," Journal of Applied Psychology, Vol. 81, pp. 827-832.
Bass, B. M. (1985), Leadership and Performance Beyond Expectations, Free Press, New York.
Bass, B. M. (1990), Bass and Stogdill's Handbook of Leadership: Theory, Research and Management Applications, Free Press, New York.
Bass, B. M. (1996), A New Paradigm of Leadership: An Inquiry into Transformational Leadership, Army Institute for the Behavioral and Social Sciences, Alexandria, VA.
Bass, B. M., and Avolio, B. J. (1990), "The Implications of Transactional and Transformational Leadership for Individual, Team, and Organizational Development," Research in Organizational Change and Development, Vol. 4, pp. 231-272.
Bass, B. M., and Avolio, B. J. (1993), "Transformational Leadership: A Response to Critiques," in Leadership Theory and Research: Perspectives and Directions, M. M. Chemers and R. Ayman, Eds., Academic Press, San Diego.
Bass, B. M., and Avolio, B. J. (1994), "Introduction," in Improving Organizational Effectiveness Through Transformational Leadership, B. M. Bass and B. J. Avolio, Eds., Sage, Thousand Oaks, CA.
Bass, B. M., and Avolio, B. J. (1996), Manual for the Multifactor Leadership Questionnaire, Mind Garden, Palo Alto, CA.
Berson, Y., and Avolio, B. J. (1999), "The Utility of Triangulating Multiple Methods for the Examination of the Level(s) of Analysis of Leadership Constructs," paper presented at the Annual Meeting of the Society for Industrial and Organizational Psychology (Atlanta, April).
Berson, Y., and Avolio, B. J. (2000), "An Exploration of Critical Links between Transformational and Strategic Leadership," paper presented at the Annual Meeting of the Society for Industrial and Organizational Psychology (New Orleans, April).
Boal, K. B., and Bryson, J. M. (1988), "Charismatic Leadership: A Phenomenological and Structural Approach," in Emerging Leadership Vistas, J. G. Hunt, B. R. Baliga, H. P. Dachler, and C. A. Schriesheim, Eds., Lexington Books, Lexington, MA.
Breaugh, J. A. (1983), "Realistic Job Previews: A Critical Appraisal and Future Research Directions," Academy of Management Review, Vol. 8, pp. 612-620.
Burns, J. M. (1978), Leadership, Harper and Row, New York.
Bycio, P., Hackett, R. D., and Allen, J. S. (1995), "Further Assessments of Bass's (1985) Conceptualization of Transactional and Transformational Leadership," Journal of Applied Psychology, Vol. 80, pp. 468-478.
Coleman, E. P., Patterson, E., Fuller, B., Hester, K., and Stringer, D. Y. (1995), "A Meta-Analytic Examination of Leadership Style and Selected Follower Compliance Outcomes," University of Alabama.
Conger, J. A., and Kanungo, R. N. (1987), "Toward a Behavioral Theory of Charismatic Leadership in Organizational Settings," Academy of Management Review, Vol. 12, pp. 637-647.
Klein, K. J., and House, R. J. (1995), "On Fire: Charismatic Leadership and Levels of Analysis," Leadership Quarterly, Vol. 6, pp. 183-198.
Kohlberg, L. (1973), "Stages and Aging in Moral Development: Some Speculations," Gerontologist, Vol. 13, No. 4, pp. 497-502.
Kouzes, J. M. (1999), "Getting to the Heart of Leadership," Journal for Quality and Participation, Vol. 22, p. 64.
Kraut, A. I., Pedigo, P. R., McKenna, D. D., and Dunnette, M. D. (1989), "The Roles of the Manager: What's Really Important in Different Management Jobs," Academy of Management Executive, Vol. 3, pp. 286-292.
Kroeck, G. K. (1994), "Corporate Reorganization and Transformations in Human Resource Management," in Improving Organizational Effectiveness Through Transformational Leadership, B. M. Bass and B. J. Avolio, Eds., Sage, Thousand Oaks, CA.
Kuhnert, K. W., and Lewis, P. (1987), "Transactional and Transformational Leadership: A Constructive/Developmental Analysis," Academy of Management Review, Vol. 12, pp. 648-657.
Landau, O., and Zakay, E. (1994), Ahead and with Them: The Stories of Exemplary Platoon Commanders in the Israel Defense Forces, IDF School for Leadership Development, Netania, Israel (in Hebrew).
Lowe, K. B., Kroeck, K. G., and Sivasubramaniam, N. (1996), "Effectiveness Correlates of Transformational and Transactional Leadership: A Meta-Analytic Review of the MLQ Literature," Leadership Quarterly, Vol. 7, pp. 385-425.
Lowery, P. E. (1995), "The Assessment Center Process: Assessing Leadership in the Public Sector," Public Personnel Management, Vol. 24, pp. 443-450.
Maslow, A. H. (1954), Motivation and Personality, Harper, New York.
Megalino, B. M., Ravlin, E. C., and Adkins, C. L. (1989), "A Work Values Approach to Corporate Culture: A Field Test of the Value Congruence and Its Relationship to Individual Outcomes," Journal of Applied Psychology, Vol. 74, pp. 424-432.
Messmer, M. (1998), "Mentoring: Building Your Company's Intellectual Capital," HR Focus, Vol. 75, pp. 11-12.
Mumford, M. D., O'Connor, J., Clifton, T. C., Connelly, M. S., and Zaccaro, S. J. (1993), "Background Data Constructs as Predictors of Leadership Behavior," Human Performance, Vol. 6, pp. 151-195.
Pfeffer, J. (1998), "Six Dangerous Myths about Pay," Harvard Business Review, Vol. 76, pp. 108-119.
Podsakoff, P. M., Mackenzie, S. B., Moorman, R. H., and Fetter, R. (1990), "Transformational Leader Behaviors and Their Effects on Followers' Trust in Leader, Satisfaction, and Organizational Citizenship Behaviors," Leadership Quarterly, Vol. 1, pp. 107-142.
Russell, C. J. (1987), "Person Characteristics versus Role Congruency Explanations for Assessment Center Ratings," Academy of Management Journal, Vol. 30, pp. 817-826.
Russell, C. J. (1990), "Selecting Top Corporate Leaders: An Example of Biographical Information," Journal of Management, Vol. 16, pp. 73-86.
Seltzer, J., and Bass, B. M. (1990), "Transformational Leadership: Beyond Initiation and Consideration," Journal of Management, Vol. 16, No. 4, pp. 693-703.
Shamir, B. (1990), "Calculations, Values, and Identities: The Sources of Collectivistic Work Motivation," Human Relations, Vol. 43, No. 4, pp. 313-332.
Shamir, B. (1991), "The Charismatic Relationship: Alternative Explanations and Predictions," Leadership Quarterly, Vol. 2, pp. 81-104.
Shamir, B., House, R. J., and Arthur, M. B. (1993), "The Motivational Effects of Charismatic Leadership: A Self-Concept Based Theory," Organization Science, Vol. 4, No. 2, pp. 1-17.
Shamir, B., Zakai, E., Breinin, E., and Popper, M. (1998), "Correlates of Charismatic Leader Behavior in Military Units: Individual Attitudes, Unit Characteristics and Performance Evaluations," Academy of Management Journal, Vol. 41, pp. 387-409.
Sivasubramaniam, N., and Ratnam, C. S. V. (1998), "Human Resource Management and Firm Performance: The Indian Experience," paper presented at the Annual Meeting of the Academy of Management (San Diego, August).
Sosik, J. J., Avolio, B. J., and Kahai, S. S. (1997), "Effects of Leadership Style and Anonymity on Group Potency and Effectiveness in a Group Decision Support System Environment," Journal of Applied Psychology, Vol. 82, pp. 89-103.
Vicere, A. A., and Fulmer, R. M. (1996), Leadership by Design, Harvard Business School Press, Boston.
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
As indicated by many of these outcomes, job-design decisions can influence other human resource systems. For example, training programs may need to be developed, revised, or eliminated. Hiring standards may need to be developed or changed. Compensation levels may need to be increased or decreased. Performance appraisal can be affected by changed responsibilities. Promotion, transfer, and other employee-movement systems may also be influenced. Thus, aspects of many human resource programs may be dictated by initial job-design decisions or may need to be reconsidered following job redesign. In fact, human resource outcomes may constitute the goals of the design or redesign project. Research supporting these outcomes is referenced below in the description of the approaches. Unfortunately, many people mistakenly view the design of jobs as technologically determined, fixed, and inalterable. However, job designs are actually social inventions that reflect the values of the era in which they were constructed. These values include the economic goal of minimizing immediate costs (Davis et al. 1955; Taylor 1979) and the theories of human motivation that inspire work designers (Steers and Mowday 1977; Warr and Wall 1975). These values, and the designs they influence, are not immutable but subject to change and modification (Campion and Thayer 1985). The question is, what is the best way to design a job? In fact, there is no single best way; there are several major approaches to job design, each deriving from a different discipline and reflecting different theoretical orientations and values. This chapter describes the various approaches and their advantages and disadvantages. It highlights the trade-offs and compromises that must be made in choosing among these approaches, and it provides tools and procedures for developing and assessing jobs in all varieties of organizations.
1.2. Team Design
This chapter also compares the design of jobs for individuals working independently with the design of work for teams of individuals working interdependently. The major approaches to job design usually focus on designing jobs for individual workers. In recent years, designing work around groups or teams, rather than at the level of the individual worker, has become more popular (Goodman et al. 1987; Guzzo and Shea 1992; Hollenbeck et al. 1995; Tannenbaum et al. 1996). New manufacturing systems and advances in the understanding of team processes have encouraged the use of team approaches (Gallagher and Knight 1986; Majchrzak 1988). In designing jobs for teams, one assigns a task or set of tasks to a group of workers rather than to an individual. The team is then considered the primary unit of performance, and objectives and rewards focus on team, rather than individual, behavior. Team members may perform the same tasks simultaneously, or they may break tasks into subtasks to be performed by different team members. Subtasks may be assigned on the basis of expertise or interest, or team members may rotate from one subtask to another to provide variety and to cross-train members, increasing their breadth of skills and flexibility (Campion et al. 1994). The size, complexity, or skill requirements of some tasks seem naturally to fit team job design, but in many cases there may be a considerable degree of choice regarding whether to design work around individuals or teams. In such situations, job designers should consider the advantages and disadvantages of the different design approaches in light of the organization's goals, policies, technologies, and constraints (Campion et al. 1993, 1996). The relative advantages and disadvantages of designing work for individuals and for teams are discussed in this chapter, and advice for implementing and evaluating the different work-design approaches is presented.
2. JOB DESIGN
This chapter takes an interdisciplinary perspective on job design. That is, several approaches to job design are considered, regardless of the scientific disciplines from which they came. Interdisciplinary research on job design has shown that several different approaches to job design exist; that each is oriented toward a particular subset of outcomes for organizations and employees; that each has costs as well as benefits; and that trade-offs are required when designing jobs in most practical situations (Campion 1988, 1989; Campion and Berger 1990; Campion and McClelland 1991, 1993; Campion and Thayer 1985). The four major approaches to job design are reviewed below in terms of their historical development, design recommendations, and benefits and costs. Table 1 summarizes the approaches, while Table 2 provides detail on specific recommendations.
2.1. Mechanistic Job Design

2.1.1. Historical Background

The historical roots of job design can be traced back to the concept of the division of labor, which was very important to early thinking on the economies of manufacturing (Babbage 1835; Smith 1981). The division of labor led to job designs characterized by specialization and simplification. Jobs designed in this fashion had many advantages, including reduced learning time, reduced time for changing tasks or tools, increased proficiency from the repetition of the same tasks, and the development of special-purpose tools and equipment. A very influential person for this early perspective on job design was Frederick Taylor (Taylor 1911; Hammond 1971), who expounded the principles of scientific management, which encouraged the study of jobs to determine the "one best way" to perform each task. Movements of skilled workers were studied using a stopwatch and simple analysis. The best and quickest methods and tools were selected, and all workers were trained to perform the job in the same manner. Standards were developed, and incentive pay was tied to the standard performance levels. Gilbreth was also a key founder of this job-design approach (Gilbreth 1911). Through the use of time and motion study, he tried to eliminate wasted movements in work by the appropriate design of equipment and placement of tools and materials. Surveys of industrial job designers indicate that this mechanistic approach to job design, characterized by specialization, simplification, and time study, was the prevailing practice throughout the last century (Davis et al. 1955; Davis and Valfer 1965). These characteristics are also the primary focus of many modern-day writers on job design (Barnes 1980; Niebel 1988; Mundel 1985; also see Chapter 38). The discipline base is indicated as classic industrial engineering in Table 1. Modern-day industrial engineers may practice a variety of approaches to job design, however.

TABLE 2 Multimethod Job Design Questionnaire (MJDQ) (Specific Recommendations from Each Job Design Approach)

Instructions: Indicate the extent to which each statement is descriptive of the job, using the scale below. Circle answers to the right of each statement. Scores for each approach are calculated by averaging applicable items. Please use the following scale:
(5) Strongly agree
(4) Agree
(3) Neither agree nor disagree
(2) Disagree
(1) Strongly disagree
( ) Leave blank if do not know or not applicable

Mechanistic Approach
1. Job specialization: The job is highly specialized in terms of purpose, tasks, or activities. 1 2 3 4 5
2. Specialization of tools and procedures: The tools, procedures, materials, etc. used on this job are highly specialized in terms of purpose. 1 2 3 4 5
3. Task simplification: The tasks are simple and uncomplicated. 1 2 3 4 5
4. Single activities: The job requires you to do only one task or activity at a time. 1 2 3 4 5
5. Skill simplification: The job requires relatively little skill and training time. 1 2 3 4 5
6. Repetition: The job requires performing the same activity(s) repeatedly. 1 2 3 4 5
7. Spare time: There is very little spare time between activities on this job. 1 2 3 4 5
8. Automation: Many of the activities of this job are automated or assisted by automation. 1 2 3 4 5

Motivational Approach
9. Autonomy: The job allows freedom, independence, or discretion in work scheduling, sequence, methods, procedures, quality control, or other decision making. 1 2 3 4 5
10. Intrinsic job feedback: The work activities themselves provide direct and clear information as to the effectiveness (e.g., quality and quantity) of job performance. 1 2 3 4 5
11. Extrinsic job feedback: Other people in the organization, such as managers and coworkers, provide information as to the effectiveness (e.g., quality and quantity) of job performance. 1 2 3 4 5
12. Social interaction: The job provides for positive social interaction such as teamwork or coworker assistance. 1 2 3 4 5
13. Task/goal clarity: The job duties, requirements, and goals are clear and specific. 1 2 3 4 5
14. Task variety: The job has a variety of duties, tasks, and activities. 1 2 3 4 5
15. Task identity: The job requires completion of a whole and identifiable piece of work. It gives you a chance to do an entire piece of work from beginning to end. 1 2 3 4 5
16. Ability/skill-level requirements: The job requires a high level of knowledge, skills, and abilities. 1 2 3 4 5
17. Ability/skill variety: The job requires a variety of knowledge, skills, and abilities. 1 2 3 4 5
18. Task significance: The job is significant and important compared with other jobs in the organization. 1 2 3 4 5
19. Growth/learning: The job allows opportunities for learning and growth in competence and proficiency. 1 2 3 4 5
20. Promotion: There are opportunities for advancement to higher level jobs. 1 2 3 4 5
21. Achievement: The job provides for feelings of achievement and task accomplishment. 1 2 3 4 5
22. Participation: The job allows participation in work-related decision making. 1 2 3 4 5
23. Communication: The job has access to relevant communication channels and information flows. 1 2 3 4 5

Adapted from Campion et al. (1993). See reference and related research (Campion et al. 1996) for reliability and validity information.
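The scoring rule stated in Table 2's instructions (average the applicable items for each approach, skipping items left blank) can be sketched in a few lines. This is an illustrative sketch, not part of the MJDQ instrument itself; the function name and the dict-based response format are assumptions made for the example.

```python
def mjdq_score(responses, items):
    """Average the 1-5 ratings for one job-design approach's items.

    responses: dict mapping item number -> rating (1-5); items the
    respondent left blank ("don't know / not applicable") are omitted.
    items: the item numbers belonging to the approach being scored.
    Returns None when no applicable item was answered.
    """
    answered = [responses[i] for i in items if i in responses]
    return sum(answered) / len(answered) if answered else None

# Item groupings as laid out in Table 2 (mechanistic: 1-8, motivational: 9-23).
MECHANISTIC = range(1, 9)
MOTIVATIONAL = range(9, 24)

ratings = {1: 4, 2: 5, 3: 4, 5: 2, 9: 5, 10: 4}   # items 4, 6-8, 11-23 left blank
print(mjdq_score(ratings, MECHANISTIC))   # 3.75
print(mjdq_score(ratings, MOTIVATIONAL))  # 4.5
```

Averaging only the answered items keeps a few "not applicable" responses from dragging an approach's score toward zero, which matches the questionnaire's blank-item instruction.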
2.1.2. Design Recommendations
Table 2 provides a brief list of statements that describe the essential recommendations of the mechanistic approach. In essence, jobs should be studied to determine the most efficient work methods and techniques. The total work in an area (e.g., a department) should be broken down into highly specialized jobs that are assigned to different employees. The tasks should be simplified so that skill requirements are minimized. There should also be repetition in order to gain improvement from practice. Idle time should be minimized. Finally, activities should be automated or assisted by automation to the extent possible and economically feasible.
2.1.3. Advantages and Disadvantages
The goal of this approach is to maximize efficiency, in terms of both productivity and the utilization of human resources. Table 1 summarizes some of the human resource advantages and disadvantages that have been observed in previous research. Jobs designed according to the mechanistic approach are easier and less expensive to staff. Training times are reduced. Compensation requirements may be lower because skill and responsibility are reduced. And because mental demands are lower, errors may be less common. The mechanistic approach also has disadvantages. Too much of the mechanistic approach may result in jobs that are so simple and routine that employees experience less job satisfaction and motivation. Overly mechanistic work can also lead to health problems from the physical wear that highly repetitive and machine-paced work can cause.
2.2. Motivational Job Design

2.2.1. Historical Background
Encouraged by the human relations movement of the 1930s (Mayo 1933; Hoppock 1935), people began to point out the unintended drawbacks of overapplying the mechanistic design philosophy in terms of worker attitudes and health (Argyris 1964; Blauner 1964; Likert 1961). Overly specialized and simplified jobs were found to lead to dissatisfaction (Caplan et al. 1975; Karasek 1979; Kornhauser 1965; Shepard 1970) and to adverse physiological consequences for workers (Frankenhaeuser 1977; Johansson et al. 1978). Jobs on assembly lines and other machine-paced work were especially troublesome in this regard (Salvendy and Smith 1981; Walker and Guest 1952). These trends led to an increasing awareness of the psychological needs of employees. The first efforts to enhance the meaningfulness of jobs simply involved the exact opposite of specialization. It was recommended that tasks be added to jobs, either at the same level of responsibility (i.e., job enlargement) or at a higher level (i.e., job enrichment) (Ford 1969; Herzberg 1966). This job-design trend expanded into a pursuit of identifying and validating the characteristics of jobs that make them motivating and satisfying (Griffin 1982; Hackman and Lawler 1971; Hackman and Oldham 1980; Turner and Lawrence 1965). This approach to job design considers the psychological theories of work motivation (Mitchell 1976; Steers and Mowday 1977; Vroom 1964); thus, this "motivational" approach draws primarily from organizational psychology as a discipline base. A related trend, following later in time but somewhat comparable in content, is the sociotechnical approach (Cherns 1976; Emery and Trist 1960; Rousseau 1977). It focuses not only on the work but also on the technology itself. Interest is less in the job per se and more in roles and systems. The goal, and key concept, is the joint optimization of both the social and the technical systems. Although
[...] and limitations (Pearson 1971). It is more concerned with equipment than is the motivational approach and more concerned with people's abilities than is the mechanistic approach. The perceptual/motor approach received public attention through the Three Mile Island incident, after which it was concluded that the control-room operator's job in the nuclear power plant may have created too many demands on the operator in an emergency situation, thus creating a predisposition to errors of judgment (Campion and Thayer 1987). Federal government regulations issued since that time require that nuclear power plants consider the "human factors" in their design (NRC 1981). The primary emphasis suggested by the title of these regulations is on the perceptual and motor abilities of people. This approach is the most prolific with respect to recommendations for proper job design, with many handbooks available giving specific advice for all types of equipment, facilities, and layouts (Salvendy 1987; Sanders and McCormick 1987; Van Cott and Kinkade 1972; Woodson 1981).
2.3.2. Design Recommendations
Table 2 provides a list of statements describing some of the most important recommendations of the perceptual/motor approach. They refer either to equipment and environments on the one hand or to information-processing requirements on the other. Their thrust is to take into consideration the mental capabilities and limitations of people, such that the attention and concentration requirements of the job do not exceed the abilities of the least-capable potential worker. The focus is on the limits of the least-capable worker because this approach is concerned with the effectiveness of the total system, which is no better than its weakest link. Jobs should be designed to limit the amount of information workers must pay attention to, remember, and think about. Lighting levels should be appropriate, displays and controls should be logical and clear, workplaces should be well laid out and safe, and equipment should be easy to use (i.e., user friendly).
2.3.3. Advantages and Disadvantages
The goals of this approach are to enhance the reliability and safety of the system and to gain positive user reactions. Table 1 summarizes some of the human resource advantages and disadvantages found in previous research. Jobs designed according to the perceptual/motor approach have lower likelihoods of errors and accidents. Employees may be less stressed and mentally fatigued because of the reduced mental demands of the job. Like the mechanistic approach, this approach reduces the mental ability requirements of the job; thus, it may also enhance some human resource efficiencies, such as reduced training times and staffing difficulties. On the other hand, the perceptual/motor approach may incur costs if it is excessively applied. In particular, less satisfaction, less motivation, and more boredom may result because the jobs provide inadequate mental stimulation. This problem is exacerbated by the fact that the least-capable potential worker places the limits on the mental requirements of the job.
2.4. Biological Job Design

2.4.1. Historical Background
This approach and the perceptual/motor approach share a joint concern for proper person-machine fit. The primary difference is that this approach is more oriented toward the biological considerations of job design and stems from such disciplines as work physiology (Astrand and Rodahl 1977), biomechanics (i.e., the study of body movements) (Tichauer 1978), and anthropometry (i.e., the study of body sizes) (Hertzberg 1972). Like the perceptual/motor approach, the biological approach is concerned with the design of equipment and workplaces as well as the design of tasks (Grandjean 1980).
2.4.2. Design Recommendations
Table 2 provides a list of important recommendations from the biological approach. This approach tries to design jobs to reduce physical demands and especially to avoid exceeding people's physical capabilities and limitations. Jobs should not have excessive strength and lifting requirements, and again the capabilities of the least physically able potential worker set the maximum level. Chairs should be designed so that good postural support is provided. Excessive wrist movement should be reduced by redesigning tasks and equipment. Noise, temperature, and atmosphere should be controlled within reasonable limits. Proper work / rest schedules should be provided so that employees can recuperate from the physical demands of the job.
2.4.3.
Advantages and Disadvantages
The goals of this approach are to maintain the comfort and physical well-being of the employees. Table 1 summarizes some of the human resource advantages and disadvantages observed in the research. Jobs designed according to the biological approach require less physical effort, result in less fatigue, and create fewer injuries and aches and pains than jobs low on this approach. Occupational injuries and illnesses, such as lower back pain and carpal tunnel syndrome, are fewer on well-designed jobs. There may even be lower absenteeism and higher job satisfaction on jobs that are not physically arduous. A direct cost of this approach may be the expense of changes in equipment or job environments needed to implement the recommendations. At the extreme, there may be other costs. For example, it is possible to design jobs with so few physical demands that the workers become drowsy or lethargic, thus reducing their performance or encouraging them to leave their workplace. Clearly, extremes of physical activity and inactivity should be avoided, and there may even be an optimal level of physical activity for various employee groups (e.g., male, female, young, old).
2.5.
Conflicts and Trade-offs among Approaches
Although one should strive to construct jobs that are well designed on all the approaches, it is clear that there are some direct conflicts in design philosophies. As Table 1 illustrates, the benefits of some approaches are the costs of others. No one approach can satisfy all outcomes. As noted above, the greatest potential conflicts are between the motivational approach on the one hand and the mechanistic and perceptual / motor approaches on the other. They produce nearly opposite outcomes. The mechanistic and perceptual / motor approaches recommend designing jobs that are simple, easy to learn,
oaches to organizing work have become very popular in the last two decades in the United States (Goodman et al. 1987; Guzzo and Shea 1992; Hollenbeck et al. 1995; Tannenbaum et al. 1996). Theoretical treatments of team effectiveness have predominantly used input-process-output (IPO) models, as popularized by such authors as McGrath (1964), as frameworks to discuss team design and effectiveness (Guzzo and Shea 1992). Many variations on the IPO model have been presented in the literature over the years (e.g., Denison et al. 1996; Gladstein 1984; Sundstrom et al. 1990). Social psychologists have studied groups and teams for several decades, mostly in laboratory settings. They have identified problems such as social loafing or free riding, groupthink, decision-making biases, and process losses and inhibitions that operate in groups (Diehl and Stroebe 1987; Harkins 1987; Janis 1972; McGrath 1984; Paulus 1998; Steiner 1972; Zajonc 1965). Some empirical field studies have found that the use of teams does not necessarily result in positive outcomes (e.g., Katz et al. 1987; Tannenbaum et al. 1996), while others have shown positive effects from the implementation of teams (e.g., Banker et al. 1996; Campion et al. 1993, 1996; Cordery et al. 1991). Given that so many organizations are transitioning to team-based work design, it is imperative that the design and implementation of teams be based on the increasing knowledge from team-design research.
878
PERFORMANCE IMPROVEMENT MANAGEMENT
3.2.
Design Recommendations
Design recommendations are organized around the IPO model of work team effectiveness shown in Figure 1. The variables in the model are briefly described below. A more detailed explanation of each variable is contained in Figure 1.
3.2.1.
Input Factors
Inputs are the design ingredients that predispose team effectiveness. There are at least four basic types of inputs needed to ensure that teams are optimally designed:
1. Design the jobs to be motivating and satisfying. The characteristics of jobs that make them motivating in a team setting are basically the same as those that make them motivating in an individual setting. Some of the key characteristics applicable in teams are listed below and described in Figure 1 in more detail. They can be used to evaluate or design the jobs in your client's organization.
(a) Allow the team adequate self-management.
(b) Encourage participation among all members.
(c) Encourage task variety; all members should perform varied team tasks.
(d) Ensure that tasks are viewed by members as important.
(e) Allow the team to perform a whole piece of work.
(f) Make sure the team has a clear goal or mission.
2. Make the jobs within the team interdependent. Teams are often formed by combining interdependent jobs. In other cases, the jobs can be made to be interdependent to make them appropriate for teams. For example, reorganizing a company around its business processes normally requires making the work interdependent. Listed below (and in Figure 1) are several ways jobs can be interdependent.
INPUTS
Job Design: Self-management; Participation; Task variety; Task significance; Task wholeness; Goal clarity
Interdependence: Task interdependence; Goal interdependence; Interdependent feedback and rewards
Composition: Skilled members; Heterogeneity; Flexibility; Relative size; Preference for team work
Context: Training; Information availability; Boundary management; Managerial
PROCESSES
Energy: Effort; Innovativeness; Problem solving; Skill usage; Potency
Direction: Accountability; Customer orientation; Learning orientation; Communication
Affect: Cooperation; Social support; Workload sharing; Conflict resolution; Good interpersonal relationships; Trust
OUTPUTS
Productivity: Productivity; Efficiency; Quality; Reduced costs
Satisfaction: Job satisfaction; Satisfaction with team; Commitment; Motivation; Customer satisfaction
Figure 1 Model of Work Team Effectiveness.
3.2.3.
Output Factors
Outputs are the ultimate criteria of team effectiveness. The most important outcome is whether the teams enhance the business process. There are two basic categories of outputs. Both are central to the definition of team effectiveness as well as to effective business processes.
1. Effective teams are productive and efficient, and they may also improve quality and reduce costs, as explained in the first part of this chapter.
2. Effective teams are satisfying. This includes not only job satisfaction but also motivated and committed employees. Satisfaction also applies to the customers of the team's products or services. These outcomes were also explained in more detail in the first part of the chapter.
3.3.
Advantages and Disadvantages
Work teams can offer several advantages over the use of individuals working separately. Table 3 lists some of these advantages. To begin with, teams can be designed so that members bring a combination of different knowledge, skills, and abilities (KSAs) to bear on a task. Team members can improve their KSAs by working with those who have different KSAs (McGrath 1984), and cross-training on different tasks can occur as a part of the natural workflow. As workers become capable of performing different subtasks, the workforce becomes more flexible. Members can also provide performance feedback to one another that they can use to adjust and improve their work behavior. Creating teams whose members have different KSAs provides an opportunity for synergistic combinations of ideas and abilities that might not be discovered by individuals working alone. Heterogeneity of abilities and personalities has been found to have a generally positive effect on team performance, especially when task requirements are diverse (Goodman et al. 1986; Shaw 1983). Other advantages include social facilitation and support. Facilitation refers to the fact that the presence of others can be psychologically stimulating. Research has shown that such stimulation can have a positive effect on performance when the task is well learned (Zajonc 1965) and when other team members are perceived as potentially evaluating the performer (Harkins 1987; Porter et al. 1987). With routine jobs, this arousal effect may counteract boredom and performance decrements (Cartwright 1968). Social support can be particularly important when teams face difficult or unpopular decisions. It can also be important in groups such as military squads and medical teams for helping workers deal with difficult emotional and psychological aspects of the tasks they perform.
Another advantage of teams is that they may increase the information exchanged between members. Communication can be increased through proximity and the sharing of tasks (McGrath 1984). Intrateam cooperation may also be improved because of team-level goals, evaluation, and rewards (Deutsch 1949; Leventhal 1976). Team rewards can be helpful in situations where it is difficult or impossible to measure individual performance or where workers mistrust supervisors' assessments of performance (Milkovich and Newman 1996). Increased cooperation and communication within teams can be particularly useful when workers' jobs are highly coupled. There are at least three basic types of coupling, or sequencing of work: pooled, sequential, and reciprocal. In pooled coupling, members share common resources but are otherwise independent. In sequential coupling, members work in a series: workers whose tasks come later in the process must depend on the performance of workers whose tasks come earlier. In reciprocal coupling, workers feed their work back and forth among themselves; members receive both inputs and outputs from other members (Thompson 1967; Mintzberg 1979). Team job design would be especially useful for workflows that have sequential or reciprocal coupling.
Many of the advantages of work teams depend on how teams are designed and supported by their organization. The nature of team tasks and their degree of control can vary. According to much of the theory behind team job design, which is primarily from the motivational approach, decision making and responsibility should be pushed down to the team members (Hackman 1987). By pushing decision making down to the team and requiring consensus, the organization should find greater acceptance, understanding, and ownership of decisions among workers (Porter et al. 1987). The increased autonomy resulting from making work decisions should be both satisfying and motivating for teams (Hackman 1987). The motivational approach would also suggest that the set of tasks assigned to a team should provide a whole and meaningful piece of work (i.e., have task identity) (Hackman 1987). This allows team members to see how their work contributes to a whole product or process, which might not be possible for individuals working alone. This can give workers a better idea of the significance of their work and create greater identification with a finished product or service. If team workers rotate among a variety of subtasks and cross-train on different operations, workers should also perceive greater variety in the work. Autonomy, identity, significance, variety, and feedback are all characteristics of jobs that have been found to enhance motivation. Finally, teams can be beneficial to the organization if team members develop a feeling of commitment and loyalty to their team (Cartwright 1968).
TABLE 3 Advantages of Work Teams
Group members learn from one another
Possibility of greater workforce flexibility with cross-training
Opportunity for synergistic combinations of ideas and abilities
New approaches to tasks may be discovered
Social facilitation and arousal
Social support for difficult tasks
Increased communication and information exchange between team members
Greater cooperation among team members
Beneficial for interdependent workflows
Greater acceptance and understanding of decisions when team makes decisions
Greater autonomy, variety, identity, significance, and feedback possible for workers
Commitment to the group may stimulate performance and attendance
TABLE 4 When to Design Jobs around Work Teams
1. Do the tasks require a variety of knowledge, skills, and abilities such that combining individuals with different backgrounds would make a difference in performance?
2. Is cross-training desired? Would breadth of skills and workforce flexibility be essential to the organization?
3. Could increased arousal, motivation, and effort to perform make a difference in effectiveness?
4. Can social support help workers deal with job stresses?
5. Could increased communication and information exchange improve performance rather than interfere?
6. Could increased cooperation aid performance?
7. Are individual evaluation and rewards difficult or impossible to make, or are they mistrusted by workers?
8. Could common measures of performance be developed and used?
9. Would workers' tasks be highly interdependent?
10. Is it technologically possible to group tasks in a meaningful and efficient way?
11. Would individuals be willing to work in groups?
12. Does the labor force have the interpersonal skills needed to work in groups?
13. Would group members have the capacity and willingness to be trained in interpersonal and technical skills required for group work?
14. Would group work be compatible with cultural norms, organizational policies, and leadership styles?
15. Would labor-management relations be favorable to group job design?
16. Would the amount of time taken to reach decisions, consensus, and coordination not be detrimental to performance?
17. Can turnover be kept to a minimum?
18. Can groups be defined as a meaningful unit of the organization, with identifiable inputs, outputs, and buffer areas that give them a separate identity from other groups?
19. Would members share common resources, facilities, or equipment?
20. Would top management support group job design?
Affirmative answers support the use of team work design.
will enhance productivity; however, if the norm is not one of commitment to productivity, cohesiveness can have a negative influence (Zajonc 1965; Stogdill 1972). The use of teams and team-level rewards can also decrease the motivating power of evaluation and reward systems. If team members are not evaluated for their individual performance, do not believe that their output can be distinguished from the team's, or do not perceive a link between their own performance and their outcomes, free riding or social loafing (Albanese and Van Fleet 1985; Cartwright 1968; Latane et al. 1979) can occur. In such situations, teams do not perform up to the potential expected from combining individual efforts. Finally, teams may be less flexible in some respects because they are more difficult to move or transfer as a unit than individuals (Sundstrom et al. 1990). Turnover, replacements, and employee transfers may disrupt teams. And members may not readily accept new members. Thus, whether work teams are advantageous or not depends to a great extent on the composition, structure, reward systems, environment, and task of the team. Table 4 presents questions that can help determine whether work should be designed around teams rather than individuals. The greater the number of questions answered in the affirmative, the more likely teams are to succeed and be beneficial. If one chooses to design work around teams, suggestions for designing effective work teams and avoiding problems are presented below.
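The Table 4 diagnosis can be sketched as a simple tally. The function name below is illustrative, not from the source, and the text offers no cutoff score, only that more affirmative answers make team-based design more likely to succeed, so the sketch just reports the tally:

```python
# Illustrative sketch: tallying affirmative answers to the Table 4 questions.
# The chapter gives no threshold, so no pass/fail judgment is computed here.

def team_design_tally(answers):
    """answers: one True/False per Table 4 question (20 in the full table).
    Returns (affirmative_count, total); per the table note, affirmative
    answers support the use of team work design."""
    return sum(answers), len(answers)

# Example: 15 of 20 questions answered affirmatively.
yes_count, total = team_design_tally([True, True, False, True] * 5)
```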
4.
IMPLEMENTATION ADVICE FOR JOB AND TEAM DESIGN
4.1.
When to Consider Design and Redesign of Work
There are at least eight situations when design or redesign of work should be considered.
1. When starting up or building a new plant or work unit. This is the most obvious application of job design.
2. During innovation or technological change. Continual innovation and technological change are important for survival in most organizations. These changes in procedures and equipment mean there are changes in job design. This is not unique to manufacturing jobs. The introduction of
to standard, the supervisor concluded that the incumbent was negligent and gave her a written reprimand. But the job was very poorly designed from a biological perspective. The employee had to operate a foot pedal while standing and thus spent all day with most of the body weight on one foot. She also had to bend over constantly while extending her arms to adjust the strips of wood, resulting in biomechanical stresses on the back, arms, and legs. Everyone hated the job, yet the employee was blamed.
As a final example, the authors discovered that a personnel-recruiter job in a large company was in need of improved mechanistic design. The job involved running the engineering co-op program, which consisted of hundreds of engineering students coming to the company and returning to school each semester. The recruiter had to match employees' interests with managers' needs, monitor everyone's unique schedule, keep abreast of the requirements of different schools, administer salary plans and travel reimbursement, and coordinate hire and termination dates. The job was completely beyond the capability of any recruiter. The solution involved having a team of industrial engineers study the job and apply the mechanistic approach to simplify tasks and streamline procedures.
It is clear that some types of jobs are naturally predisposed to be well designed on some job-design approaches and poorly designed on others. It may be in these latter regards that the greatest opportunities exist to benefit from job redesign. For example, many factory, service, and otherwise low-skilled jobs lend themselves well to mechanistic design, but the ideas of specialization and simplification of tasks and skill requirements can be applied to other jobs in order to reduce staffing difficulties and training requirements. Jobs can often be too complex or too large for employees,
leading to poor performance or excessive overtime. This is common with professional and managerial jobs, as was illustrated in the recruiter example above. Professional jobs are usually evaluated only in terms of the motivational approach to job design, but often they can be greatly improved by mechanistic design principles. Finally, if workload in an area temporarily rises without a corresponding increase in staffing levels, the mechanistic approach may be applied to the jobs to enhance efficiency.
Most managerial, professional, and skilled jobs are fairly motivational by their nature. Factory, service, and low-skilled jobs tend naturally not to be motivational. The latter clearly represent the most obvious examples of needed applications of the motivational approach. But there are many jobs in every occupational group, and aspects of almost every job, where motivational features are low. Application of motivational job design is often limited only by the creativity of the designer.
Jobs involving the operation of complex machinery (e.g., aircraft, construction, and factory control rooms) are primary applications of the perceptual / motor approach. Likewise, many product-inspection and equipment-monitoring jobs can tax the attention and concentration capabilities of workers. But jobs in many other occupations may also impose excessive attention and concentration requirements. For example, some managerial, administrative, professional, and sales jobs can be excessively demanding on the information-processing capabilities of workers, thus causing errors and stress. And nearly all jobs have periods of overload. Perceptual / motor design principles can often be applied to reduce these demands of jobs.
Traditional heavy industries (e.g., coal, steel, oil, construction, and forestry) represent the most obvious applications of the biological approach. Similarly, this approach also applies to many jobs that are common to most industries (e.g., production, maintenance) because there is some physical-demands component. Biological design principles can be applied to physically demanding jobs so that women can better perform them (e.g., lighter tools with smaller hand grips). But there may also be applications to less physically demanding jobs. For example, seating, size differences, and posture are important to consider in the design of many office jobs, especially those with computer terminals. This approach can apply to many light-assembly positions that require excessive wrist movements, which can eventually lead to the wrist ailment carpal tunnel syndrome. It should be kept in mind, however, that jobs designed with too little physical activity (i.e., movement restricted due to a single position or workstation) should be avoided. Likewise, jobs that require excessive travel should be avoided because they can lead to poor eating and sleeping patterns.
4.2.
Procedures to Follow
There are at least several general guiding philosophies that are helpful when designing or redesigning jobs:
1. As noted previously, designs are not fixed, unalterable, or dictated by the technology. There is at least some discretion in the design of all jobs and substantial discretion in most jobs.
2. There is no single best design for a given job; there are simply better and worse designs, depending on one's job-design perspective.
3. Job design is iterative and evolutionary. It should continue to change and improve over time.
4. When possible, participation of the workers affected generally improves the quality of the resulting design and acceptance of suggested changes.
5. Related to number 4, the process aspects of the project are very important to success. That is, how the project is conducted is important in terms of involvement of all the parties of interest, consideration of alternative motivations, awareness of territorial boundaries, and so on.
As noted previously, surveys of industrial job designers have consistently indicated that the mechanistic approach represents the dominant theme of job design (Davis et al. 1955; Taylor 1979). Other approaches to job design, such as the motivational approach, have not been given as much explicit consideration. This is not surprising because the surveys included only job designers trained in engineering-related disciplines, such as industrial engineers and systems analysts. It is not necessarily certain that other specialists or line managers would adopt the same philosophies. Nevertheless, there is evidence that even fairly naive job designers (i.e., college students taking management classes) also seem to adopt the mechanistic approach in job-design simulations. That is, their strategies for grouping tasks were primarily similarity of functions or activities, and also similarity of skills, education, difficulty, equipment, procedures, or location (Campion and Stevens 1989). Even though the mechanistic approach may be the most natural and intuitive, this research has also revealed that people can be trained to apply all four of the approaches to job design.
4.2.1.
Procedures for the Initial Design of Jobs or Teams
In consideration of process aspects of conducting a design project, Davis and Wa
cker (1982) have suggested a strategy consisting of four steps:
1. Evaluation may also be conducted on the outcomes from the redesign, such as employee satisfaction, error rates, and training times (e.g., Table 1). And it should be noted that some of the effects of job and team design are not always easy to demonstrate. Scientifically valid evaluations require experimental research strategies with control groups. Such studies may not always be possible in ongoing organizations, but often quasi-experimental and other field research designs are possible (Cook and Campbell 1979). Finally, the need for iterations and fine adjustments is identified through the follow-up evaluation.
4.3.
Methods for Combining Tasks
In many cases, designing jobs or teams is largely a function of combining tasks. Generally speaking, most writing on job design has focused on espousing overall design philosophies or on identifying those dimensions of jobs (once the jobs exist) that relate to important outcomes, but little research has focused on how tasks should be combined to form jobs in the first place. Some guidance can be
gained by extrapolating from the specific design recommendations in Table 2. For example, variety in the motivational approach can be increased by simply combining different tasks into the same job. Conversely, specialization from the mechanistic approach can be increased by including only very similar tasks in the same job. It is also possible when designing jobs to first generate alternative combinations of tasks and then evaluate them using the design approaches in Table 2. A small amount of research within the motivational approach has focused explicitly on predicting the relationships between combinations of tasks and the design of resulting jobs (Wong 1989; Wong and Campion 1991). This research suggests that the motivational quality of a job is a function of three task-level variables.
1. Task design. The higher the motivational quality of the individual tasks, the higher the motivational quality of the job. Table 2 can be used to evaluate the individual tasks; the motivational scores for the individual tasks can then be summed together. Summing is recommended rather than averaging because it includes a consideration of the number of tasks (Globerson and Crossman 1976). That is, both the motivational quality of the tasks and the number of tasks are important in determining the motivational quality of a job.
2. Task interdependence. Interdependence among the tasks has been shown to have an inverted-U relationship with the motivational quality of a job. That is, task interdependence is positively related to motivational value up to some moderate point; beyond that point, increasing interdependence leads to lower motivational value. Thus, when tasks are being combined to form motivational jobs, the total amount of interdependence among the tasks should be kept at a moderate level. Both complete independence among the tasks and excessively high interdependence should be avoided. Table 5 contains the dimensions of task interdependence and provides a questionnaire that can be used to measure interdependence. Table 5 can be used to judge the interdependence of each pair of tasks being evaluated for inclusion into a particular job.
3. Task similarity. Some degree of similarity among tasks may be the oldest rule of job design (as discussed previously), and it seems to have little influence on the motivational quality of the job. But beyond a moderate level, it tends to decrease the motivational value. Thus, when motivational jobs are being designed, high levels of similarity should be avoided. Similarity at the task-pair level can be judged in much the same manner as interdependence by using a subset of the dimensions in Table 5 (see the note).
Davis and Wacker (1982, 1987) have provided a list of criteria for grouping tasks into jobs. Part of their list is reproduced below. There are two points to notice. First, the list represents a collection of criteria from both the motivational approach to job design (e.g., 1, 5, 9) and the mechanistic approach (e.g., 2, 8). Second, many of the recommendations could be applied to designing work for teams as well as individual jobs.
1. Each set of tasks is a meaningful unit of the organization.
2. Task sets are separated by stable buffer areas.
3. Each task set has definite, identifiable inputs and outputs.
4. Each task set has associated with it definite criteria for performance evaluation.
5. Timely feedback about output states and feedforward about input states are available.
6. Each task set has resources to measure and control variances that occur within its area of responsibility.
7. Tasks are grouped around mutual cause-effect relationships.
8. Tasks are grouped around common skills, knowledge, or data bases.
9. Task groups incorporate opportunities for skill acquisition relevant to career advancement.
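The three task-level rules for combining tasks can be sketched in code. This is an illustrative scoring sketch only: the research cited above (Wong 1989; Wong and Campion 1991) reports relationships, not a closed-form formula, so the function name, scales, peak location, and penalty shapes below are all assumptions chosen to mirror the stated rules (sum task scores; inverted-U in interdependence; similarity harmless up to a moderate level, harmful beyond it):

```python
# Illustrative sketch, not a formula from the cited research. All numeric
# scales and the 0.5 "moderate" point are assumptions.

def job_motivational_score(task_scores, mean_interdependence, mean_similarity):
    """task_scores: per-task motivational ratings (e.g., from Table 2).
    mean_interdependence, mean_similarity: average task-pair ratings,
    rescaled here to 0.0-1.0 (e.g., from a Table 5-style questionnaire)."""
    # Rule 1: SUM (not average) the task scores, so the number of tasks counts.
    base = sum(task_scores)
    # Rule 2: inverted-U in interdependence -- peak at a moderate level
    # (0.5 here, an assumption); both extremes reduce motivational value.
    interdep_factor = 1.0 - abs(mean_interdependence - 0.5)
    # Rule 3: similarity matters little up to a moderate level, then hurts.
    if mean_similarity <= 0.5:
        similarity_factor = 1.0
    else:
        similarity_factor = 1.0 - (mean_similarity - 0.5)
    return base * interdep_factor * similarity_factor

# Comparing two candidate combinations of the same three tasks:
job_a = job_motivational_score([4, 3, 5], mean_interdependence=0.5,
                               mean_similarity=0.3)
job_b = job_motivational_score([4, 3, 5], mean_interdependence=0.95,
                               mean_similarity=0.9)
```

Under these assumptions, the moderately interdependent, dissimilar combination (job_a) scores higher than the highly interdependent, highly similar one (job_b), which is the qualitative pattern the three rules predict.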
4.4.
Individual Differences Among Workers
A common observation made by engineers and managers is that not all employees respond the same way to the same job. Some people on a given job have high satisfaction, while others on the very same job have low satisfaction. Some people seem to like all jobs, while others dislike every job. Clearly, there are individual differences in how people respond to their work. There has been a considerable amount of research looking at individual differences in reactions to the motivational approach to job design. It has been found that some people respond more positively (e.g., are more satisfied) than others to highly motivational work. These differences were initially considered to be reflections of underlying work ethic (Hulin and Blood 1968) but later were viewed more generally as differences in needs for personal growth and development (Hackman and Oldham 1980).
888 TABLE 6
PERFORMANCE IMPROVEMENT MANAGEMENT Preferences / Tolerances for Types of Work
Instructions: Indicate the extent to which each statement is descriptive of the
job incumbents' preferences and tolerances for types of work on the scale below. Circle answers to the right of each statement. Scores are calculated by averaging applicable items. Please use the following scale: (5) Strongly agree; (4) Agree; (3) Neither agree nor disagree; (2) Disagree; (1) Strongly disagree; ( ) Leave blank if you do not know or the item is not applicable.

Preferences / Tolerances for Mechanistic Design
I have a high tolerance for routine work. 1 2 3 4 5
I prefer to work on one task at a time. 1 2 3 4 5
I have a high tolerance for repetitive work. 1 2 3 4 5
I prefer work that is easy to learn. 1 2 3 4 5

Preferences / Tolerances for Motivational Design
I prefer highly challenging work that taxes my skills and abilities. 1 2 3 4 5
I have a high tolerance for mentally demanding work. 1 2 3 4 5
I prefer work that gives a great amount of feedback as to how I am doing. 1 2 3 4 5
I prefer work that regularly requires the learning of new skills. 1 2 3 4 5
I prefer work that requires me to develop my own methods, procedures, goals, and schedules. 1 2 3 4 5
I prefer work that has a great amount of variety in duties and responsibilities. 1 2 3 4 5

Preferences / Tolerances for Perceptual / Motor Design
I prefer work that is very fast paced and stimulating. 1 2 3 4 5
I have a high tolerance for stressful work. 1 2 3 4 5
I have a high tolerance for complicated work. 1 2 3 4 5
I have a high tolerance for work where there are frequently too many things to do at one time. 1 2 3 4 5

Preferences / Tolerances for Biological Design
I have a high tolerance for physically demanding work. 1 2 3 4 5
I have a fairly high tolerance for hot, noisy, or dirty work. 1 2 3 4 5
I prefer work that gives me some physical exercise. 1 2 3 4 5
I prefer work that gives me some opportunities to use my muscles. 1 2 3 4 5

Adapted from Campion (1988). See reference for reliability and validity information. Interpretations differ slightly across the scales. For the mechanistic and motivational designs, higher scores suggest more favorable reactions from incumbents to well-designed jobs. For the perceptual / motor and biological approaches, higher scores suggest less unfavorable reactions from incumbents to poorly designed jobs. Copyright 1988 by the American Psychological Association. Adapted with permission.
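The scoring rule above (average the applicable items, treating blanks as "do not know / not applicable") can be expressed as a short sketch. This is a minimal illustration, not part of Campion's published materials; the responses shown are hypothetical:

```python
def scale_score(responses):
    """Average the applicable items of one preference/tolerance scale.

    `responses` holds 1-5 ratings; None marks an item left blank
    (do not know / not applicable), which is excluded from the average.
    """
    applicable = [r for r in responses if r is not None]
    if not applicable:
        return None  # no applicable items -> no score for this scale
    return sum(applicable) / len(applicable)

# Hypothetical answers to the four mechanistic-design items.
mechanistic = [4, 5, None, 3]
print(scale_score(mechanistic))  # 4.0
```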
4.5.
Some Basic Decisions
Hackman and Oldham (1980) have provided five strategic choices that relate to implementing job redesign. They note that little research exists indicating the exact consequences of each choice and that correct choices may differ by organization. The basic decisions are given below:
1. Individual vs. group designs for work. A key initial decision is to either enrich individual jobs or create self-managing work teams. This also includes consideration of whether any redesign should be undertaken and its likelihood of success.
2. Theory-based vs. intuitive changes. This choice was basically defined as the motivational (theory) approach vs. no particular (atheoretical) approach. In the present chapter, this choice may be better framed as choosing among the four approaches to job design. However, as argued earlier, consideration of only one approach may lead to some costs or additional benefits being ignored.
3. Tailored vs. broadside installation. The choice here is between tailoring the changes to the individual employee or making the changes for all employees in a given job.
4. Participative vs. top-down change processes. The most common orientation, and that of this chapter, is that participative is best. However, there are costs to participation, including the
d without being retyped. Table 7 presents a scale that can be used to measure team-design characteristics. It can be used to evaluate input and process characteristics of teams. Background information and examples of the use of this measure can be found in Campion et al. (1993, 1996). Questionnaires may be used in several different contexts:
1. When designing new jobs. When a job does not yet exist, the questionnaire is used to evaluate proposed job descriptions, workstations, equipment, and so on. In this role, it often serves as a simple design checklist.
2. When redesigning existing jobs. When a job exists, there is a much greater wealth of information. Questionnaires can be completed by incumbents, managers, and engineers. Questionnaires can be used to measure job design before and after changes are made and to evaluate proposed changes.
Adapted from Campion et al. (1993). Scores for each preference / tolerance are c
alculated by averaging applicable items. Adapted with permission of Personnel Ps
ychology, Inc.
892
PERFORMANCE IMPROVEMENT MANAGEMENT
3. When diagnosing problem jobs. When problems occur, regardless of the apparent source of the problem, the job-design questionnaire can be used as a diagnostic device to determine whether any problems exist with the design of the jobs.
The administration of questionnaires can be conducted in a variety of ways. Employees can complete them individually at their convenience at their workstation or some other designated area, or they can complete them in a group setting. Group settings allow greater standardization of instructions and provide the opportunity to answer questions and clarify ambiguities. Managers and engineers can also complete the questionnaires either individually or in a group session. Engineers and analysts usually find that observation of the job site, examination of the equipment and procedures, and discussions with any incumbents or managers are important methods of gaining information on the job before completing the questionnaires.
Scoring for each job-design approach is usually accomplished by simply averaging the applicable items. Then the scores from different incumbents, managers, or engineers are combined by averaging (Campion 1988; Campion and McClelland 1991). The implicit assumption is that slight differences among respondents are to be expected because of legitimate differences in viewpoint. However, the absolute differences in scores should be examined on an item-by-item basis, and large discrepancies (e.g., more than one point) should be discussed to clarify possible differences in interpretation. It is often useful to discuss each item until a consensus group rating is reached.
The higher the score on a particular job-design scale, the better the quality of the design of the job based on that approach. Likewise, the higher the score on a particular item, the better the design of the job on that dimension. How high a score is needed cannot be stated in isolation. Some jobs are naturally higher or lower on the various approaches, as described previously, and there may be limits to the potential of some jobs. The scores are most valuable for comparing jobs or alternative job designs rather than for evaluating the absolute quality of a job design. However, a simple rule of thumb is that if the score for an approach is smaller than three, the job is poorly designed on that approach and should be reconsidered. Even if the average score on an approach is greater than three, examine any individual item scores that are two or one.
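The scoring and cross-checking procedure just described (average applicable items, combine respondents by averaging, flag item-level discrepancies of more than one point, and apply the rule of thumb about scores below three) can be sketched as follows. The function names and the ratings are hypothetical illustrations, not part of the cited studies:

```python
def approach_score(item_ratings):
    """Average the applicable (non-None) items for one job-design approach."""
    rated = [r for r in item_ratings if r is not None]
    return sum(rated) / len(rated)

def evaluate(ratings_by_respondent):
    """Combine respondents' ratings for one approach and flag concerns.

    `ratings_by_respondent` maps respondent -> list of item ratings
    (parallel item order). Returns (combined score, list of warnings).
    """
    warnings = []
    # Combine respondents by averaging their approach scores.
    scores = [approach_score(items) for items in ratings_by_respondent.values()]
    combined = sum(scores) / len(scores)

    all_items = list(ratings_by_respondent.values())
    for i, item_vals in enumerate(zip(*all_items)):
        vals = [v for v in item_vals if v is not None]
        # Discrepancies of more than one point should be discussed.
        if vals and max(vals) - min(vals) > 1:
            warnings.append(f"item {i + 1}: discrepancy > 1 point, discuss to consensus")
        # Individual items rated two or one deserve attention.
        if any(v is not None and v <= 2 for v in item_vals):
            warnings.append(f"item {i + 1}: rating of two or one, reexamine")

    # Rule of thumb: an approach score below three suggests poor design.
    if combined < 3:
        warnings.append("approach score < 3: job poorly designed on this approach")
    return combined, warnings

# Hypothetical ratings for a five-item motivational scale.
ratings = {"incumbent": [4, 4, 2, 5, 4], "manager": [4, 3, 4, 5, 4]}
score, flags = evaluate(ratings)  # score 3.9; item 3 flagged twice
```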
5.2.
Choosing Sources of Data
1. Incumbents. Incumbents are probably the best source of information if there is an existing job. In the area of job analysis, incumbents are considered subject matter experts on the content of their jobs. Also, having input into the job design can enhance the likelihood that suggested changes will be accepted. Involvement in such work-related decisions can enhance feelings of participation, thus increasing motivational job design in itself (see item 22 of the motivational scale in Table 2). One should include a large number of incumbents for each job because there can be slight differences in perceptions of the same job due to individual differences. Evidence suggests that one should include all incumbents or at least 10 incumbents for each job (Campion 1988; Campion and McClelland 1991).
2. Managers or supervisors. First-level managers or supervisors may be the next-most knowledgeable persons about an existing job. They may also provide information on jobs under development if they have insight into the jobs through involvement in the development process. Differences in perceptions of the same job among managers should be smaller than among incumbents, but slight differences will exist, and multiple managers should be used. Evidence suggests that one should include all managers with knowledge of the job or at least three to five managers for each job (Campion 1988; Campion and McClelland 1991).
3. Engineers or analysts. Engineers, if the job has not been developed yet, may be the only source of information because they are the only ones with insight into what the job will eventually look like. But also for existing jobs, an outside perspective by an engineer, analyst, or consultant may provide a more objective viewpoint. Again, there can be small differences among engineers, so at least two to five should evaluate each job (Campion and Thayer 1985; Campion and McClelland 1991).
5.3.
Evaluating Long-Term Effects and Potential Biases
It is important to recognize that some of the effects of job design may not be immediate, others may not be long lasting, and still others may not be obvious. The research has not tended to address these issues directly. In fact, these effects are offered here as potential explanations for some of the inconsistent findings in the literature. The purpose is simply to put the reader on the alert for the possibility of these effects. Initially, when jobs are designed and employees are new, or right after jobs are redesigned, there may be a short-term period of positive attitudes (often called a "honeymoon effect"). As the legendary Hawthorne studies indicated, changes in jobs or increased attention given to workers often tend to create novel stimulation and positive attitudes (Mayo 1933). Such transitory elevations in affect should not be mistaken for long-term improvements in satisfaction, as they may wear off over
evaluation above. Questionnaires based on Table 7 were administered to 391 clerical employees and 70 managers on 80 teams (Campion et al. 1993) and to 357 professional workers on 60 teams (Campion et al. 1996) to measure teams' design characteristics. Thus, two sources of data were used, both team members and managers, to measure the team-design characteristics. In both studies, effectiveness outcomes included the organization's satisfaction survey, which had been administered at a different time than the team-design characteristics questionnaire, and managers' judgments of team effectiveness. In the first study, several months of records of teams' productivity were also used to measure effectiveness. In the second study, employees' judgments of team effectiveness, managers' judgments of team effectiveness measured three months after the first managers' judgment measure, and the average of team members' most recent performance ratings were also used as outcome measures. Results indicated that all of the team-design characteristics had positive relationships with at least some of the outcomes. Relationships were strongest for process characteristics. Results also indicated that when teams were well designed according to the team-design approach, they were higher on both employee satisfaction and team-effectiveness ratings. One final cautionary note regarding evaluation: different sources (e.g., incumbents, managers) provide different perspectives and should always be included. Collecting data from a single source could lead one to draw different conclusions about a project than if one obtains a broader picture of results by using multiple sources of data.
REFERENCES
Albanese, R., and Van Fleet, D. D. (1985), "Rational Behavior in Groups: The Free-Riding Tendency," Academy of Management Review, Vol. 10, pp. 244-255.
Argyris, C. (1964), Integrating the Individual and the Organization, John Wiley & Sons, New York.
Astrand, P. O., and Rodahl, K. (1977), Textbook of Work Physiology: Physiological Bases of Exercise, 2nd Ed., McGraw-Hill, New York.
Babbage, C. (1835), "On the Economy of Machinery and Manufacturers, 4th Ed.," in Design of Jobs, 2nd Ed., L. E. Davis and J. C. Taylor, Eds., Goodyear, Santa Monica, CA, pp. 3-5.
Banker, R. D., Field, J. M., Schroeder, R. G., and Sinha, K. K. (1996), "Impact of Work Teams on Manufacturing Performance: A Longitudinal Study," Academy of Management Journal, Vol. 39, pp. 867-890.
Barnes, R. M. (1980), Motion and Time Study: Design and Measurement of Work, 7th Ed., John Wiley & Sons, New York.
Blauner, R. (1964), Alienation and Freedom, University of Chicago Press, Chicago.
Campion, M. A. (1988), "Interdisciplinary Approaches to Job Design: A Constructive Replication with Extensions," Journal of Applied Psychology, Vol. 73, pp. 467-481.
Campion, M. A. (1989), "Ability Requirement Implications of Job Design: An Interdisciplinary Perspective," Personnel Psychology, Vol. 42, pp. 1-24.
Campion, M. A., and Berger, C. J. (1990), "Conceptual Integration and Empirical Test of Job Design and Compensation Relationships," Personnel Psychology, Vol. 43, pp. 525-554.
Campion, M. A., and McClelland, C. L. (1991), "Interdisciplinary Examination of the Costs and Benefits of Enlarged Jobs: A Job Design Quasi-Experiment," Journal of Applied Psychology, Vol. 76, pp. 186-198.
Campion, M. A., and McClelland, C. L. (1993), "Follow-Up and Extension of the Interdisciplinary Costs and Benefits of Enlarged Jobs," Journal of Applied Psychology, Vol. 78, pp. 339-351.
Campion, M. A., and Stevens, M. J. (1989), "A Laboratory Investigation of How People Design Jobs: Naive Predispositions and the Influence of Training," in Academy of Management Best Papers Proceedings (Washington, DC), Academy of Management, Briarcliff Manor, NY, pp. 261-264.
Campion, M. A., and Thayer, P. W. (1985), "Development and Field Evaluation of an Interdisciplinary Measure of Job Design," Journal of Applied Psychology, Vol. 70, pp. 29-43.
Campion, M. A., and Thayer, P. W. (1987), "Job Design: Approaches, Outcomes, and Trade-offs," Organizational Dynamics, Vol. 15, No. 3, pp. 66-79.
Campion, M. A., Medsker, G. J., and Higgs, A. C. (1993), "Relations between Work Group Characteristics and Effectiveness: Implications for Designing Effective Work Groups," Personnel Psychology, Vol. 46, pp. 823-850.
Campion, M. A., Cheraskin, L., and Stevens, M. J. (1994), Job Rotation and Career Developmen
Grandjean, E. (1980), Fitting the Task to the Man: An Ergonomic Approach, Taylor & Francis, London.
Griffin, R. W. (1982), Task Design: An Integrative Approach, Scott-Foresman, Glenview, IL.
Griffin, R. W. (1989), "Work Redesign Effects on Employee Attitudes and Behavior: A Long-Term Field Experiment," in Academy of Management Best Papers Proceedings (Washington, DC), Academy of Management, Briarcliff Manor, NY, pp. 214-219.
Guzzo, R. A., and Shea, G. P. (1992), "Group Performance and Intergroup Relations in Organizations," in Handbook of Industrial and Organizational Psychology, 2nd Ed., Vol. 3, M. D. Dunnette and L. M. Hough, Eds., Consulting Psychologists Press, Palo Alto, CA, pp. 269-313.
Hackman, J. R. (1987), "The Design of Work Teams," in Handbook of Organizational Behavior, J. Lorsch, Ed., Prentice Hall, Englewood Cliffs, NJ, pp. 315-342.
Hackman, J. R., and Lawler, E. E. (1971), "Employee Reactions to Job Characteristics," Journal of Applied Psychology, Vol. 55, pp. 259-286.
Hackman, J. R., and Oldham, G. R. (1980), Work Redesign, Addison-Wesley, Reading, MA.
Hammond, R. W. (1971), "The History and Development of Industrial Engineering," in Industrial Engineering Handbook, 3rd Ed., H. B. Maynard, Ed., McGraw-Hill, New York.
Harkins, S. G. (1987), "Social Loafing and Social Facilitation," Journal of Experimental Social Psychology, Vol. 23, pp. 1-18.
Hertzberg, H. T. H. (1972), "Engineering Anthropology," in Human Engineering Guide to Equipment Design, Rev. Ed., H. P. Van Cott and R. G. Kinkade, Eds., U.S. Government Printing Office, Washington, DC, pp. 467-584.
Herzberg, F. (1966), Work and the Nature of Man, World, Cleveland.
Hofstede, G. (1980), Culture's Consequences, Sage, Beverly Hills, CA.
Hollenbeck, J. R., Ilgen, D. R., Tuttle, D. B., and Sego, D. J. (1995), "Team Performance on Monitoring Tasks: An Examination of Decision Errors in Contexts Requiring Sustained Attention," Journal of Applied Psychology, Vol. 80, pp. 685-696.
Hoppock, R. (1935), Job Satisfaction, Harper & Row, New York.
Hulin, C. L., and Blood, M. R. (1968), "Job Enlargement, Individual Differences, and Worker Responses," Psychological Bulletin, Vol. 69, pp. 41-55.
Hyer, N. L. (1984), "Management's Guide to Group Technology," in Group Technology at Work, N. L. Hyer, Ed., Society of Manufacturing Engineers, Dearborn, MI, pp. 21-27.
Isenberg, D. J. (1986), "Group Polarization: A Critical Review and Meta-Analysis," Journal of Personality and Social Psychology, Vol. 50, pp. 1141-1151.
Janis, I. L. (1972), Victims of Groupthink, Houghton-Mifflin, Boston.
Johansson, G., Aronsson, G., and Lindstrom, B. O. (1978), "Social Psychological and Neuroendocrine Stress Reactions in Highly Mechanized Work," Ergonomics, Vol. 21, pp. 583-599.
Karasek, R. A. (1979), "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign," Administrative Science Quarterly, Vol. 24, pp. 285-308.
Katz, H. C., Kochan, T. A., and Keefe, J. H. (1987), "Industrial Relations and Productivity in the U.S. Automobile Industry," Brookings Papers on Economic Activity, Vol. 3, pp. 685-727.
Kornhauser, A. (1965), Mental Health of the Industrial Worker: A Detroit Study, John Wiley & Sons, New York.
Latane, B., Williams, K., and Harkins, S. (1979), "Many Hands Make Light the Work: The Causes and Consequences of Social Loafing," Journal of Personality and Social Psychology, Vol. 37, pp. 822-832.
Leventhal, G. S. (1976), "The Distribution of Rewards and Resources in Groups and Organizations," in Advances in Experimental Social Psychology, Vol. 9, L. Berkowitz and E. Walster, Eds., Academic Press, New York, pp. 91-131.
Likert, R. (1961), New Patterns of Management, McGraw-Hill, New York.
Loher, B. T., Noe, R. A., Moeller, N. L., and Fitzgerald, M. P. (1985), "A Meta-Analysis of the Relation Between Job Characteristics and Job Satisfaction," Journal of Applied Psychology, Vol. 70, pp. 280-289.
Majchrzak, A. (1988), The Human Side of Factory Automation, Jossey-Bass, San Francisco.
Mayo, E. (1933), The Human Problems of an Industrial Civilization, Macmillan, New York.
McGrath, J. E. (1964), Social Psychology: A Brief Introduction, Holt, Rinehart & Winston, New York.
McGrath, J. E. (1984), Groups: Interaction and Performance, Prentice Hall, Englewood Cliffs, NJ.
Turnage, J. J. (1990), "The Challenge of New Workplace Technology for Psychology," American Psychologist, Vol. 45, pp. 171-178.
Turner, A. N., and Lawrence, P. R. (1965), Industrial Jobs and the Worker: An Investigation of Response to Task Attributes, Harvard Graduate School of Business Administration, Boston.
U.S. Department of Labor (1972), Handbook for Analyzing Jobs, U.S. Government Printing Office, Washington, DC.
Van Cott, H. P., and Kinkade, R. G., Eds. (1972), Human Engineering Guide to Equipment Design, Rev. Ed., U.S. Government Printing Office, Washington, DC.
Vroom, V. H. (1964), Work and Motivation, John Wiley & Sons, New York.
Walker, C. R., and Guest, R. H. (1952), The Man on the Assembly Line, Harvard University Press, Cambridge, MA.
Warr, P., and Wall, T. (1975), Work and Well-Being, Penguin, Maryland.
Welford, A. T. (1976), Skilled Performance: Perceptual and Motor Skills, Scott-Foresman, Glenview, IL.
Wong, C. S. (1989), "Task Interdependence: The Link Between Task Design and Job Design," Ph.D. dissertation, Purdue University, West Lafayette, IN.
Wong, C. S., and Campion, M. A. (1991), "Development and Test of a Task Level Model of Job Design," Journal of Applied Psychology, Vol. 76, pp. 825-837.
Woodson, W. E. (1981), Human Factors Design Handbook, McGraw-Hill, New York.
Zajonc, R. B. (1965), "Social Facilitation," Science, Vol. 149, pp. 269-274.
These jobs will nonetheless be valued similarly in the job structure. The market rates for these jobs, however, may be quite different. Other organizations may not value skill and effort equally. Or perhaps market supply is lower and market demand is higher for people capable of performing the skilled jobs, resulting in higher market wages for the "skill" jobs relative to the "effort" jobs. Thus, for the organization to attract the most qualified workers, it may have to offer wages that are higher than it would offer on the basis of internal consistency alone. The balance between internal consistency and external competitiveness is a key issue in any employer's compensation strategy. One firm may emphasize an integrated approach to all human resource management, and internal consistency of pay would be part of that strategy. If so, there would be a relatively close correspondence between its job structure and its pay structure. Another firm may emphasize the relationship between its pay level and pay levels in the labor market. In this firm, there may not be as close a correspondence between the company's job structure, as originally determined through job evaluation, and its pay structure. Indeed, as we shall discover, the firm may not even systematically develop a job structure through job evaluation, choosing rather to adopt the external market's evaluation of jobs (i.e., adopt the market rate wholesale without considering internal worth). This tension between value as assessed within an organization and value as assessed by competitors in the external labor market is but one of several conflicts that may arise in deciding on wages for jobs. Indeed, other actors have also influenced the wage-determination process.
1.1.
The In uence of Society and Its Values on Job Evaluation
In some societies, at different times through history, egalitarian value systems have been adopted by entire countries. An egalitarian philosophy implies a belief that all workers should be treated equally (Matthew 20:1-16). To some extent, this philosophy underlies the job-evaluation process in those remaining countries that can be classified as communist or socialist. Although some differentials do exist across different jobs, the size of these differentials is much smaller than if this societal influence were not present. Given the recent movement toward capitalism around the world, it is evident that an egalitarian policy may not continue to exert a strong influence over the valuation of jobs. A second example of societal impacts on wage determination is illustrated by the "just wage" doctrine (Cartter 1959). In the 13th century, skilled artisans and craftsmen began to prosper at the expense of nobles and landowners by selling goods and services to the highest bidders. The church and state reacted by proclaiming a schedule of "just wages" that tended to reflect that society's class structure and that were consistent with the prevailing notion of birthrights. In essence, the policy explicitly denied economic factors as appropriate determinants of pay. The proliferation of computers and the accompanying information explosion in the recent past have forever changed the way work is done. Not surprisingly, countless companies (like Bayer) have been forced to make "retain, reject, or redesign" decisions about their job-evaluation systems. Most have chosen the redesign option in order to keep the values that have made them so successful but incorporate their new perspectives regarding employee autonomy, teamwork, responsibility, and the like (Laabs 1997). Sometimes referred to as competencies or value drivers, job characteristics such as leadership required and customer impact are beginning to form the basis for a whole new set of compensable factors (Kanin-Lovers et al. 1995; McLagan 1997).
1.2.
The In uence of Individuals on Job Evaluation
Normally, great pains are taken to ensure that position evaluation is kept entirely independent from person evaluation (i.e., job evaluation is kept distinct from performance evaluation, which involves the evaluation of individuals as they perform jobs). Seasoned job evaluators counsel novices to determine the worth of a job independent of its incumbent. The focus should always be on the work, not the worker. After all, a job is relatively stable, whereas the person holding that job may change regularly. For the purposes of determining job worth, individuals are viewed as interchangeable. To deal with the distinction between job and person value, organizations traditionally have set upper and lower limits on job worth (called pay-grade minimums and maximums) and allowed salary to fluctuate within that grade as a function of individual performance or worth. For certain jobs, though, the worth of the job is inextricably linked to the incumbent performing the job (Pierson 1983). This exception is particularly evident for managerial and executive positions. The person's unique abilities and knowledge may shape the job. For these jobs, the relative importance of the individual occupying the job leads to increased emphasis on personal attributes in job valuation. The top jobs in almost any organization seem to be designed more around the talents and experience of the individuals involved than around any rigidly defined duties and responsibilities. For professional workers, too, the nature of their work and the knowledge they bring to the task may make it difficult to distinguish job worth from individual worth. Thus, for professionals such as scientists or engineers, pay may reflect individual attributes, accomplishments, or credentials (e.g., a B.S. in chemistry, a Ph.D. in engineering).
902
PERFORMANCE IMPROVEMENT MANAGEMENT
2.
TRADITIONAL JOB EVALUATION
The traditional way to value jobs involves a mix of internal organizational factors as well as external market conditions in setting pay rates. Various job-evaluation techniques have evolved different strategies for incorporating both of these essential influences into the wage-setting process. In spite of the long-standing existence and recent expansion of some alternative individual (such as commissions and bonuses), market-based (free-agent auctions), and parsimonious (delayering and broadbanding) compensation schemes, formal job evaluation continues to stand the test of time. Like the employment interview, which has been criticized harshly but is still most useful, job evaluation has been accused of being a "barrier to excellence" and an "institutional myth" (Emerson 1991; Quaid 1993). Nevertheless, it, too, remains an essential building block for human resource management. In fact, over 70% of the organizations in this country are estimated to use job evaluation (Bureau of National Affairs 1976). As noted in the following sections, for both the ranking method and the factor comparison method, external and internal factors are incorporated throughout the job-evaluation process. In the classification method and the point method, internal factors and external factors are considered separately at first and are later reconciled with each other. In the point method, for example, point totals denoting relative internal worth can be reconciled with market data through statistical procedures such as regression analysis. Determining which of the job-evaluation processes (outlined in the pages that follow) provides the best fit for a given organization depends on numerous considerations. One may be more appropriate than another, but there is no one best scheme (Fowler 1996).
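As a rough illustration of the reconciliation step mentioned above, a simple least-squares pay line can be fit to benchmark jobs' point totals and market wages; nonbenchmark jobs are then priced from the fitted line. The point totals and wages below are hypothetical:

```python
def fit_pay_line(points, wages):
    """Least-squares line reconciling internal job-evaluation points with
    external market wages: wage = intercept + slope * points."""
    n = len(points)
    mean_x = sum(points) / n
    mean_y = sum(wages) / n
    sxx = sum((x - mean_x) ** 2 for x in points)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(points, wages))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical benchmark jobs: point totals and market hourly wages.
points = [200, 300, 400, 500]
wages = [10.0, 13.0, 16.0, 19.0]
a, b = fit_pay_line(points, wages)
predicted = a + b * 350  # pay-line rate for a 350-point nonbenchmark job
```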
2.1.
Ranking Method
Ranking simply involves ordering the job descriptions from highest to lowest based on a predetermined definition of value or contribution. Three ways of ranking are usually considered: simple ranking, alternation ranking, and paired comparison ranking. Simple ranking requires that evaluators order or rank jobs according to their overall value to the organization. Alternation ranking involves ordering the job descriptions alternately at each extreme (e.g., as shown in Figure 1). Agreement is reached among evaluators on which job is the most valuable, then the least valuable. Job evaluators alternate between the next-most valued and next-least valued, and so on, until all the jobs have been ordered. For example, evaluators agreed that the job of master welder was the most valued of the six jobs listed above and receiving clerk was the least valued. Then they selected the most and least valued jobs from the four remaining titles on the list. After this, a final determination would be made between the last two jobs. The paired comparison method involves comparing all possible pairs of jobs under study. A simple way to do paired comparison is to set up a matrix, as shown in Figure 2. The higher-ranked job is entered in the cell. For example, of the shear operator and the electrician, the electrician is ranked higher. Of the shear operator and the punch press operator, the shear operator is ranked higher. When all comparisons have been completed, the job with the highest tally of "most valuable" rankings (the biggest winner) becomes the highest-ranked job, and so on. Some evidence suggests that the alternation ranking and paired comparison methods are more reliable (produce similar results more consistently) than simple ranking (Chesler 1948). Caution is required if ranking is chosen. The criteria or factors on which the jobs are ranked are usually so poorly defined (if they are specified at all) that the evaluatio
complexity often limits its usefulness (Benge et al. 1941). A simplified explanation of this method would include the following steps:
2.3.1.
Conduct Job Analysis
As with all job-evaluation methods, information about the jobs must be collected and job descriptions prepared. The factor comparison method differs, however, in that it requires that jobs be analyzed and described in terms of the compensable factors used in the plan. The originators of the method, Benge et al. (1941), prescribed five factors: mental requirements, skill requirements, physical factors, responsibility, and working conditions. They considered these factors to be universal (applicable to all jobs in all organizations) but allowed some latitude in the specific definition of each factor among organizations.
2.3.2.
Select Benchmark Jobs
The selection of benchmark jobs is critical because the entire method is based o
n them. Benchmark jobs (also called key jobs) serve as reference points. The exa
ct number of benchmarks required varies; some rules of thumb have been suggested
(15 to 25), but the number depends on the range and diversity of the work to be
evaluated.
2.3.3.
Rank Benchmark Jobs on Each Factor
Each benchmark job is ranked on each compensable factor. In Table 1, a job family consisting of six jobs is first ranked on mental requirements (rank of 1 is highest), then on experience / skills, and so on. This approach differs from the straight ranking plan in that each job is ranked on each factor rather than as a whole job.
2.3.4.
Allocate Benchmark Wages across Factors
Once each benchmark job is ranked on each factor, the next step is to allocate the current wages paid for each benchmark job among the compensable factors. Essentially, this is done by deciding how much of the wage rate for each benchmark job is associated with mental demands, how much with physical requirements, and so on, across all the compensable factors. This is done for each benchmark job and is usually based on the judgment of a compensation committee. For example, in Table 2, of the $5.80 per hour paid to the punch press operator, the committee had decided that $0.80 of it is attributable to the job's mental requirements, another $0.80 is attributable to the job's experience / skill requirements, $2.40 is attributable to the job's physical requirements, $1.10 is attributable to the job's supervisory requirements, and $0.70 is attributable to the job's other responsibilities. The total $5.80 is thus allocated among the compensable factors. This process is repeated for each of the benchmark jobs. After the wage for each job is allocated among that job's compensable factors, the dollar amounts for each factor are ranked. The job that has the highest wage allocation for mental requirements is ranked 1 on that factor, the next highest is 2, and so on. Separate rankings are done for the wage allocated to each compensable factor. In Table 3, the parts inspector position has more of its wages allocated to mental demands than does any other job, and so it receives the highest rank for that factor. There are now two sets of rankings. The first ranking is based on comparisons of each benchmark job on each compensable factor. It reflects the relative presence of each factor among the benchmark jobs. The second ranking is based on the proportion of each job's wages that is attributed to each factor. The next step is to see how well the two rankings agree.
TABLE 1  Factor Comparison Method: Ranking Benchmark Jobs by Compensable Factors

Benchmark Jobs             Mental         Experience /  Physical  Supervision  Other
                           Requirements   Skills        Factors                Responsibilities
A. Punch press operator         6              5            2          4              4
B. Parts attendant              5              3            3          6              1
C. Riveter                      4              6            1          1              3
D. Truck operator               3              1            6          5              6
E. Machine operator             2              2            4          2              5
F. Parts inspector              1              3            5          3              2

Rank of 1 is high. Source: Milkovich and Newman 1993.
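The allocate-then-rank step of 2.3.4 can be sketched as follows. The punch press operator's allocations come from the worked example in the text; the riveter and parts inspector figures are hypothetical stand-ins for Table 2:

```python
# Wage allocations ($/hour) across compensable factors: job -> {factor: dollars}.
# Punch press operator values follow the text; the other two jobs are hypothetical.
allocations = {
    "punch press operator": {"mental": 0.80, "skill": 0.80, "physical": 2.40,
                             "supervisory": 1.10, "other": 0.70},
    "riveter":              {"mental": 1.20, "skill": 1.40, "physical": 2.60,
                             "supervisory": 1.30, "other": 0.90},
    "parts inspector":      {"mental": 2.20, "skill": 1.60, "physical": 0.90,
                             "supervisory": 0.80, "other": 0.60},
}

def rank_allocations(allocations, factor):
    """Rank jobs on one factor by allocated dollars (rank 1 = highest)."""
    ordered = sorted(allocations, key=lambda job: allocations[job][factor],
                     reverse=True)
    return {job: rank for rank, job in enumerate(ordered, start=1)}

mental_ranks = rank_allocations(allocations, "mental")
# The parts inspector has the largest mental allocation, so it is ranked 1.
```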
2.3.5.
Compare Factor and Wage-Allocation Ranks
The two rankings are judgments based on comparisons of compensable factors and w
age distributions. They agree when each benchmark is assigned the same location
in both ranks. If there is disagreement, the rationale for the wage allocations
and factor rankings is reexamined. Both are judgments, so some slight tuning or
adjustments may bring the rankings into line. The comparison of the two rankings
is simply a cross-checking of judgments. If agreement cannot be achieved, then
the job is no longer considered a benchmark and is removed.
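The cross-check of the two rankings can be quantified in a simple way: flag the jobs whose positions differ, and, if desired, summarize overall agreement with a Spearman-style rank correlation. The ranks below are illustrative, not taken from Table 1.

```python
# Illustrative cross-check of the two rankings for one compensable factor:
# factor_ranks come from direct job-to-job comparison, wage_ranks from the
# wage allocations. Jobs where the two disagree are flagged for reexamination.

factor_ranks = {"A": 1, "B": 2, "C": 3, "D": 4}   # hypothetical
wage_ranks   = {"A": 1, "B": 3, "C": 2, "D": 4}   # hypothetical

disagreements = [job for job in factor_ranks
                 if factor_ranks[job] != wage_ranks[job]]

def spearman_rho(r1, r2):
    """Spearman rank correlation for two complete rankings of the same jobs."""
    n = len(r1)
    d2 = sum((r1[j] - r2[j]) ** 2 for j in r1)
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

rho = spearman_rho(factor_ranks, wage_ranks)
```

A low correlation, or many flagged jobs, signals that the wage allocations or factor rankings should be reexamined; jobs where agreement cannot be reached are dropped as benchmarks.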
2.3.6.
Construct Job Comparison Scale
Constructing a job-comparison scale involves slotting benchmark jobs into a scal
e for each factor based on the amount of pay assigned to each factor. Such a sca
le is illustrated in Figure 3. Under mental requirements, the punch press operat
or is slotted at $0.80, the parts attendant at $2.15, and so on. These slottings
correspond to the wage allocations shown in Figure 3.
2.3.7.
Apply the Scale
The job-comparison scale is the mechanism used to evaluate the remaining jobs. All the nonbenchmark jobs are now slotted into the scales under each factor at the dollar value thought to be appropriate.

Figure 3 Job Comparison Scale. (From Milkovich and Newman 1993)
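Applying the scale amounts to slotting a nonbenchmark job at a judged dollar value on each factor scale and summing those values into a wage. A minimal sketch, with wholly hypothetical slotted values:

```python
# Sketch of applying the job-comparison scale: a nonbenchmark job is slotted
# at a judged dollar value on each factor scale; its hourly wage is the sum
# of the per-factor slottings. All values below are hypothetical.

slotted = {
    "mental": 1.20,
    "experience_skill": 0.90,
    "physical": 1.60,
    "supervisory": 0.80,
    "other": 0.50,
}

wage = round(sum(slotted.values()), 2)  # hourly wage implied by the scale
```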
Compensable factors play a pivotal role in the point method. In choosing factors, an organization must decide: What factors are valued in our jobs? What factors will be paid for in the work we do? Compensable factors should possess the following characteristics:

Work Related: They must be demonstrably derived from the actual work performed in the organization. Some form of documentation (i.e., job descriptions, job analysis, employee and/or supervisory interviews) must support the factors. Factors that are embedded in a work-related logic can help withstand a variety of challenges to the pay structure. For example, managers often argue that the salaries of their subordinates are too low in comparison to other employees or that the salary offered to a job candidate is too low for the job. Union members may question their leaders about why one job is paid differently from another. Allegations of illegal pay discrimination may be raised. Line managers, union leaders, and compensation specialists must be able to explain differences in pay among jobs. Differences in factors that are work related help provide that rationale. Properly selected factors may even diminish the likelihood of these challenges arising.

Business Related: Compensable factors need to be consistent with the organization's culture and values, its business directions, and the nature of the work. Changes in the organization or its business strategies may necessitate changing factors. While major changes in organizations are not daily occurrences, when they do occur, the factors need to be reexamined to ensure that they are consistent with the new circumstances.
Acceptable to the Parties: Acceptance of the pay structure by managers and employees is critical. This is also true for the compensable factors used to slot jobs into the pay structure. To achieve acceptance of the factors, all the relevant parties' viewpoints need to be considered.

Discriminable: In addition to being work related, business related, and acceptable, compensable factors should have the ability to differentiate among jobs. As part of differentiating among jobs, each factor must be unique from other factors. If two factors overlap in what they assess in jobs, then that area of overlap will contribute disproportionately to total job points, which may bias the results. Factor definitions must also possess clarity of terminology so that all concerned can understand and relate to them.

There are two basic ways to select and define factors: adapt factors from an existing standard plan or custom design a plan. In practice, most applications fall between these two. Standard plans often are adjusted to meet the unique needs of a particular organization, and many custom-designed plans rely heavily on existing factors. Although a wide variety of factors are used in conventional, standard plans, they tend to fall into four generic groups: skills required, effort required, responsibility, and working conditions. These four were used originally in the National Electrical Manufacturers Association (NEMA) plan in the 1930s and are also included in the Equal Pay Act (1963) to define equal work (Gomberg 1947). The Hay System is perhaps the most widely used (Milkovich and Newman 1993). The three Hay factors are know-how, problem solving, and accountability (note that Hay Associates does not define its guide chart-profile method as a variation of the point method) (Hay Associates 1981).

Adapting factors from existing plans usually involves relying on the judgment of a task force or job-evaluation committee. More often than not, the committee is made up of key decision makers (or their representatives) from various functions (or units, such as finance, operations, engineering, and marketing). Approaches vary, but typically the process begins with a task force or committee representing key management players. Identifying compensable factors involves getting answers to one central question: Based on our operating and strategic objectives, what should we value and pay for in our jobs? Obviously, custom designing factors is time consuming and expensive. The argument in favor of it rests on the premise that these factors are more likely to be work related, business related, and acceptable to the employees involved.

In terms of the optimal number of factors, it is generally recommended to stay below 10 in order to avoid dilution of effect, information overload, and factor redundancy. Five to seven factors are usually a manageable number to capture the essence of jobs in an organization. With regard to the number of total points to be allocated across the factors, most firms choose either 500 or 1000 points.
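For illustration, distributing a 1000-point total across a small factor set by committee weights might look like the sketch below. The factors and weight values are hypothetical, not taken from any cited plan.

```python
# Hypothetical point-plan setup: a 1000-point total is distributed across
# five factors according to committee-assigned weights (which sum to 100%).

total_points = 1000
weights = {"skill": 0.35, "effort": 0.20, "responsibility": 0.25,
           "working_conditions": 0.10, "supervision": 0.10}

# Weights must sum to 100% of job value.
assert abs(sum(weights.values()) - 1.0) < 1e-9

factor_points = {f: round(total_points * w) for f, w in weights.items()}
```

Each factor's point budget is then subdivided across its degrees, which is where the factor scales of the next section come in.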
2.4.3.
Establish Factor Scales
Once the factors to be included in the plan are chosen, scales reflecting the different degrees within each factor are constructed. Each degree may also be anchored by the typical skills, tasks, and behaviors taken from benchmark jobs that illustrate each factor degree. Table 4 shows the National Metal Trades Association's scaling for the factor of knowledge. Belcher (1974) suggests the following criteria for determining degrees:
1. Limit to the number necessary to distinguish among jobs.
2. Use understandable terminology.
3. Anchor degree definitions with benchmark job titles.
4. Make it apparent how the degree applies to the job.
Using too many degrees makes it difficult for evaluators to accurately choose the appropriate degree and may result in a wide variance in total points assigned by different evaluators. The threat this poses to acceptance of the system is all too apparent. Some plans employ 2D grids to define degrees. For example, in the Hay plan, degrees of the factor know-how are described by four levels of managerial know-how (limited, related, diverse, and comprehensive) and eight levels of technical know-how (ranging from professional mastery through elementary vocational). An evaluator may select among at least 32 (4 × 8) different combinations of managerial and technical know-how to evaluate a job.
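A 2D degree grid like the Hay know-how chart can be sketched as a simple lookup table. The cell values and the technical-level labels below are invented for illustration; the actual Hay chart values are proprietary (the source names only the endpoints, elementary vocational through professional mastery).

```python
# Sketch of a 2D degree grid: an evaluator picks one managerial level and one
# technical level; the cell gives the know-how score. Cell values and the
# "T1".."T8" technical labels are hypothetical, not the Hay chart numbers.

managerial = ["limited", "related", "diverse", "comprehensive"]  # 4 levels
technical = [f"T{i}" for i in range(1, 9)]  # 8 levels, low to high

# Hypothetical score grid: rows = managerial levels, columns = technical levels.
grid = [[50 + 25 * (m + t) for t in range(len(technical))]
        for m in range(len(managerial))]

def know_how_score(m_level, t_level):
    """Look up the score for one of the 4 x 8 = 32 combinations."""
    return grid[managerial.index(m_level)][technical.index(t_level)]

score = know_how_score("diverse", "T7")
```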
2.4.4.
Establish Factor Weights
Once the degrees have been assigned, the factor weights must be determined. Factor weights are important because different weights reflect differences in importance attached to each factor by the employer. There are two basic methods used to establish factor weights: committee judgment and statistical analysis. In the first, a standing compensation committee or a team of employees is asked to allocate 100% of value among the factors. Some structured decision process such as Delphi or other nominal group technique may be used to facilitate consensus (Elizur 1980). For the statistical method, which typically utilizes multiple regression analysis, the weights are empirically derived in
ns that illustrate each degree for each respective factor. Thus, the evaluator c
hooses a degree
ation. For some jobs and some organizations, market wage levels and ability to p
ay are virtually the only determinants of compensation levels. An organization i
n a highly competitive industry may, by necessity, merely price jobs according t
o what the market dictates. For most companies, however, to take all their jobs
(which may number in the hundreds or thousands) and compare them to the market i
s not realistic. One can only imagine the effort required for a company to condu
ct and / or participate in wage surveys for thousands of jobs every year. Altern
atively, one computer company was able to slot thousands of jobs into 20 pay gra
des using a version of the point factor method. Market pricing basically involve
s setting pay structures almost exclusively through reliance on rates paid in th
e external market. Employers following such an approach typically match a large
percentage of their jobs with market data and collect as much summarized market
data as possible. Opting for market pricing usually reflects more of an emphasis on external competitiveness and less of a focus on internal consistency (the relationships among jobs within the firm).
performed. Another complication can be the bureaucracy that tends to accompany the administration of job evaluation. Job evaluation sometimes seems to exist for its own sake, rather than as an aid to achieving the organization's mission (Burns 1978). So an organization is best served by initially establishing its objectives for the process and using these objectives as a constant guide for its decisions.
4.3.
Specifying the Micro Objectives of Job Evaluation
Some of the more micro objectives associated with job evaluation include:
• Help foster equity by integrating pay with a job's contributions to the organization.
• Assist employees to adapt to organization changes by improving their understanding of job content and what is valued in their work.
• Establish a workable, agreed-upon pay structure.
• Simplify and rationalize the pay relationships among jobs, and reduce the role that chance, favoritism, and bias may play.
• Aid in setting pay for new, unique, or changing jobs.
• Provide an agreed-upon device to reduce and resolve disputes and grievances.
• Help ensure that the pay structure is consistent with the relationships among jobs, thereby supporting other human resource programs such as career planning, staffing, and training.
4.4.
Choosing the Job-Evaluation Method
Obviously, the organization should adopt a job-evaluation method that is consistent with its job-evaluation objectives. More fundamentally, however, the organization should first decide whether job evaluation is necessary at all. In doing so, it should consider the following questions (Hills 1989):
• Does management perceive meaningful differences between jobs?
• Can legitimate criteria for distinguishing between jobs be articulated and operationalized?
• Will job evaluation result in clear distinctions in employees' eyes?
• Are jobs stable, and will they remain stable in the future?
• Is traditional job evaluation consistent with the organization's goals and strategies?
• Do the benefits of job evaluation outweigh its costs?
• Can job evaluation help the organization be more competitive?

Many employers design different job-evaluation plans for different job families. They do so because they believe that the work content of various job families is too diverse to be adequately evaluated using the same plan. For example, production jobs may vary in terms of working conditions and the physical skills required. But engineering and marketing jobs do not vary on these factors, nor are those factors particularly important in engineering or marketing work. Rather, other factors, such as technical knowledge and skills and the degree of contact with external customers, may be relevant. Another category of employees that might warrant special consideration, primarily due to supervisory responsibilities, is management. The most common criteria for determining different job families include similar knowledge/skill/ability requirements, common licensing requirements, union jurisdiction, and career paths.

Those who argue for multiple plans, each with unique compensable factors, claim that different job families have different and unique work characteristics. Designing a single set of compensable factors capable of universal application, while technically feasible, risks emphasizing generalized commonalities among jobs and minimizing uniqueness and dissimilarities. Accurately gauging the similarities and dissimilarities in jobs is critical to establishing and justifying pay differentials. Therefore, more than one plan is often used for adequate evaluation. Rather than using either a set of company-wide universal factors or entirely different sets of factors for each job family, some employers, such as Hewlett-Packard, start with a core set of common factors, then add other sets of specialized factors unique to particular occupational or functional areas (finance, manufacturing, software and systems, sales, management). These companies' experiences suggest that unique factors tailored to different job families are more likely to be both acceptable to employees and managers and easier to verify as work related than are generalized universal factors.
4.5.
Deciding Who Will Participate in the Job Evaluation Process
Who should be involved in designing job evaluation? The choice is usually among
compensation professionals, managers, and / or job incumbents. If job evaluation
is to be an aid to managers and if
ving conflicts about pay differences that inevitably arise over time.
5.1.
Handling Appeals and Reviews
Once the job structure or structures are established, compensation managers must
ensure that they remain equitable. This requires seeing that jobs that employee
s feel have been incorrectly evaluated are reanalyzed and reevaluated. Likewise,
new jobs or those that experience significant changes must be submitted to the evaluation process.
5.2.
Training Job Evaluators
Once the job-evaluation system is complete, those who will be conducting job ana
lyses and evaluations will require training, especially those evaluators who com
e from outside the human resource department. These employees may also need back
ground information on the entire pay system and how it is related to the overall
strategies for managing human resources and the organization's objectives.
5.3.
Approving and Communicating the Results of the Job-Evaluation Process
When the job evaluations are completed, approval by higher levels in the organiz
ation (e.g., Vice President of Human Resources) is usually required. Essentially
, the approval process serves as a control. It helps ensure that any changes tha
t result from job evaluation are consistent with the organization's overall strategies and human resource practices. The particular approval process differs among
organizations. Figure 4 is one example. The emphasis on employee and manager un
derstanding and acceptance of the job-evaluation system requires that communicat
ions occur during the entire process. Toward the end of the process,
Figure 4 Job Evaluation Approval Process. (From Milkovich and Newman 1993)
the goals of the system, the parties' roles in it, and the results should
be thoroughly explained to all employees.
5.4.
Using Information Technology in the Job-Evaluation Process
Almost every compensation consulting firm offers a computer-based job-evaluation system. Their software does everything from analyzing the job-analysis questions to providing computer-generated job descriptions to predicting the pay classes for each job. Some caution is required, however, because "computer assisted" does not always mean that a more efficient, more acceptable, or cheaper approach will evolve. The primary advantages of computer-aided job evaluation, according to its advocates, include:
• Alleviation of the heavy paperwork and tremendous time saving
• Marked increase in the accuracy of results
• Creation of more detailed databases
• Opportunity to conduct improved analyses (Rheaume and Jones 1988)

The most advanced use of computers for job evaluation is known as an expert system. Using the logic built by subject matter experts and coded into the computer, this software leads a job evaluator through a series of prompted questions as part of a decision tree to arrive at a job-evaluation decision (Mahmood et al. 1995). But even with the assistance of computers, job evaluation remains a subjective process that involves substantial judgment. The computer is only a tool, and misused, it can generate impractical, illogical, and absurd results (Korukonda 1996).
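The expert-system idea, prompted questions traversing a decision tree, can be sketched minimally. The questions and degree outcomes below are invented for illustration; real commercial systems encode far richer subject-matter-expert logic.

```python
# Minimal sketch of a decision-tree job evaluator: each internal node asks a
# yes/no question; leaves carry a (hypothetical) degree assignment.

tree = {
    "question": "Does the job supervise others?",
    "yes": {
        "question": "More than 10 direct reports?",
        "yes": {"degree": 4},
        "no": {"degree": 3},
    },
    "no": {"degree": 1},
}

def evaluate(node, answers):
    """Walk the tree using a dict of question -> 'yes'/'no' answers."""
    while "degree" not in node:
        node = node[answers[node["question"]]]
    return node["degree"]

degree = evaluate(tree, {"Does the job supervise others?": "yes",
                         "More than 10 direct reports?": "no"})
```

As the surrounding text stresses, such a tool only mechanizes judgments that humans must still supply and review.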
5.5.
Future Trends in the Job Evaluation Process
Job evaluation is not going to go away. It has emerged and evolved through the i
ndustrial, and now the informational, revolution. Unless everyone is paid the sa
me, there will always be a need to establish and institutionalize a hierarchy of
jobs in the organization. The process should, and will, continue to be improved
upon. The use of computer software will dramatically simplify the administrativ
e burdens of job evaluation. Furthermore, new technologies and processes will en
able organizations to combine internal job-evaluation information with labor mar
ket data to strengthen the internal consistency-external competitiveness model discussed above.
6.
EVALUATING THE JOB-EVALUATION SYSTEM
Job evaluation can take on the appearance of a bona fide measurement instrument (objective, numerical, generalizable, documented, and reliable). If it is viewed as such, then job evaluation can be judged according to precise technical standards. Just as with employment tests, the reliability and validity of job-evaluation plans should be ascertained. In addition, the system should be evaluated for its utility, legality, and acceptability.
6.1.
Reliability: Consistent Results
Job evaluation involves substantial judgment. Reliability refers to the consiste
ncy of results obtained from job evaluation conducted under different conditions
. For example, to what extent do different job evaluators produce similar result
s for the same job? Few employers or consulting firms report the results of their studies. However, several research studies by academics have been reported (Arvey
1986; Schwab 1980; Snelgar 1983; Madigan 1985; Davis and Sauser 1993; Cunningha
m and Graham 1993; Supel 1990). These studies present a mixed picture; some repo
rt relatively high consistency (different evaluators assign the same jobs the sa
me total point scores), while others report lower
The hit rate approach focuses on the ability of the job-evaluation plan to replicate a predetermined, agreed-upon job structure. The agreed-upon structure, as discussed earlier, can be based on one of several criteria. The jobs' market rates, a structure negotiated with a union or a management committee, and rates for jobs held predominantly by men are all examples. Figure 5 shows the hit rates for a hypothetical job-evaluation plan. The agreed-upon structure has 49 benchmark jobs in it. This structure was derived through negotiation among managers serving on the job-evaluation committee, along with market rates for these jobs. The new point factor job-evaluation system placed only 14, or 29%, of the jobs into their current (agreed-upon) pay classes. It came within ±1 pay class for 82% of the jobs in the agreed-upon structure. In a study conducted at Control Data Corporation, the reported hit rates for six different types of systems ranged from 49 to 73% of the jobs classified within ±1 class of their current agreed-upon classes (Gomez-Mejia et al. 1982). In another validation study, Madigan and Hoover applied two job-evaluation plans (a modification of the federal government's factor evaluation system and the position analysis questionnaire) to 206 job classes for the State of Michigan (Madigan and Hoover 1986). They reported hit rates ranging from 27 to 73%, depending on the scoring method used for the job-evaluation plans. Is a job-evaluation plan valid (i.e., useful) if it can correctly slot only one-third of the jobs in the right level? As with so many questions in compensation, the answer is "it depends." It depends on the alternative approaches available, on the costs involved in designing and implementing these plans, and on the magnitude of errors involved in missing a direct hit. If, for example, being within ±1 pay class translates into several hundred dollars in pay, then employees probably are not going to express much confidence in the validity of this plan. If, on the other hand, the pay difference within ±1 class is not great, or the plan's results are treated only as an estimate to be adjusted by the job-evaluation committee, then its validity (usefulness) is more likely.
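The hit-rate arithmetic above is easy to reproduce: compare each job's predicted pay class with its agreed-upon class, then count exact hits and hits within ±1 class. The class assignments below are synthetic, not the 49-job structure of Figure 5.

```python
# Hit-rate check for a job-evaluation plan: fraction of jobs placed in the
# agreed-upon pay class (exact hit) and within +/-1 class. Data is synthetic.

agreed    = [3, 5, 2, 7, 4, 4, 6, 1, 5, 3]   # agreed-upon pay classes
predicted = [3, 6, 2, 5, 4, 3, 6, 1, 5, 2]   # classes from the new plan

exact = sum(a == p for a, p in zip(agreed, predicted)) / len(agreed)
within_one = sum(abs(a - p) <= 1 for a, p in zip(agreed, predicted)) / len(agreed)
```

Here 6 of 10 jobs are exact hits (60%) and 9 of 10 fall within ±1 class (90%), the same two statistics quoted for the plans discussed above.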
6.2.2.
Convergence of Results
Job-evaluation plans can also be judged by the degree to which different plans y
ield similar results. The premise is that convergence of the results from indepe
ndent methods increases the chances that the results, and hence the methods, are
valid. Different results, on the other hand, point to lack of validity. For the
best study to date on this issue, we again turn to Madigan's report on the results of three job-evaluation plans (guide chart, PAQ, and point plan) (Madigan 1985
). He concludes that the three methods generate different and inconsistent job s
tructures. Further, he states that the measurement adequacy of these three metho
ds is open to serious question. An employee could have received up to $427 per m
onth more (or less), depending on the job-evaluation method used. These results
are provocative. They are consistent with the proposition that job evaluation, a
s traditionally practiced and described in the literature, is not a measurement
procedure. This is so because it fails to consistently exhibit properties of rel
iability and validity. However, it is important
Figure 5 Illustration of Plan's Hit Rate as a Method to Judge the Validity of Job Evaluation Results. (From Milkovich and Newman 1993)
to maintain a proper perspective in interpreting these results. To date, the res
earch has been limited to only a few employers. Further, few compensation profes
sionals seem to consider job evaluation a measurement tool in the strict sense o
f that term. More often, it is viewed as a procedure to help rationalize an agre
ed-upon pay structure in terms of job- and business-related factors. As such, it
becomes a process of give and take, not some immutable yardstick.
6.3.
Utility: Cost-Efficient Results
The usefulness of any management system is a function of how well it accomplishes its objectives (Lawler 1986). Job evaluation is no different; it needs to be judged in terms of its objectives. Pay structures are intended to influence a wide variety of employee behaviors, ranging from staying with an employer to investing in additional training and willingness to take on new assignments. Consequently, the structures obtained through job evaluation should be evaluated in terms of their ability to affect such decisions. Unfortunately, little of this type of evaluation seems to be done. The other side of utility concerns costs. How costly is job evaluation? Two types of costs associated with job evaluation can be identified: (1) design and administration costs and (2) labor costs that result from pay structure changes recommended by the job-evaluation process. The labor cost effects will be unique for each application. Winstanley offers a rule of thumb of 1 to 3% of covered payroll (Winstanley). Experience suggests that costs can range from a few thousand dollars for a small organization to over $300,000 in consultant fees alone for major projects in firms like Digital Equipment, 3M, TRW, or Bank of America.
6.4.
Nondiscriminatory: Legally Defensible Results
Much attention has been directed at job evaluation as both a potential source of bias against women and as a mechanism to reduce bias (Treiman and Hartmann 1981). We will discuss some of the studies of the effects of gender in job evaluation and then consider some recommendations offered to ensure bias-free job evaluation. It has been widely speculated that job evaluation is susceptible to gender bias. To date, three ways that job evaluation can be biased against women have been studied (Schwab and Grams 1985). First, direct bias occurs if jobs held predominantly by women are undervalued relative to jobs held predominantly by men, simply because of the jobholders' gender. The evidence to date is mixed regarding the proposition that the gender of the jobholder influences the evaluation of the job. For example, Arvey et al. found no effects on job-evaluation results when they varied the gender of jobholders using photographs and recorded voices (Arvey et al. 1977). In this case, the evaluators rightfully focused on the work, not the worker. On the other hand, when two different job titles (special assistant-accounting and senior secretary-accounting) were studied, people assigned lower job-evaluation ratings to the female-stereotyped title "secretary" than to the more gender-neutral title "assistant" (McShane 1990).

The second possible source of gender bias in job evaluation flows from the gender of the individual evaluators. Some argue that male evaluators may be less favorably disposed toward jobs held predominantly by women. To date, the research finds no evidence that the job evaluator's gender or the predominant gender of the job-evaluation committee biases job-evaluation results (Lewis and Stevens 1990).

The third potential source of bias affects job evaluation indirectly through the current wages paid for jobs. In this case, job-evaluation results may be biased if the jobs held predominantly by women are incorrectly underpaid. Treiman and Hartmann argue that women's jobs are unfairly underpaid simply because women hold them (Treiman and Hartmann 1995). If this is the case, and if job evaluation is based on the current wages paid in the market, then the job-evaluation results simply mirror any bias that exists in current pay rates. Considering that many job-evaluation plans are purposely structured to mirror the existing pay structure, it should not be surprising that the current wages for jobs influence the results of job evaluation, which accounts for this perpetual reinforcement.

In one study, 400 experienced compensation administrators were sent information on current pay, market, and job-evaluation results (Rynes et al. 1989). They were asked to use this information to make pay decisions for a set of nine jobs. Half of the administrators received jobs linked to men (i.e., over 70% of job holders were men, such as security guards), and the jobs given to the other half were held predominantly by women (e.g., over 70% of job holders were women, such as secretaries). The results revealed several things: (1) market data had a substantially larger effect on pay decisions than did job evaluations or current pay data; (2) the job's gender had no effects; (3) there was a hint of possible bias against physical, nonoffice jobs relative to white-collar office jobs. This study is a unique look at several factors that may affect pay structures. Other factors, such as union pressures and turnover of high performers, that also affect job-evaluation decisions were not included. The implications of this evidence are important. If, as some argue, market rates and current pay already reflect gender bias, then these biased pay rates could work indirectly through the job-evaluation process to deflate the evaluation of jobs held primarily by women (Grams and Schwab
many industrial engineering situations, staff members are already in place, and even if personnel selection is an option, it is performed by other agencies, such as the human resources
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
have challenged these conclusions, suggesting that specific aptitudes are more important in predicting job performance.
2.1.2.
Simulations
Simulations are increasingly used as predictors in selection situations. Simulations are representations of job situations. Although presumably task relevant, they are nonetheless abstractions of real-world job tasks. Many simulations used in selection occur in so-called assessment centers, in which multiple techniques are used to judge candidates. In such situations, people are often appraised in groups so that both interpersonal variables and individual factors can be directly considered. Simulation exercises are often used to elicit behaviors considered relevant to particular jobs (or tasks that comprise the jobs) for which people are being selected. This includes so-called in-basket simulations for managerial positions, where hypothetical memos and corporate agenda matters are provided to job candidates in order to assess their decision-making (and other) skills. Recently, computer-based simulations, ranging from low-fidelity games to highly sophisticated complex environments coupled with detailed performance-measurement systems, have been developed for use as predictors in selection and assessment situations. One example of this approach is team performance assessment technology (TPAT) (e.g., Swezey et al. 1999). This and similar methods present simulated task environments that assess how individuals and/or teams develop plans and strategies and adapt to changes in fluctuating task demands. This technology exposes respondents to complex environments that generate task-relevant challenges. At known times, participants have an opportunity to engage in strategic planning; at other times, emergencies may require decisive action. These technologies are designed to allow participants to function across a broad range of situations that include unique demands, incomplete information, and rapid change. They often employ a performance-measurement technology termed "quasi-experimental" (Streufert and Swezey 1985). Here, fixed events of importance require that each participant deal with precise issues under identical conditions. Other events, however, are directly influenced by actions of participants. In order to enable reliable measurement, fixed (preprogrammed) features are inserted to allow for comparisons among participants and for performance assessment against predetermined criteria. Other methodologies of this sort include the tactical naval decision making system (TANDEM), a low-fidelity simulation of a command, control, and communication environment (Dwyer et al. 1992; Weaver et al. 1993), and the team interactive decision exercise for teams incorporating distributed expertise (TIDE2) (Hollenbeck et al. 1991).
2.1.3.
Interviews and Biodata
Most reviews concerning the reliability and validity of interviews as a selectio
n device have ended with the depressing but persistent conclusion that they have n
either (Guion 1991, p. 347). Then why are they used at all? Guion provides four ba
sic reasons for using interviews as selection devices in employment situations:
1. They serve a public relations role. Even if rejected, a potential employee who was interviewed in a competent, professional manner may leave with (and convey to others) the impression that he was treated fairly.
2. They can be useful for gathering ancillary information about a candidate. Although collecting some forms of ancillary information (such as personal, family, and social relationship data) is illegal, other (perfectly legitimate) information on such topics as work history and education that may have otherwise been unavailable can be collected
924
PERFORMANCE IMPROVEMENT MANAGEMENT
structs, such as abilities, achievement, etc.); and (2) content (or job/task-related) validities. See Guion (1991) for a discussion of these topics. Various U.S. court rulings have mandated that job-related validities be considered in establishing test fairness for most selection and classification situations. These issues are addressed in the latest Standards for Educational and Psychological Testing (1999) document.
2.3.
Performance Measurement and Criterion Assessment
An oft-quoted principle is that the best predictor of future performance is past performance. However, current performance can be a good predictor as well. In selecting personnel to perform specific tasks, one may look at measurements of their performance on similar tasks in the past (e.g., performance appraisals, certification results, interviews, samples of past work). As noted earlier, one may also measure their current performance on the tasks for which one is selecting them, either by assessing them on a simulation of those tasks or, when feasible, by using work samples. A combination of both types of measures may be most effective. Measures of past performance show not only what personnel can do, but what they have done. In other words, past performance reflects motivation as well as skills. A disadvantage is that, because past performance occurred under different circumstances than those under which the candidates will perform in the future, the ability to generalize from past measures is necessarily limited. Because measures of current performance show what candidates can do under the actual (or simulated) conditions in which they will be performing, using both types of measures provides a more thorough basis on which to predict actual behaviors. Whether based on past performance or current performance, predictors must not only be accurate, they must also be relevant to the actual tasks to be performed. In other words, the criteria against which candidates' performance is assessed must match those of the tasks to be performed (Swezey 1981). One way to ensure that predictors match tasks is to use criterion-referenced assessment of predictor variables for employee selection. To do this, one must ensure that the predictors match the tasks for which one is selecting candidates as closely as possible on three relevant dimensions: the conditions under which the tasks will be performed, the actions required to perform the tasks, and the standards against which successful task performance will be measured. While it is better to have multiple predictors for each important job task (at least one measure of past performance and one of current performance), it may not always be possible to obtain multiple predictors. We do not recommend using a single predictor for multiple tasks: such a predictor is likely to be neither reliable nor valid.
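The three-dimension match just described can be sketched as a simple screening routine. This is an illustrative sketch only; the field names and example data are hypothetical, not drawn from the source:

```python
# Illustrative sketch: check that a predictor matches a job task on the
# three dimensions named above (conditions, actions, standards).
# All field names and example values are hypothetical.

def match_score(task: dict, predictor: dict) -> float:
    """Return the fraction of the three dimensions on which task and predictor agree."""
    dims = ("conditions", "actions", "standards")
    hits = sum(1 for d in dims if task.get(d) == predictor.get(d))
    return hits / len(dims)

task = {"conditions": "night shift", "actions": "operate lathe",
        "standards": "tolerance within 0.1 mm"}
work_sample = {"conditions": "night shift", "actions": "operate lathe",
               "standards": "tolerance within 0.1 mm"}
generic_test = {"conditions": "classroom", "actions": "answer questions",
                "standards": "70% correct"}

print(match_score(task, work_sample))   # full criterion match
print(match_score(task, generic_test))  # no dimension matches
```

A work sample scores 1.0 on all three dimensions, while a generic classroom test scores 0.0, mirroring the chapter's point that predictors should be criterion-referenced to the job task.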
3.
TRAINING
3.1.
Differences between Training and Education
Training (the systematic, structured development of specific skills required to perform job tasks) differs from education (the development of a broad-based informational background and general skills). Most people are more familiar with education than training because (a) their experiences from elementary school through college usually fall within an educational model, and (b) much of what is called "training" in business and industry is actually more akin to education than training. Education is important, even essential, for building broad skill areas, but training is the approach of preference for preparing people to perform specific tasks or jobs. Table 1 shows differences between education and training and between educators and trainers. Not all differences apply to any specific instance of training or education, but as a whole they illustrate the distinction between the two.
3.2.
Training, Learning, Performance, and Outcomes
The purpose of training is to facilitate learning the skills and knowledges required to perform specific tasks. The outcome of training is acceptable performance on those tasks. People often learn to perform tasks without training, but if training is effective, it enables people to learn faster and better. For example, a person might be able to learn to apply the principle of cantilevering to bridge construction through reading and trial and error, but the process would be lengthier and the outcomes would probably be less precise than if the person received systematic, well-designed training. Because training is a subsystem of a larger organizational system, it should not be treated outside the context of the larger system. In organizations, training is usually requested in two types of circumstances: either current performance is inadequate or new performance is required. In other words, organizations train either to solve current problems or to take advantage of new opportunities. A second factor affecting training is whether it is intended for new or incumbent employees. Training for new opportunities is essentially the same for new and incumbent employees. However, new employees may require prerequisite or remedial training before receiving training on new tasks, whereas incumbents presumably would have already completed the prerequisite training (or would have otherwise mastered the necessary skills).
human performance-improvement interventions. Others (Mourier 1998) have also demonstrated successes using this approach. Job analysis has several purposes. One deals with determining how to apportion an organization's tasks among jobs. This process identifies the tasks that belong with each job, the goal being to reduce overlap while providing clean demarcations between products and processes. From the perspective of an organization's human resources subsystem, job analysis can be used to organize tasks into clusters that take advantage of skill sets common to personnel classification categories (e.g., tool and die press operator, supervisory budget analyst) and consequently reduce the need for training. By distributing tasks appropriately among jobs, an organization may be able to select skilled job candidates, thereby reducing attendant training costs. A second purpose of job analysis involves job redesign. During the 1970s, researchers addressed various ways to redesign jobs so that the nature of the jobs themselves would promote performance leading to desired organizational outcomes. For example, Lawler et al. (1973) found that several factors, such as variety, autonomy, and task identity, influenced employee satisfaction and improved job performance. Herzberg (1974) identified other factors, such as client relationship, new learning, and unique expertise, having similar effects on satisfaction and performance. Peterson and Duffany (1975) provide a good overview of the job-redesign literature. Needs analysis (sometimes called needs assessment) examines what should be done so that employees can better perform their jobs. Needs analysis focuses on outcomes to determine optimal performance for jobs. Rossett (1987) has provided a detailed needs-analysis technique. Instead of needs analysis, many organizations unfortunately conduct a kind of "wants analysis," a process that asks employees and/or supervisors to state what is needed to better perform jobs. Because employees and managers frequently cannot distinguish between their wants and their needs, wants analysis typically yields a laundry list of information that is not linked well to performance outcomes. Finally, task analysis is a technique that determines the inputs, tools, and skills/knowledge necessary for successful task performance. In training situations, task analysis is used to determine the skill/knowledge requirements for personnel to accomplish necessary job outcomes. Skills gaps are defined as the difference between required skills/knowledge and those possessed by the individuals who are (or will be) performing jobs. Training typically focuses on eliminating these skills gaps. Helpful references for conducting task analyses include Carlisle (1986) and Zemke and Kramlinger (1982).
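The skills-gap definition above reduces naturally to a set difference. A minimal sketch follows; the skill names are hypothetical examples, not from the source:

```python
# Illustrative sketch: a skills gap is the set of required skills/knowledge
# not yet possessed by the (current or future) job incumbent.
# Skill names are hypothetical.

required = {"blueprint reading", "lathe setup", "SPC charting", "tolerancing"}
possessed = {"blueprint reading", "lathe setup"}

skills_gap = required - possessed  # training should target exactly this set
print(sorted(skills_gap))
```

Framing the gap this way makes the chapter's point concrete: training content is derived from the difference between task-analysis requirements and incumbent capabilities, not from a wish list.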
3.4.
Training Design and Development
A comprehensive reference work on training design and development (Goldstein 1993) refers to what is generally known as a systems approach to training. Among many other things, this approach includes four important components:
1. Feedback is continually used to modify the instructional process; thus, training programs are never completely finished products but are always adaptive to information concerning the extent to which programs meet stated objectives.
2. Recognition exists that complicated interactions occur among components of training, such as media, individual characteristics of the learner, and instructional strategies.
3. The systems approach to training provides a framework for reference to planning.
4. This view acknowledges that training programs are merely one component of a larger organizational system involving many variables, such as personnel issues, organization issues, and corporate policies.
Thus, in initiating training program development, it is necessary, according to Goldstein, to consider and analyze not only the tasks that comprise the training, but also the characteristics of the trainees and organizations involved. This approach to instruction is essentially derived from procedures developed in behavioral psychology. Two generic principles underlie technological development in this area: first, the specific behaviors necessary to perform a task must be precisely described, and second, feedback (reinforcement for action) is utilized to encourage mastery. This (and previous work) served as a partial basis for the military's instructional systems development (ISD) movement (Branson et al. 1975) and the subsequent refinement of procedures for criterion-referenced measurement technologies (Glaser 1963; Mager 1972; Swezey 1981). For a discussion of the systems approach to training, we briefly review the Branson et al. (1975) ISD model. This model is possibly the most widely used and comprehensive method of training development. It has been in existence for over 20 years and has revolutionized the design of instruction in many military and civilian contexts. The evolutionary premises behind the model are that performance objectives are developed to address specific behavioral events (identified by task analyses), that criterion tests are developed to address the training objectives, and that training is essentially developed to teach students to pass the tests and thus achieve the requisite criteria.
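The ISD premise just stated (objectives derive from task analysis, and every objective needs a criterion test) can be sketched as a traceability check. This is an illustrative sketch only; the tasks, objectives, and test descriptions are hypothetical:

```python
# Illustrative sketch of the ISD traceability chain described above:
# tasks yield performance objectives; each objective should be covered
# by a criterion test. All names below are hypothetical.

tasks = {"replace hydraulic pump": ["isolate pressure", "torque fittings"]}
criterion_tests = {"isolate pressure": "bleed-down check within 2 min"}

def untested_objectives(tasks, criterion_tests):
    """List performance objectives that lack a criterion test."""
    return [obj for objs in tasks.values() for obj in objs
            if obj not in criterion_tests]

print(untested_objectives(tasks, criterion_tests))
```

An empty result would indicate that every objective is criterion-referenced; here the check flags the uncovered objective, the kind of gap the ISD model is designed to close before training is developed.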
3.5.
Training Techniques, Strategies, and Technologies
Two factors known to affect the quality of training are the techniques used to deliver the instruction and the methods or strategies used to convey the instructional content. One of the main tasks in the development of training, therefore, is the selection and implementation of appropriate instructional methods and techniques. One function of technology-assessment research is to provide a basis for estimating the utility of particular technology-based approaches for training. Researchers have analyzed virtually every significant technology trend of the last two decades, including effects of programmed instruction (Kulik et al. 1980), computer-based instruction (Kulik and Kulik 1987), and visual-based instruction (Cohen et al. 1981). The majority of these studies show no significant differences among technology groups and, when pooled, yield only slight advantages for innovative technologies. Kulik and associates report that the size of these statistically significant gains is on the order of 1.5 percentage points on a final exam. Cohen et al. (1981) conducted an analysis of 65 studies in which student achievement using traditional classroom-based instruction was compared with performance across a variety of video-based instructional media, including films, multimedia, and educational TV. They found that only one in four studies (approximately 26%) reported significant differences favoring visual-based instruction. The overall effect size reported by Cohen et al. was relatively small compared to studies of computer-based instruction or of interactive video. Computer-based instruction and interactive video both represent areas where some data reporting advantages have emerged; specifically, reductions in the time required to reach threshold performance levels (Fletcher 1990; Kulik and Kulik 1987). Results of an analysis of 28 studies, conducted by Fletcher (1990), suggested that interactive video-based instruction increases achievement by an average of 0.50 standard deviations over conventional instruction (lecture, text, on-the-job training, videotape). Results of decades of work in the area of media research led Clark (1983, 1994) to conclude that media have no significant influence on learning effectiveness but are mere vehicles for presenting instruction. According to Clark (1983, p. 445), "the best current evidence is that media are mere vehicles that deliver instruction but do not influence student achievement any more than the truck that delivers our groceries causes changes in our nutrition." Similarly, Schramm (1977) commented that learning seems to be affected more by what is delivered than by the delivery system. Although students may prefer sophisticated and elegant media, learning appears largely unaffected by these features. Basically, all media can deliver either excellent or ineffective instruction. It appears that it is the instructional methods, strategies, and content that facilitate or hinder learning. Thus, many instructional technologies are considered to be equally effective provided that they are capable of dealing with the instructional methods required to achieve intended training objectives. Meanwhile, teaching methods appear to be important variables influencing the effectiveness of instructional systems. Instructional methods define how the process of instruction occurs: what information is presented, in what level of detail, how the information is organized, how information is used by the learner, and how guidance and feedback are presented. The choice of a particular instructional method often limits the choice of presentation techniques. Technology-selection decisions, therefore, should be guided by: (1) the capacity of the technology to accommodate the instructional method, (2) the compatibility of the technology with the user environment, and (3) the trade-offs that must be made between technology effectiveness and costs. Conventional instructional strategies such as providing advance organizers (Allen 1973), identifying common errors (Hoban and Van Ormer 1950), and emphasizing critical elements in a demonstration (Travers 1967) work well with many technology forms, including video-based instructional technologies
concept has significant implications for predicting trainee performance. Although a typical method employed to predict trainee performance involves monitoring acquisition data during training, using immediate past performance data as predictors to estimate future retention performance, this strategy may not appropriately index performance on the job or outside of the training environment. Further, initial performance of complex skills tends to be unstable and is often a poor indicator of final performance. Correlations between initial and final performance levels for a grammatical reasoning task, for example, reached only 0.31 (Kennedy et al. 1980), a moderate level at best. Using initial performance during learning as a predictor, therefore, may lead to inconsistent and/or erroneous training prescriptions. Skill acquisition is believed by many to proceed in accordance with a number of stages or phases of improvement. Although the number of stages and their labels differ among researchers, the existence of such stages is supported by a large amount of research and theoretical development during the past 30 years. Traditional research in learning and memory posits a three-stage model for characterizing associative-type learning involving the process of differentiating between various stimuli or classes of stimuli to which responses are required (stimulus discrimination), learning the responses (response learning), and connecting the stimulus with a response (association). Empirical data suggest that this process is most efficient when materials are actively processed in a meaningful manner, rather than merely rehearsed via repetition (Craik and Lockhart 1972; Cofer et al. 1966). Anderson (1982) has also proposed a three-stage model of skill acquisition, distinguishing among cognitive, associative, and autonomous stages. The cognitive stage corresponds to early practice, in which a learner exerts effort to comprehend the nature of the task and how it should be performed.
In this stage, the learner often works from instructions or an example of how a task is to be performed (i.e., modeling or demonstration). Performance may be characterized by instability and slow growth, or by extremely rapid growth, depending upon task difficulty and the degree of prior experience of the learner. By the end of this stage, learners may have a basic understanding of task requirements, rules, and strategies for successful performance; however, these may not be fully elaborated. During the associative stage, declarative knowledge associated with a domain (e.g., facts, information, background knowledge, and general instruction about a skill acquired during the previous stage) is converted into procedural knowledge, which takes the form of what are called production rules (condition-action pairs). This process is known as knowledge compilation. It has been estimated that hundreds, or even thousands, of such production rules underlie complex skill development (Anderson 1990). Novice and expert performance is believed to be distinguished by the number and quality of production rules: experts are believed to possess many more elaborate production rules than novices (Larkin 1981). Similarly, Rumelhart and Norman (1978) recognized three kinds of learning processes: (1) the acquisition of facts in declarative memory (accretion), (2) the initial acquisition of procedures in procedural memory (restructuring), and (3) the modification of existing procedures to enhance reliability and efficiency (tuning). Kyllonen and Alluisi (1987) reviewed these concepts in relation to learning and forgetting facts and skills. Briefly, they suggest that new rules are added to established production systems through the process of accretion, fine-tuned during the process of tuning, and subsequently reorganized into more compact units during the restructuring process. Rasmussen (1979) has also distinguished among three categories, or modes, of skilled behavior: skill based, rule based, and knowledge based. Skill-based tasks are composed of simple stimulus-response behaviors, which are learned by extensive rehearsal and are highly automated. Rule-based behavior is guided by conscious control and involves the application of appropriate procedures based on unambiguous decision rules. This process involves the ability to recognize specific, well-defined situations that call for one rule rather than another. Knowledge-based skills are used in situations in which familiar cues are absent and clear and definite rules do not always exist. Successful performance involves the discrimination and generalization of rule-based learning. Rasmussen proposes that, in contrast to Anderson's model, performers can move among these modes of performance as dictated by task demands. This general framework is useful in conceptualizing links between task content and the type of training required for proficiency in complex tasks.
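The production rules (condition-action pairs) discussed above can be illustrated with a minimal forward-chaining sketch. The rules and working-memory contents here are hypothetical examples for intuition, not drawn from the source:

```python
# Illustrative sketch: production rules as condition-action pairs.
# A rule "fires" when all of its conditions are present in working memory.
# Rule contents are hypothetical.

rules = [
    ({"light is red"}, "brake"),
    ({"light is green", "intersection clear"}, "proceed"),
]

def fire(working_memory: set) -> list:
    """Return the actions of every rule whose conditions are all satisfied."""
    return [action for conditions, action in rules
            if conditions <= working_memory]

print(fire({"light is green", "intersection clear"}))
print(fire({"light is red"}))
```

On this toy account, knowledge compilation corresponds to replacing slow, interpreted declarative knowledge with many such directly matched rules, which is why experts are described as holding far larger and more elaborate rule sets than novices.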
3.6.2.
Retention
Instructional designers must consider not only how to achieve more rapid, high-quality training, but also how well the skills taught during training will endure after acquisition. Further, what has been learned must be able to be successfully transferred or applied to a wide variety of tasks and job-specific settings. Swezey and Llaneras (1997) have recently reviewed this area. Retention of learned material is often characterized as a monotonically decreasing function of the retention interval, falling sharply during the time immediately following acquisition and declining more slowly as additional time passes (Wixted and Ebbesen 1991; Ebbinghaus 1913; McGeoch and Irion 1952). There is general consistency in the loss of material over time. Subjects who have acquired a set of paired items, for example, consistently forget about 20% after a single day and approximately 50% after one week (Underwood 1966). Bahrick (1984), among others, demonstrated that although large parts of acquired knowledge may be lost rapidly, significant portions can also endure for extended intervals, even if not intentionally rehearsed.
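As a rough illustration of such a retention function, the two data points quoted above (about 80% retained after one day, roughly 50% after one week) can be fit with an Ebbinghaus-style power law. This is a sketch for intuition only; the power-law form is an assumption, not a model endorsed by the source:

```python
import math

# Illustrative sketch: fit R(t) = a * t**(-b) through two retention points,
# here 80% at day 1 and 50% at day 7 (figures quoted in the text).

def fit_power_law(t1, r1, t2, r2):
    """Solve for a and b so the curve passes through both observations."""
    b = math.log(r1 / r2) / math.log(t2 / t1)
    a = r1 * t1**b
    return a, b

a, b = fit_power_law(1, 0.80, 7, 0.50)
for day in (1, 7, 30):
    print(day, round(a * day**(-b), 2))  # e.g., about 0.35 retained at day 30
```

The fitted curve drops steeply at first and flattens thereafter, matching the monotonically decreasing, negatively accelerated shape the chapter describes.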
Evidence suggests that the rate at which skills and knowledge decay in memory varies as a function of the degree of original learning; decay is slower if material has previously been mastered than if lower-level acquisition criteria were imposed (Loftus 1985). The slope and shape of retention functions also depend upon the specific type of material being tested as well as upon the methods used to assess retention. As the meaningfulness of material to the student increases, for example, the rate of forgetting appears to slow. Further, recognition performance may be dramatically better than recall, or vice versa, depending simply upon how subjects are instructed (Tulving and Thomson 1973; Watkins and Tulving 1975). Also, various attributes of the learning environment, such as conditions of reinforcement, characteristics of the training apparatus, and habituation of responses, appear to be forgotten at different rates (Parsons et al. 1973). Retention, like learning and motivation, cannot be observed directly; it must be inferred from performance following instruction or practice. To date, no uniform measurement system for indexing retention has been adopted. General indices of learning and retention used in the research literature over the past 100 years include a huge variety of methods for measuring recall, relearning, and recognition. Luh (1922), for instance, used five different measures to index retention: recognition, reconstruction, written reproduction, recall by anticipation, and relearning. The list of variables known to reliably influence the rate of forgetting of learned knowledge and skills is relatively short. In a recent review, Farr (1987) surveyed the literature and identified a list of
3.6.3.
Transfer
The topic of transfer-of-training is integrally related to other training issues, such as learning, memory, retention, cognitive processing, and conditioning; these fields make up a large subset of the subject matter of applied psychology (Swezey and Llaneras 1997). In general, the term transfer-of-training concerns the way in which previous learning affects new learning or performance. The central question is how previous learning transfers to a new situation. The effect of previous learning may function either to improve or to retard new learning. The first of these is generally referred to as positive transfer, the second as negative transfer. (If new learning is unaffected by prior learning, zero transfer is said to have occurred.) Many training programs are based upon the assumption that what is learned during training will transfer to new situations and settings, most notably the operational environment. Although U.S. industries are estimated to spend billions annually on training and development, only a fraction of these expenditures (not more than 10%) are thought to result in performance transfer to the actual job situation (Georgenson 1982). Researchers, therefore, have sought to determine the fundamental conditions or variables that influence transfer-of-training and to develop comprehensive theories and models that integrate and unify knowledge about these variables.
Two major historical viewpoints on transfer exist. The identical-elements theory (first proposed by Thorndike and Woodworth 1901) suggests that transfer occurs in situations where identical elements exist in both the original and transfer situations. Thus, in a new situation, a learner presumably takes advantage of what the new situation offers that is in common with the learner's earlier experiences. Alternatively, the transfer-through-principles perspective suggests that a learner need not necessarily be aware of similar elements in a situation in order for transfer to occur. This position suggests that previously used principles may be applied to occasion transfer. A simple example involves the principles of aerodynamics, learned from kite flying by the Wright brothers, and the application of these principles to airplane construction. Such was the position espoused by Judd (1908), who suggested that what makes transfer possible is not the objective identities between two learning tasks, but the appropriate generalization in the new situation of principles learned in the old. Hendriksen and Schroeder (1941) demonstrated this transfer-of-principles philosophy in a series of studies related to the refraction of light. Two groups were given practice shooting at an underwater target until each was able to hit the target consistently. The depth of the target was then changed, and one group was taught the principles of the refraction of light through water while the second was not. In a subsequent session of target shooting, the trained group performed significantly better than did the untrained group. Thus, it was suggested that it may be possible to design effective training environments without a great deal of concern about similarity to the transfer situation, as long as relevant underlying principles are utilized. A model developed by Miller (1954) attempted to describe the relationship between simulation fidelity and training value in terms of cost. Miller hypothesized that as the degree of fidelity in a simulation increased, the cost of the associated training would increase as well. At low levels of fidelity, very little transfer value is gained with incremental increases in fidelity. At greater levels of fidelity, however, larger transfer gains can be made from small increments in fidelity. Thus, Miller hypothesized a point of diminishing returns, where gains in transfer value are outweighed by higher costs. According to this view, changes in the requirements of training should be accompanied by corresponding changes in the degree of fidelity in simulation if adequate transfer is to be provided. Although Miller did not specify the appropriate degree of simulation for various tasks, subsequent work (Alessi 1988) suggests that the type of task and the trainee's level of learning are two parameters that interact with Miller's hypothesized relationships. To optimize the relationship among fidelity, transfer, and cost, therefore, one must first identify the amount of simulation fidelity required to obtain a large amount of transfer and the point where additional increments of transfer are not worth the added costs. Another model, developed by Kinkade and Wheaton (1972), distinguishes among three components of simulation fidelity: equipment fidelity, environmental fidelity, and psychological fidelity. Equipment fidelity refers to the degree to which a training device duplicates the appearance and feel of the operational equipment. This characteristic of simulators has also been termed physical fidelity. Environmental, or functional, fidelity refers to the degree to which a training device duplicates the sensory stimulation received from the task situation. Psychological fidelity (a phenomenon that Parsons [1980] has termed "verisimilitude") addresses the degree to which a trainee perceives the training device as duplicating the operational equipment (equipment fidelity) and the task situation (environmental fidelity). Kinkade and Wheaton postulated that the optimal relationship among levels of equipment, environmental, and psychological fidelity varies as a function of the stage of learning. Thus, different degrees of fidelity may be appropriate at different stages of training. The relationship between degree of fidelity and amount of transfer is complex. Fidelity and transfer relationships have been shown to vary as a function of many factors, including instructor ability, instructional techniques, types of simulators, student time on trainers, and measurement techniques (Hays and Singer 1989).
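Miller's diminishing-returns argument can be given a concrete, purely illustrative form: with a saturating transfer curve and a linearly rising cost in fidelity, the optimum is where marginal transfer gain falls below marginal cost. The functional forms and constants below are assumptions for illustration, not Miller's:

```python
import math

# Illustrative sketch of Miller's (1954) diminishing-returns argument.
# Assumed forms (not from Miller): transfer saturates with fidelity f in [0, 1],
# while cost grows linearly. Constants are arbitrary.

def transfer_value(f):
    """Saturating benefit of added fidelity."""
    return 1.0 - math.exp(-3.0 * f)

def cost(f):
    """Training cost rising with fidelity."""
    return 0.8 * f

# Net value peaks where marginal transfer gain equals marginal cost;
# search a coarse grid of fidelity levels.
best_f = max((f / 100 for f in range(101)),
             key=lambda f: transfer_value(f) - cost(f))
print(round(best_f, 2))
```

Under these assumed curves the optimum falls at an intermediate fidelity level: below it, cheap fidelity increments still buy transfer; above it, added fidelity costs more than the transfer it yields, which is the trade-off Miller hypothesized.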
information, and overall team effectiveness (Morgan and Lassiter 1992). However, the unique aspects associated with larger teams may do more harm than good, because an increase in size may also pose problems. Various studies (Indik 1965; Gerard et al. 1968) have found that problems in communication and level of participation may increase with increases in team size. These two issues, when combined, may diminish team performance by placing increased stress on a team in the areas of coordination and communication workload (Shaw 1976). Task-goal characteristics are another important contributor to team performance. Given a task that varies in difficulty, it has been found that goals that appear to be challenging and that target a specific task tend to have a higher motivational impact than those that are more easily obtainable (Ilgen et al. 1987; Gladstein 1984; Goodman 1986; Steiner 1972). According to Ilgen et al. (1987), motivation directs how much time an individual is willing to put forth in accomplishing a team task. The use of feedback is another factor that influences team motivation and performance. Feedback helps to motivate an individual concerning recently performed tasks (Salas et al. 1992). Researchers concur that feedback should be given within a short period of time after the relevant performance (Dyer 1984; Nieva et al. 1978). The sequence and presentation of feedback may also play significant roles in motivation. Salas et al. (1992) have noted that during the early stages of training,
feedback should concentrate on one aspect of performance; however, later in time it should concentrate on several training components. It has been found that sequencing helps team training in that teams adjust to incorporate their feedback into the task(s) at hand (Briggs and Johnston 1967). According to Salas and Cannon-Bowers (1995), a variety of methods and approaches are currently under development for use in building effective team training programs. One such technique is the developing technology of team task analysis (Levine and Baker 1991), a technique that, as it matures, may greatly facilitate the development of effective team training. The technology provides a means to distinguish team learning objectives for effective performance from individual objectives and is seen as a potential aid in identifying the teamwork-oriented behaviors (i.e., cues, events, actions) necessary for developing effective team training programs. A second area is team-performance measurement. To be effective, team-performance measurement must assess the effectiveness of various teamwork components (such as team-member interactions) as well as intrateam cognitive and knowledge activities (such as decision making and communication) in the context of assessing overall performance of team tasks. Hall et al. (1994) have suggested the need for integrating team performance outcomes into any teamwork-measurement system. As team feedback is provided, it is also necessary to estimate the extent to which an individual team member is capable of performing his or her specific tasks within the team. Thus, any competently developed team performance-measurement system must be capable of addressing both team capabilities and individual capabilities, separately and in combination. Teamwork simulation exercises are a third developing technology cited by Salas and Cannon-Bowers (1995). The intent of such simulations is to provide trainees with direct behavioral cues designed to trigger competent teamwork behaviors. Essential components of such simulations include detailed scenarios or exercises in which specific teamwork learning objectives are operationalized and incorporated into training. Salas and Cannon-Bowers (1995) have also identified three generic training methods for use with teams: information based, demonstration based, and practice based. The information-based method involves the presentation of facts and knowledge via such standard delivery techniques as lectures, discussions, and overheads. Using such group-based delivery methods, one can deliver relevant information simultaneously to large numbers of individuals. The methods are easy to use, and costs are usually low. Information-based methods may be employed in many areas, such as helping teammates understand what is expected of them, what to look for in specific situations, and how and when to exchange knowledge. Demonstration-based methods are performed, rather than presented as information-based methods are. They offer students an opportunity to observe behaviors of experienced team members and thus the behaviors expected of them. Such methods help to provide shared mental models among team members, as well as examples of how one is expected to handle oneself within complex, dynamic, and multifaceted situations (Salas and Cannon-Bowers 1995). Practice-based methods are implemented via participatory activities such as role playing, modeling, and simulation techniques. These methods provide opportunities to practice specific learning objectives and receive feedback information. With such methods, trainees can build upon previous practice attempts until achieving the desired level(s) of success. A final category of developmental teamwork technologies, according to Salas and Cannon-Bowers (1995), involves teamwork training implementation strategies. These authors discuss two such strategies: cross-training and team coordination training. In their view, cross-training, training all team members to understand and perform each other's tasks, is an important strategy for integrating inexperienced members into experienced groups. Team coordination training addresses the issue that each team member has specific duties and that those duties, when performed together, are designed to provide a coordinated output. Team coordination training involves the use of specific task-oriented strategies to implement coordination activities and has been successfully applied in the aviation and medical fields.
3.8. Training Evaluation
Training evaluation may serve a variety of purposes, including improving training and assessing the benefits of training. Formative evaluation consists of processes conducted while training is being developed. Summative evaluation consists of processes conducted after training has been implemented. Because training typically does not remain static (a job's tasks are often modified, replaced, or added), the distinction between formative evaluation and summative evaluation is sometimes blurred (see Goldstein 1993). Formative evaluation focuses on improving training prior to implementation. It may also serve the purpose of assessing benefits of training in a preliminary way. Summative evaluation focuses on assessing the benefits of training to determine how training outcomes improve on-the-job performance. It may also serve the purpose of improving future training iterations. In terms
Figure 1 Evaluation's Role in an ISD Model (a cycle: Development, Evaluation, Implementation).
TABLE 2  Kirkpatrick's Levels Applied to Formative and Summative Evaluation

Level 1. Reaction: Participants' reaction to course (customer satisfaction).
  Formative: Dog food principle: even if it's very nutritious, if the dogs don't like it, they won't eat it. Make sure participants are satisfied with the course.
  Summative: Reactions can be used to distinguish between instructors, setting, audience, and other factors (e.g., are ratings for some instructors consistently higher than for others?).

Level 2. Learning: Change measured at end of course.
  Formative: Does the course meet its learning objectives? If it doesn't, revise and repilot test until it does.
  Summative: Use end-of-course measures to determine needs for remediation and for improving future iterations of the course.

Level 3. Behavior: Change measured on the job.
  Formative: Typically not included in formative evaluation.
  Summative: If the course doesn't change behavior on the job, the investment in development has been wasted. Analyze causes for lack of change. Modify course, motivational factors, and/or environmental factors as necessary.

Level 4. Results: Impact of change on job outcomes.
  Formative: Typically not included in formative evaluation.
  Summative: If the behavior change occurs but does not improve the job outcomes it targeted, the training may be flawed and/or external factors may be affecting the outcome. Reanalyze needs.
require an experiment. Table 3 shows an example of an experimental design permitting a level 3 evaluation. Using the design shown in Table 3, training has been randomly assigned to group B. Baseline performance of both groups is then measured at the same time, before group B receives training. If the assignment is truly random, there should be no significant differences between the performance of trainees in groups A1 and B1. At the same point after training, job performance of both groups is again measured. If all other factors are equal (as they should be in random assignment), differences between groups A2 and B2 will show the effect of training on job behavior. Depending on the type of measurement, one may use various statistical indices, such as a chi-squared test or phi coefficient, as a test of statistical significance (see Swezey 1981 for a discussion of this issue). This design does not use level 2 evaluation data. However, by modifying the design so that measures of A1 and B1 behaviors occur at the end of training (rather than prior to training) and measures of A2 and B2 occur later (e.g., at some interval after training), one can use level 2 evaluation data to assess the relative effect of nontraining factors on job performance. In this case (level 2 evaluation), B1 performance should be significantly higher than A1 performance. But B2 performance may not be significantly higher than A2 performance, depending upon whether nontraining factors inhibit job performance. Again, statistical comparisons are used to determine significance of the effect(s).
TABLE 3  Experimental Design for Level 3 Evaluation

Points of Measurement    No Training    Training
Before training          A1             B1
After training           A2             B2
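The group comparison described above can be sketched numerically. A minimal sketch, using hypothetical pass/fail counts (not data from the source) and a hand-coded 2 x 2 chi-squared statistic of the kind the text mentions:

```python
# Minimal sketch (hypothetical counts): comparing post-training job
# performance of the untrained group (A2) and the trained group (B2)
# with a 2 x 2 chi-squared test, as suggested for a level 3 evaluation.

def chi_squared_2x2(a, b, c, d):
    """Chi-squared statistic for the 2 x 2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    # Expected counts come from the row and column totals.
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical data: pass/fail counts on a job-performance measure.
# A2 (no training): 12 pass, 28 fail; B2 (training): 25 pass, 15 fail.
stat = chi_squared_2x2(12, 28, 25, 15)
print(round(stat, 2))  # compare with 3.84, the .05 critical value for df = 1
```

A statistic above the critical value would indicate a significant training effect; the phi coefficient the text also mentions can be obtained from the same table as sqrt(chi-squared / n).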
1. Those that address specific job skills, e.g., "can calibrate a variety of instruments used to measure tensile strength"
2. Those that address traits (sometimes disguised in behavioral terms), e.g., "demonstrates the ability to work well with others in a variety of situations"

Although training may be an appropriate development activity for the first type of competency, the second may not benefit from any development activity. For development activities to be useful, their outcomes must be clearly specified. If "works well with others" is defined in operational terms (e.g., others report that the individual contributes well in team situations), then development activities, such as training, coaching, and mentoring, may be appropriate. A variety of experiential forms of training are in use as development activities. These range from wilderness survival courses to games and specially designed on-the-job training experiences that focus on an individual's performance. Spitzer (1986) reviews the preparation, conduct, and follow-up of a variety of specially designed on-the-job training experiences. Most such experiential approaches have not been validated via careful research. Thiagarajan (1980) also summarizes advantages and disadvantages of games and other forms of experiential learning. He notes that, while experiential learning is often shown to produce the same results as more conventional forms of training, it occasionally leads to poorer skill acquisition. However, he argues that because such experiential techniques are often fun, they may gain trainee acceptance and therefore be a good vehicle for individual skill enhancement. We tend to disagree with this view. Coaching and mentoring are other methods of performance enhancement used in development situations. In coaching, a coach works with a student during one or more sessions to help the student develop and apply skills on the job. Pearlstein and Pearlstein (1991) present an ISD-based approach to coaching, and Fournies (1987) describes a wide variety of contexts for coaching. Generally, mentoring is a long-term relationship between an experienced and a less-experienced individual that is designed to bring about changes in behavior via counseling, feedback, and a variety of other techniques. Murray (1991) provides an overview of factors and techniques contributing to effective mentoring. Career planning as a development activity can also be an effective means of enhancing performance. In career planning, individuals and their supervisors and/or human resources departments develop and plan a series of steps to help align individual goals with organizational goals and career progression. These steps may include training and education (either on or off the job), coaching, mentoring, special assignments, and job rotations. Schein (1978) provides a detailed model for career planning. While most employee-assessment systems appear to be a sound basis for individual performance enhancement, their unsuccessful implementation may work against job performance. According to Scherkenbach (1988), Deming's total quality management (TQM) process argues against such systems precisely because poor implementation can cause major roadblocks to success of both individuals and organizations. Since most employee-assessment systems are linked to pay decisions, employees who admit to development needs may, in effect, be arguing against raises. This link thus creates both unsuccessful implementations and possible conflicts of interest.
4.2. Organizational Performance Enhancement
Two commonly practiced forms of organizational performance enhancement are manag
ement and leadership development and organization development. Management and le
adership development focuses on improving organizational outcomes by improving t
he performance of those responsible for the outcomes. Organization development f
ocuses on developing a total organization, as opposed to some particular aspect(
s) of the organization, such as its leadership, management, or employee skills (
see Gallessich 1982; Vaill 1971). Pearlstein (1991) argues that leadership devel
forms, tests, and interviews will also become increasingly available in multilingual and multicultural versions. Further, simulations and work sample assessments will be increasingly used, also in different culturally oriented and language-based forms.
5.2. Training
The basic development process for training is likely to remain some form of systems-oriented instructional model, although cognitively oriented technologies will increasingly be developed. The well-researched ISD approach has been adapted for use in many cultures and leads to reliable outcomes regardless of the specific type of technology employed. As cognitive approaches become better able to predict factors that build expertise, they will be more frequently used in task analysis. Results may be used to design training that is more useful for "far transfer" situations, that is, for tasks relatively dissimilar to those used as examples during training (see Clark 1992). Still, results of cognitive analysis are likely to be used within the context of a systems-oriented instructional model. Training delivery and administration systems are very likely candidates for change. Computer-based training (CBT), now widely available in CD-ROM format, is increasingly available in other disk-based formats (e.g., DVD) and on the Internet as Web-based training (WBT). Limitations on processing power, memory, storage, and bandwidth are all decreasing rapidly. This is leading to an increased ability for CBT, regardless of its mode of delivery, to offer high levels of simulation. Streaming audio and streaming video, voice recognition, and other technological advances are rapidly improving simulation fidelity. Similarly, advances in manipulanda (e.g., tactile gloves, aircraft yokes) are also increasing simulation fidelity. The result is that training delivered to remote sites can use both simulation and work sample presentation more effectively. Other distance learning approaches, such as virtual classrooms, are being used increasingly. Both synchronous (real-time classroom) and asynchronous (classes via forums and message boards) types of distance learning will become widely used. For example, it is likely that organizations will increasingly incorporate university courses directly into work environments. This can be done globally, with, for example, Saudi managers taking economics courses offered by Japanese universities. There are negative aspects to the rapid spread of distance learning techniques. Often media values are emphasized at the expense of sound instructional practice. For example, much CBT available today does not make use of the technology for branching to different points in instruction based upon learner responses. Although the technology for branching has existed since the 1960s (see, e.g., Markle 1969), much CBT available today is linear. Although some argue that learner control (the ability to move through lessons at will) is more important than programmed control, research findings suggest otherwise. Clark (1989), for instance, showed that given a choice of approach (structured or unstructured), CBT users were more likely to choose the alternative least useful for skill acquisition. Novice learners, who may have benefited from a structured approach, often chose an unstructured one, while experts, who could use their advanced ability to explore advantageously, were more likely to choose structured approaches. Similarly, the advance of virtual classrooms in corporate settings may lead to an emphasis on education, as opposed to training. Thus, employees may learn more generalities about topics, at the expense of learning how to perform tasks. One recent development in training has been slow to evolve. Electronic performance support systems (EPSS), which combine online help, granular training, and expert advice functions, were popularized by Gery (1991). Although used in some corporations today, EPSS use does not seem to be accelerating, possibly because development costs are relatively high. In an EPSS, help is dir
Cofer, C. N., Bruce, D. R., and Reicher, G. M. (1966), "Clustering in Free Recall as a Function of Certain Methodological Variations," Journal of Experimental Psychology, Vol. 71, pp. 858–866.
Cohen, P. A., Ebeling, B. J., and Kulik, J. A. (1981), "A Meta-analysis of Outcome Studies of Visual-Based Instruction," Educational Communication and Technology Journal, Vol. 29, pp. 26–36.
Cormier, S. M. (1984), "Transfer-of-Training: An Interpretive Review," Tech. Rep. No. 608, Army Research Institute, Alexandria, VA.
Craik, F. I. M., and Lockhart, R. S. (1972), "Levels of Processing: A Framework for Memory Research," Journal of Verbal Learning and Verbal Behavior, Vol. 11, pp. 671–684.
Druckman, D., and Bjork, R. A., Eds. (1991), In the Mind's Eye: Enhancing Human Performance, National Academy Press, Washington, DC.
Dwyer, D., Hall, J., Volpe, C., Cannon-Bowers, J. A., and Salas, E. (1992), "A Performance Assessment Task for Examining Tactical Decision Making under Stress," Tech. Rep. No. 92-002, U.S. Naval Training Systems Center, Orlando, FL.
Dyer, J. L. (1984), "Team Training and Performance: A State-of-the-Art Review," in Human Factors Review, F. A. Muckler, Ed., The Human Factors Society, Santa Monica, CA.
Ebbinghaus, H. (1913), Memory: A Contribution to Experimental Psychology, H. A. Ruger and C. E. Bussenius, Trans., Columbia University, New York (original work published 1885).
Farr, M. J. (1987), The Long-Term Retention of Knowledge and Skills: A Cognitive and Instructional Perspective, Springer-Verlag, New York.
Fletcher, J. D. (1990), "Effectiveness and Cost of Interactive Videodisc Instruction in Defense Training and Education," IDA Paper P-2372, Institute for Defense Analyses, Alexandria, VA.
Fournies, F. F. (1987), Coaching for Improved Work Performance, Liberty Hall Press, Blue Ridge Summit, PA.
Fraser, S. L., and Kroeck, K. G. (1989), "The Impact of Drug Screening on Selection Decisions," Journal of Business Psychology, Vol. 3, pp. 403–411.
Gagné, E. D. (1978), "Long-Term Retention of Information Following Learning from Prose," Review of Educational Research, Vol. 48, pp. 629–665.
Gallessich, J. (1982), The Profession and Practice of Consultation, Jossey-Bass, San Francisco.
Georgenson, D. L. (1982), "The Problem of Transfer Calls for Partnership," Training and Development Journal, Vol. 36, No. 10, pp. 75–78.
Gerard, H. B., Wilhelmy, R. A., and Conolley, E. S. (1968), "Conformity and Group Size," Journal of Personality and Social Psychology, Vol. 8, pp. 79–82.
Gery, G. J. (1991), Electronic Performance Support Systems: How and Why to Remake the Workplace through the Strategic Application of Technology, Weingarten, Boston.
Gilbert, T. F. (1978), Human Competence: Engineering Worthy Performance, McGraw-Hill, New York.
Gladstein, D. L. (1984), "Groups in Context: A Model of Task Group Effectiveness," Administrative Science Quarterly, Vol. 29, pp. 499–517.
Glaser, R. B. (1963), "Instructional Technology and the Measurement of Learning Outcomes: Some Questions," American Psychologist, Vol. 18, pp. 519–521.
Goldstein, I. L. (1993), Training in Organizations, 3rd Ed., Brooks/Cole, Pacific Grove, CA.
Goodman, P. S., Ed. (1986), Designing Effective Work Groups, Jossey-Bass, San Francisco.
Guion, R. M. (1991), "Personnel Assessment, Selection, and Placement," in Handbook of Industrial and Organizational Psychology, M. D. Dunnette and L. M. Hough, Eds., Consulting Psychologists Press, Palo Alto, CA, pp. 327–397.
Hall, J. K., Dwyer, D. J., Cannon-Bowers, J. A., Salas, E., and Volpe, C. E. (1994), "Toward Assessing Team Tactical Decision Making under Stress: The Development of a Methodology for Structuring Team Training Scenarios," in Proceedings of the 15th Annual Interservice/Industry Training Systems and Education Conference (Washington, DC), pp. 87–98.
Hammer, E. G., and Kleiman, L. A. (1988), "Getting to Know You," Personnel Administration, Vol. 34, pp. 86–92.
Harless, J. H. (1975), An Ounce of Analysis (Is Worth a Pound of Objectives), Harless Performance Guild, Newman, GA.
Hays, R. T. (1980), "Simulator Fidelity: A Concept Paper," Tech. Rep. No. 490, Army Research Institute, Alexandria, VA.
Hays, R. T., and Singer, M. J. (1989), Simulation Fidelity in Training System Design, Springer, New York.
Levine, E. L., and Baker, C. V. (1991), "Team Task Analysis: A Procedural Guide and Test of the Methodology," in Methods and Tools for Understanding Teamwork: Research with Practical Implications, E. Salas, Chair, Symposium presented at the 6th Annual Conference of the Society for Industrial and Organizational Psychology, St. Louis, MO.
Lintern, G. (1987), "Flight Simulation Motion Systems Revisited," Human Factors Society Bulletin, Vol. 30, No. 12, pp. 1–3.
Lippitt, R., Watson, J., and Westley, B. (1958), The Dynamics of Planned Change, Harcourt, Brace & World, New York.
Loftus, G. R. (1985), "Evaluating Forgetting Curves," Journal of Experimental Psychology: Learning, Memory, and Cognition, Vol. 11, pp. 397–406.
Luh, C. W. (1922), "The Conditions of Retention," Psychological Monographs, Vol. 31, No. 3 (Whole No. 142).
Mager, R. F. (1972), Goal Analysis, Lear Siegler/Fearon, Belmont, CA.
Mager, R. F., and Pipe, P. (1970), Analyzing Performance Problems or You Really Oughta Wanna, Fearon, Belmont, CA.
Markle, S. M. (1969), Good Frames and Bad: A Grammar of Frame Writing, 2nd Ed., John Wiley & Sons, New York.
Martin, C. L., and Nagao, D. H. (1989), "Some Effects of Computerized Interviewing on Job Applicant Responses," Journal of Applied Psychology, Vol. 75, pp. 72–80.
McCall, M. W., Lombardo, M. M., and Morrison, A. M. (1988), Lessons of Experience: How Successful Executives Develop on the Job, Lexington Books, Lexington, MA.
McGeoch, J. A., and Irion, A. L. (1952), The Psychology of Human Learning, 2nd Ed., Longmans, Green & Co., New York.
McHenry, J. J., Hough, L. M., Toquam, J. L., Hanson, M. L., and Ashworth, S. (1990), "Project A Validity Results: The Relationship between Predictor and Criterion Domains," Personnel Psychology, Vol. 43, pp. 335–354.
Miller, R. B. (1954), Psychological Considerations for the Design of Training Equipment, American Institutes for Research, Pittsburgh.
Morgan, B. B., Jr., and Lassiter, D. L. (1992), "Team Composition and Staffing," in Teams: Their Training and Performance, R. W. Swezey and E. Salas, Eds., Ablex, Norwood, NJ.
Morris, N. M., and Rouse, W. B. (1985), "Review and Evaluation of Empirical Research in Troubleshooting," Human Factors, Vol. 27, No. 5, pp. 503–530.
Mourier, P. (1998), "How to Implement Organizational Change That Produces Results," Performance Improvement, Vol. 37, No. 7, pp. 19–28.
Murphy, K. R., Thornton, G. C., and Reynolds, D. H. (1990), "College Students' Attitudes toward Employee Drug Testing Programs," Personnel Psychology, Vol. 43, pp. 615–631.
Murray, M. (1991), Beyond the Myths and Magic of Mentoring: How to Facilitate an Effective Mentoring Program, Jossey-Bass, San Francisco.
Nieva, V. F., Fleishman, E. A., and Rieck, A. (1978), "Team Dimensions: Their Identity, Their Measurement, and Their Relationships," Contract No. DAHC19-78-C-0001, Response Analysis Corporation, Washington, DC.
O'Hara, J. M. (1990), "The Retention of Skills Acquired through Simulator-Based Training," Ergonomics, Vol. 33, No. 9, pp. 1143–1153.
Parsons, H. M. (1980), Aspects of a Research Program for Improving Training and Performance of Navy Teams, Human Resources Research Organization, Alexandria, VA.
Parsons, P. J., Fogan, T., and Spear, N. E. (1973), "Short-Term Retention of Habituation in the Rat: A Developmental Study from Infancy to Old Age," Journal of Comparative and Physiological Psychology, Vol. 84, pp. 545–553.
Pearlstein, G. B., and Pearlstein, R. B. (1991), "Helping Individuals Build Skills through ISD-Based Coaching," Paper presented at the Association for Education and Communication Technology Annual Conference (Orlando, FL, March).
Pearlstein, R. B. (1991), "Who Empowers Leaders?" Performance Improvement Quarterly, Vol. 4, No. 4, pp. 12–20.
Pearlstein, R. B. (1992), "Leadership Basics: Behaviors and Beliefs," Paper presented at the 30th Annual Conference of the National Society for Performance and Instruction (Miami).
Pearlstein, R. B. (1997), "Organizational Development for Human Performance Technologists," in The Guidebook for Performance Improvement: Working with Individuals and Organizations, R. Kaufman, S. Thiagarajan, and P. MacGillis, Eds., Pfeiffer, San Francisco.
Shaw, M. E. (1976), Group Dynamics: The Psychology of Small Group Behavior, McGraw-Hill, New York.
Shea, J. F., and Morgan, R. L. (1979), "Contextual Interference Effects on the Acquisition, Retention and Transfer of a Motor Skill," Journal of Experimental Psychology: Human Learning and Memory, Vol. 5, pp. 179–187.
Spitzer, D. R. (1986), Improving Individual Performance, Educational Technology Publications, Englewood Cliffs, NJ.
Steiner, I. D. (1972), Group Process and Productivity, Academic Press, New York.
Stogdill, R. M. (1974), Handbook of Leadership, Free Press, New York.
Streufert, S., and Swezey, R. W. (1985), "Simulation and Related Research Methods in Environmental Psychology," in Advances in Environmental Psychology, A. Baum and J. Singer, Eds., Erlbaum, Hillsdale, NJ, pp. 99–118.
Swezey, R. W. (1978), "Retention of Printed Materials and the Yerkes-Dodson Law," Human Factors Society Bulletin, Vol. 21, pp. 8–10.
Swezey, R. W. (1981), Individual Performance Assessment: An Approach to Criterion-Referenced Test Development, Reston Publishing, Reston, VA.
Swezey, R. W. (1989), "Generalization, Fidelity and Transfer-of-Training," Human Factors Society Bulletin, Vol. 32, No. 6, pp. 4–5.
Swezey, R. W., Hutcheson, T. D., Rohrer, M. W., Swezey, L. L., and Tirre, W. C. (1999), "Development of a Team Performance Assessment Device (TPAD): Final Report," InterScience America, Leesburg, VA (report prepared under Contract No. F41624-97-C-5006 with the U.S. Air Force Research Laboratory, Brooks AFB, TX).
Swezey, R. W., and Llaneras, R. E. (1992), "Validation of an Aid for Selection of Instructional Media and Strategies," Perceptual and Motor Skills, Vol. 74, p. 35.
Swezey, R. W., and Llaneras, R. E. (1997), "Models in Training and Instruction," in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 514–577.
Swezey, R. W., and Salas, E. (1992), "Guidelines for Use in Team Training Development," in Teams: Their Training and Performance, R. W. Swezey and E. Salas, Eds., Ablex, Norwood, NJ.
Swezey, R. W., Perez, R. S., and Allen, J. A. (1991), "Effects of Instructional Strategy and Motion Presentation Conditions on the Acquisition and Transfer of Electromechanical Troubleshooting Skill," Human Factors, Vol. 33, No. 3, pp. 309–323.
Swezey, R. W., Meltzer, A. L., and Salas, E. (1994), "Issues Involved in Motivating Teams," in Motivation: Theory and Research, H. F. O'Neil, Jr. and M. Drillings, Eds., Erlbaum, Hillsdale, NJ, pp. 141–170.
Tannenbaum, S. I., and Yukl, G. (1992), "Training and Development in Work Organizations," Annual Review of Psychology, Vol. 43, pp. 399–441.
Teather, D. C. B., and Marchant, H. (1974), "Learning from Film with Particular Reference to the Effects of Cueing, Questioning, and Knowledge of Results," Programmed Learning and Educational Technology, Vol. 11, pp. 317–327.
Thiagarajan, S. (1980), Experiential Learning Packages, Educational Technology Publications, Englewood Cliffs, NJ.
Thorndike, R. L. (1986), "The Role of General Ability in Prediction," Journal of Vocational Behavior, Vol. 29, pp. 332–339.
Thorndike, E. L., and Woodworth, R. S. (1901), "The Influence of Improvement in One Mental Function upon the Efficiency of Other Functions," Psychological Review, Vol. 8, pp. 247–261.
Thorpe, J. A. (1987), "The New Technology of Large Scale Simulation Networking: Implications for Mastering the Art of Nonfighting," in Proceedings of the 15th Annual Interservice/Industry Training Systems and Education Conference (Washington, DC), pp. 87–98.
Travers, R. M. (1967), Research and Theory Related to Audiovisual Information Transmission, U.S. Office of Education Contract No. OES-16-006, Western Michigan University, Kalamazoo (ERIC Document Reproduction Service No. ED 081 245).
Tullar, W. L. (1989), "Relational Control in the Employment Interview," Journal of Applied Psychology, Vol. 74, pp. 971–977.
Tulving, E., and Thomson, D. M. (1973), "Encoding Specificity and Retrieval Processes in Episodic Memory," Psychological Review, Vol. 3, pp. 112–129.
Underwood, B. J. (1966), Experimental Psychology, 2nd Ed., Appleton-Century-Crofts, New York.
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
for just $80 million, and filed a $500 million lawsuit against Andersen Consulting, the implementers of the ERP system (Information Week 1998; Wall Street Journal 1998b). Oxford Health Plans lost $363 million in 1997 when their new claims-processing system delayed claims processing and client billing (Wall Street Journal 1998a; Champy 1998). Computer systems were blamed for delaying the scheduled opening of the first deregulated electricity market in the United States (Information Week 1997). Hershey, the nation's largest candy maker, installed a $110 million ERP system in July 1999. Glitches in the system left many distributors and retailers with empty candy shelves in the season leading up to Halloween (Wall Street Journal 1999). Whirlpool reported that problems with a new ERP system and a high volume of orders combined to delay shipments of appliances to many distributors and retailers. These failures are expensive. In an internal document of September 15, 1997, the information systems research firm MetaFax calculated an average yearly loss of $80 billion from a 30% cancellation rate and a $59 billion loss from a 50% over-budget rate. In 1997 alone (before the Y2K-inflated IT expenditures), companies spent $250 billion on information technology; a 30–70% failure rate clearly means that billions of dollars are spent with disappointing results (Wall Street Journal 1998b). Aside from a disappointing return on investment, the impacts of failed technology investments include:

- Harm to the firm's reputation (where poor implementation gets blamed on the technology vendor or designer)
- Broken trust (where workers are unwilling to go the extra mile the next time)
- Reduced management credibility (because management can't deliver on promises)
- Slower learning curve (leading to crisis management as problems increase with implementation rather than decrease)
- Reduced improvement trajectory (since there is no time to explore opportunities for new technology or new business opportunities for existing technology)
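The scale implied by those spending figures can be checked with simple arithmetic. A minimal sketch, assuming only the quoted $250 billion annual spend and the 30–70% failure range (the at-risk dollar amounts it prints are illustrative, not from the source):

```python
# Back-of-the-envelope check of the quoted figures: $250 billion annual
# IT spend and a 30-70% project failure rate (assumed inputs from the text).
TOTAL_SPEND_BILLION = 250

def spend_at_risk(failure_rate, total=TOTAL_SPEND_BILLION):
    """Dollars (in billions) tied up in projects failing at this rate."""
    return total * failure_rate

for rate in (0.30, 0.70):
    print(f"{rate:.0%} failure rate -> ${spend_at_risk(rate):.0f} billion at risk")
```

Even the low end of the range places tens of billions of dollars per year at risk, which is consistent with the MetaFax loss estimates cited above.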
2.2. Why These High Failure Rates?
In one of the first major studies on this problem of implementation, the Congressional Office of Technology Assessment concluded: "The main stumbling blocks in the near future for implementation of programmable automation technology are not technical, but rather are barriers of cost, organization of the factory, availability of appropriate skills, and social effects of the technologies" (OTA 1984, p. 94). A few years later, the Manufacturing Studies Board of the National Research Council conducted a study of 24 cases of the implementation of CAM and CIM technologies and concluded: "Realizing the full benefits of these technologies will require systematic change in the management of people and machines including planning, plant culture, plant organizations, job design, compensation, selection and training, and labor management relations" (MSB 1986). In a 1986 Yankee Consulting Group marketing survey of CAM and CIM users, the users reported that 75% of the difficulties they experienced with the technologies could be attributed to issues concerned with planning the use of the technology within the context of the organization (Criswell 1988). Recent evidence continues to support the conclusion that a significant component of the complexity of technological change lies in the organizational changes often experienced. C. Jackson Grayson, Jr., then Chairman of the American Productivity and Quality Center in Houston, Texas, analyzed the 68 applications for the Malcolm Baldrige National Quality Award for 1988 and 1989 and found that a major reason for failing to meet the examination criteria was the neglect of and failure to integrate human and organizational aspects with technology investments (Grayson 1990). Peter Unterweger of the UAW Research Department, after extensive case study visits in the United States and abroad, concluded that the successes of technological implementations can be attributed to: (a) hardware playing a subordinate role to organizational or human factors and (b) developing the technical and organizational systems in step with one another (Unterweger 1988). In a study of 2000 U.S. firms implementing new office systems, less than 10% of the failures were attributed to technical failures; the majority of the reasons given were human and organizational in nature (Long 1989). The MIT Commission on Industrial Productivity concluded from their extensive examination of the competitiveness of different American industries: "Reorganization and effective integration of human resources and changing technologies within companies is the principal driving force for future productivity growth" (Dertouzos et al. 1989). More recently, in a 1997 survey by the Standish Group of 365 IT executive managers, the top factors identified in application development project failures were poor management of requirements and user inputs (Computerworld 1998a). The 1997 MetaFax survey found the reasons for IS failures to include poor project planning and management. In a 1998 Computerworld survey of 365 IT executives, the top factors for software development project failures were the lack of user input and changing requirements (Computerworld 1998a). A careful study of six failures of information technology projects found that those projects that devoted more effort to
uter system had to be withdrawn. But when, after suitable delay, the airline reintroduced more or less the same system for engineers to use when and if they wanted, it was much better received. A final example of a project devoting too much attention to the technology side and too little to the organizational side is the London Ambulance system failure. The formal inquiry on the failure noted that the initial concept of the system was to fully automate ambulance dispatching; however, management clearly underestimated the difficulties involved in changing the deeply ingrained culture of London Ambulance and misjudged the industrial relations climate, so that staff were alienated to the changes rather than brought on board (Flowers 1997). While much of this information supporting the important role of aligning technology and organizations is anecdotal, there have been several econometric studies of larger samples supporting this claim. A growing body of literature has established strong empirical links among such practices as high-involvement work practices, new technologies, and improved economic performance (MacDuffie 1995; Arthur 1992). Pil and MacDuffie (1996) examined the adoption of high-involvement work practices over a five-year period in 43 automobile assembly plants located around the world, their
952
PERFORMANCE IMPROVEMENT MANAGEMENT
technologies (ranging from highly flexible to rigidly integrated), and their economic performance and found that the level of complementary human resource practices and technology was a key driver of successful introduction of high-involvement practices. Kelley (1996) conducted a survey of 973 plants manufacturing metal products and found that a participative bureaucracy (i.e., group-based employee participation that provides opportunities to reexamine old routines) is complementary to the productive use of information technology in the machining process. Osterman (1994) used data on 694 U.S. manufacturing establishments to examine the incidence of innovative work practices, defined as the use of teams, job rotation, quality circles, and total quality management. He found that having a technology that requires high levels of skills was one factor that led to the increased use of these innovative work practices. To conclude, it should be clear that technological change often necessitates some organizational change. If both organizational and technological changes are not effectively integrated and managed to achieve alignment, the technological change will fail.
3. WHAT ARE THE STUMBLING BLOCKS TO ALIGNMENT?
If the benefits of aligning technology and organizational design are so clear, why isn't it done? We suggest that there are many reasons.
3.1. The Future of Technology Is Probabilistic
The technology S curve has been historically documented as describing technology change over the years (Twiss 1980; Martino 1983). The curve, plotted as the rate of change of a performance parameter (such as horsepower or lumens per watt) over time, has been found to consist of three periods: an early period of new invention, a middle period of technology improvement, and a late period of technology maturity. The technology S curve, however, is merely descriptive of past technology changes. While it can be used for an intelligent guess at the rate of technology change in the future, technology change is sufficiently unpredictable that it cannot be used to predict precisely when and how future change may occur. Fluctuating market demand and/or novelty in the technology base exacerbate the challenge. For example, at Intel, typically at least one third of new process equipment has never been previously used (Iansiti 1999). This probabilistic nature of the technology makes creating aligned technology and organizational solutions difficult because it cannot be known with any certainty what the future organizational-technology solution is likely to be over the long term. In his study of six information technology project failures, Flowers (1997) concluded that the unpredictability of the technology is a primary complexity factor that contributes to project failure. The more that the technology is at the "bleeding edge," the greater the complexity. Avoiding overcommitment to any one technology or organizational solution, avoiding escalationary behavior where more resources are thrown at the solution-generation process without adequate checks and balances, and maintaining project-reporting discipline in the face of uncertainty are suggested ways of managing the inherent probabilistic nature of technology.
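The three periods of the S curve can be illustrated with a simple logistic model. This is only an idealized sketch; the ceiling, midpoint, and steepness values below are arbitrary placeholders, not estimates for any real technology:

```python
import math

def s_curve(t, ceiling=100.0, midpoint=10.0, steepness=0.5):
    """Logistic idealization of a performance parameter (e.g., horsepower
    or lumens per watt) over time: slow gains during early invention,
    rapid gains during mid-life improvement, small gains at maturity.
    All parameter values are hypothetical."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Year-over-year improvement is small early, peaks near the midpoint,
# and tapers again as the technology matures.
early = s_curve(2) - s_curve(1)     # invention period
middle = s_curve(10) - s_curve(9)   # improvement period
late = s_curve(19) - s_curve(18)    # maturity period
```

The sketch also shows why extrapolation is risky: fitting the same history under a different assumed ceiling shifts the predicted onset of maturity, which mirrors the text's caution that the curve describes past change but cannot pin down future change.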
3.2. Some Factors Are Less Malleable Than Others
A series of research studies on the process by which technologies and organizations are adapted when technologies are implemented into an organization have shown that adaptations of both technologies and the organization can occur (Barley 1986; Contractor and Eisenberg 1990; Orlikowski and Robey 1991; Orlikowski 1992; Giddens 1994; Rice 1994; Orlikowski et al. 1995; Rice and Gattiker 1999). However, in reality, some adaptations are less likely to occur because some of these factors tend to be less malleable (Barley 1986; Johnson and Rice 1987; Poole and DeSanctis 1990; Orlikowski 1992; DeSanctis and Poole 1994; Orlikowski and Yates 1994). One of these factors is the existing organizational structure. For example, Barley (1986) found evidence that one factor that tends to be less malleable is the existing power structure in the organization. Barley found that when a medical radiation device was installed into two separate hospitals, the work changed in accordance with the organizational structure, not vice versa. That is, in the hospital where the radiologists had more power in the organizational structure than the technicians, the rift between the two jobs became greater with the new technology. In contrast, in the hospital where technicians and radiologists were not separated hierarchically, the technology was used to share knowledge between the two. Another factor often found to be less malleable is what DeSanctis and Poole (1994) refer to as the technology "spirit," which they define as the intended uses of the technology by the developer or champion who influenced the developer. If the spirit is intended to displace workers, then this spirit is unlikely to be changed during implementation. Research contradicting this assertion has been conducted recently, however (Majchrzak et al. 2000). Moreover, Tyre and Orlikowski (1994) have found that malleability may be temporal, that is, that technologies and structures can be changed, but only during windows of opportunity that may periodically reopen as the technology is used. In their study, the authors found these windows to include new rethinking about the use of the technology or new needs for
ple, when globally implementing ERP systems, managers have a choice whether to roll out a single standardized ERP solution worldwide or to allow some issues (such as user interface screens or data structures) to have localized solutions. Forcing a single standardized implementation worldwide has been the preferred strategy in most implementations because it minimizes the complexity and resources required to accommodate localized modifications (Cooke and Peterson 1998). However, implementers at Owens-Corning believe that part of their success in their global ERP implementation was attributable to allowing localized solutions, even though it was slightly more complicated in the beginning. They believe that allowing field locations to tailor some aspects of the ERP system not only ensured the buy-in and commitment of field personnel to the ERP project, but also ensured that the ERP system met each and every field location's particular needs. Thus, another stumbling block to alignment is that alignment solutions are best construed as nonrepeatable and highly contextual, a concept that raises management concerns about the resources required to allow such contextualization.
3.5. Alignment Requires Comprehensive Solutions That Are Difficult to Identify and Realize
A solution aligned for technology and organization is a comprehensive one involving many factors. Today it is widely believed that in addition to strategy and structure, an organization's culture, technology, and people all have to be compatible. "If you introduce change in technology, you should expect to alter your corporate strategy to capitalize on the new capabilities, alter various departmental roles and relations, add personnel with new talents, and attempt to manage change in shared beliefs and values needed to facilitate use of the new technology" (Jambekar and Nelson 1996, p. 29.5). Despite this need for integration, Iansiti (1999) charges that technology choices are too often made in scattershot and reactive fashion, with technology possibilities chosen for their individual potential rather than for their system-level integration. Iansiti specifically suggests that only when there is a proactive process of technology integration (one comprising a dedicated, authorized group of people armed with appropriate knowledge, experience, tools, and structure) will results be delivered on time, lead times be shorter, resources be adequately utilized, and other performance measures be achieved. In a study of reengineering efforts, Hall et al. (1993) argue that many attempts at reengineering have failed because of a focus on too few of the factors needing to be changed. Instead, for reengineering to work, fundamental change is required in at least six elements: roles and responsibilities, measurements and incentives, organizational structure, information technology, shared values, and skills. Thus, another stumbling block to alignment is the need to consider all these factors and their relationships. For many managers and industrial engineers, there are too many factors and relationships; as a result, it is far easier to focus mistakenly on only one or a few factors.
3.6. Alignment Involves Long Planning Cycles, Where Observable Results and Knowing Whether You Made the Right Decisions Take a While
Years ago, Lawrence and Lorsch (1967) helped us to recognize the importance of the time horizon of feedback from the environment in determining whether strategic and organizational decisions are the right decisions. In their research, they found that some departments had very quick time horizons, such as a manufacturing department that is structured and oriented to obtaining quick feedback from the environment. In contrast are departments with longer time horizons, such as a research and development department, in which the department is organized to expect feedback about its work over a much longer time period. Lawrence and Lorsch further found that these different time horizons of feedback created different needs for organizational structures, performance-monitoring systems, and personnel policies. The technology-development process can be characterized as one that has a long planning cycle, so that the time horizon of feedback may be months or years. For example, the average CIM implementation may take up to 3 years to complete, while the implementation of a large ERP system takes at least 18 months. As a result, managers and engineers need to make decisions about the design of the technology-organization solution in the absence of any data from the field. While some of these decisions may be changed later if data from the field indicate a problem in the design, some of these decisions are changeable only at great cost. This creates a bias toward conservativeness, that is, making decisions that minimize risk. As a result, only those factors that decision makers have historical reason to believe should be changed are likely to be changed, increasing the probability of misalignment. Thus, another stumbling block to achieving alignment is the long planning cycle of technology-organizational change, which tends to create a bias against change because learning whether planning decisions are the right ones takes so long.
3.7. Alignment Involves Compromises
Given the many factors involved in deriving an aligned solution and the many functions affected by an aligned solution, the final aligned solution is unlikely to be an idealized solution. Rather, the final solution is likely to be the outcome of a series of negotiations among the relevant parties. For example, a labor union may not want to give up the career-progression ladder provided by specialized jobs and embrace cross-functional teamwork; management may not want to give up the decision-making control they enjoy and embrace autonomy among the teams. The process of negotiating these different positions to reach some amicable compromise may be difficult and frustrating, adding to the challenges imposed by alignment. Information technology, because it tends to break down organizational barriers, turfs, and layers, could face opposition from individuals entrenched in the company's hierarchy. For example, production planning, inventory control, and quality control will increasingly be under the control of front-line employees, and this will pose a major threat to low-level supervisors and middle managers (Osterman 1989) and may even lead to their extinction (Drucker 1988).
4. HOW CAN TECHNOLOGY PLANNERS PURSUE ALIGNMENT DESPITE THESE DIFFICULTIES?
The difficulties identified in Section 3 are real difficulties not likely to go away with new managers, new technologies, new industrial engineering skills, new organizational designs, or new motivations. Therefore, industrial engineers must identify ways to move past these difficulties. This means taking the difficulties into account when pursuing alignment, rather than ignoring them. Effort then is not
What are these factors? The U.S. industry's initiative on agile manufacturing (documented in Goldman et al. 1995) identified a range of factors, including the production hardware, the procurement process, and the skills of operators. The National Center for Manufacturing Sciences created a program to encourage manufacturing firms to assess themselves on their excellence. The assessment contained 171 factors distributed across 14 areas ranging from supplier development to operations, from cost to flexibility, from health and safety to customer satisfaction. In a five-year industry-university collaborative effort funded by the National Center for Manufacturing Sciences, 16 sets of factors were identified that must be aligned (Majchrzak 1997; Majchrzak and Finley 1995), including:
- Business strategies
- Process variance-control strategies
- Norms of behavior
- Strategies for customer involvement
- Employee values
- Organizational values
- Reporting structure
- Performance measurement and reward systems
- Areas of decision-making authority
- Production process characteristics
- Task responsibilities and characteristics
- Tools, fixtures, and material characteristics
- Software characteristics
- Skill breadth and depth
- Information characteristics
- Equipment characteristics
Within each set, 5 to 100 specific features were identified, with a total of 300 specific design features needing to be designed to create an aligned organizational-technology solution for a new technology. In this five-year study, it was also found that achieving alignment meant that each of these factors needed to be supportive of each other factor. To determine whether a factor was supportive of another factor, each factor was assessed for the degree to which it supported different business strategies, such as minimizing throughput time or maximizing inventory turnover. Supportive factors were then those that together contributed to the same business strategy; inversely, misaligned solutions were those for which design features did not support similar business strategies. Recognizing this range of factors and their relationships may seem overwhelming, but it can be done. The cross-functional teams and use of CAD technologies for developing the Boeing 777 aircraft present an excellent example of alignment. In designing the 777, Boeing created approximately 240 teams, which were labeled "design-build teams." These teams included cross-functional representatives from engineering design, manufacturing, finance, operations, customer support, maintenance, tool designers, customers, and suppliers (Condit 1994). To communicate part designs, the teams used 100% digital design via the 3D CAD software and the networking of over 2000 workstations. This allowed the suppliers to have a real-time interactive interface with the design data; tool designers too were able to get updated design data directly from the drawings to speed tool development. In addition, the CAD software's capability in performing preassembly checks and visualization of parts allowed sufficient interrogation to detect costly misalignments, interferences, and gaps and to confirm tolerances and analyze balances and stresses (Sherman and Souder 1996). In sum, the technology of CAD was aligned with the organizational structure of the cross-functional teams.
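The alignment test used in that study, where each design factor is rated by the business strategies it supports and factors are aligned when they contribute to the same strategy, can be sketched as a comparison of support profiles. The factor names, strategy labels, and scoring rule below are illustrative placeholders, not the study's actual instrument:

```python
# Map each design factor to the business strategies it supports.
# All factor and strategy names here are hypothetical examples.
support = {
    "team-based reporting structure": {"minimize throughput time"},
    "throughput-linked rewards":      {"minimize throughput time"},
    "large-batch scheduling policy":  {"maximize inventory turnover"},
}

def aligned(factor_a, factor_b):
    """Two factors are treated as aligned when they support at least one
    common business strategy; otherwise they are misaligned."""
    return bool(support[factor_a] & support[factor_b])

def misaligned_pairs(factors):
    """List every pair of factors whose support profiles never overlap."""
    return [(a, b) for i, a in enumerate(factors)
            for b in factors[i + 1:] if not aligned(a, b)]
```

Run over all 16 factor sets, a check like `misaligned_pairs` would surface pairs such as a reward system pulling toward throughput while a scheduling policy pulls toward inventory turnover, which is the kind of misalignment the study flags.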
4.3. Understand the Role of Cultures in Alignment
Culture affects alignment by affecting the change process: changes that support the existing culture are easier to implement successfully than changes that cause the culture to change. At least two types of culture must be considered in designing a technology-organization solution: the national culture of the country and the culture of the organization. According to Schein (1985), organizational culture is "a pattern of basic assumptions (invented, discovered, or developed by a given group as it learns to cope with its problems of external adaptation and internal integration) that has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems." Kotter and Heskett (1992, p. 4) contend that organizational culture has two levels that differ in terms of their visibility and their resistance to change:

At the deeper and less visible level, culture refers to values that are shared by the people in a group and that tend to persist over time even when group membership changes. . . . At the more visible level, culture represents the behavior patterns or style of an organization that new employees are automatically encouraged to follow by their fellow employees. . . . Each level of culture has a natural tendency to influence the other.
3. To establish whether there was any compatibility between the organizational culture of the plants and the national culture of the three countries, and examine whether the compatibility (or incompatibility) affected their performance in terms of production yields, quality, safety, and cycle time

Although the results of this study indicate that there are differences among the national culture dimensions of Puerto Rico, the United States, and Mexico, no significant differences were found between the organizational cultures of the three plants. This may be due to selection criteria, by which candidates, by assessment of their behavioral styles, beliefs, and values, may have been carefully screened to fit in with the existing organizational culture. Additionally, socialization may have been another factor. This means that the company may have had in-house programs and intense interaction during training, which can create a shared experience, an informal network, and a company language. These training events often include songs, picnics, and sporting events that build a sense of community and feeling of togetherness. Also, the company may have had artifacts, the first level of organizational culture, such as posters, cards, and pens that remind the employees of the organization's visions, values, and corporate goals and promote the organization's culture. Therefore, it seems that a total transfer has been realized by this multinational corporation. Because these manufacturing plants produce similar products, they must obtain uniform quality in their production centers. To gain this uniformity, this company has transferred its technical installations, machines, and organization. Moreover, to fulfill this purpose, the company chooses its employees according to highly selective criteria. Notwithstanding, Hofstede's research demonstrates
that even within a large multinational corporation known for its strong culture and socialization efforts, national culture continues to play a major role in differentiating work values (Hofstede 1980a). There are concepts in the dimensions of organizational culture that may correspond to the same concepts of the dimensions of national culture. The power distance dimension of national culture addresses the same issues as the perceived oligarchy dimension of organizational culture: both refer to the nature of decision making; in countries where power distance is large, only a few individuals from the top make the decisions. Uncertainty avoidance and perceived change address the concepts of stability, change, and risk taking. One extreme is the tendency to be cautious and conservative, such as in avoiding risk and change when possible in adopting different programs or procedures. The other is the predisposition to change products or procedures, especially when confronted with new challenges and opportunities, in other words, taking risks and making decisions. Uncertainty avoidance may also be related to perceived tradition in the sense that if the employees have a clear perception of how things are to be done in the organization, their fear of uncertainties and ambiguities will be reduced. An agreement to a perceived tradition in the organization complements well a country with high uncertainty avoidance. Individualism-collectivism and perceived cooperation address the concepts of cooperation between employees and trust and assistance among colleagues at work. In a collectivist country, cooperation and trust among employees are perceived more favorably than in an individualist country. The perceived tradition of the organizational culture may also be related to individualism-collectivism in the sense that if members of an organization have shared values and know what their company stands for and what standards they are to uphold, they are more likely to feel as if they are an important part of the organization. They are motivated because life in the organization has meaning for them. Ceremonies of the organizational culture and rewards given to honor top performance are very important to employees in any organization. However, the types of ceremonies or rewards that will motivate employees may vary across cultures, depending on whether the country has a masculine orientation, where money and promotion are important, or a feminine orientation, where relationships and working conditions are important. If given properly, these may keep the values, beliefs, and goals uppermost in the employees' minds and
hearts. Cultural differences may play significant roles in achieving the success of the corporation's performance. The findings of this study could have important managerial implications. First, an organizational culture that fits one society might not be readily transferable to other societies. The organizational culture of the company should be compatible with the culture of the society the company is transferring to. There needs to be a good match between the internal variety of the organization and the external variety from the host country. When the cultural differences are understood, the law of requisite variety can then be applied as a concept to investigate systematically the influence of culture on the performance of the multinational corporation's manufacturing plants. This law may be useful for examining environmental variety in the new cultural settings. Second, the findings have confirmed that cultural compatibility between the multinational corporation's organizational culture and the culture of the countries they are operating in plays a significant role in the performance of the corporation's manufacturing plants. Therefore, it can be suggested that the decision concerning which management system or method to promote should be based on specific human, cultural, social, and deeply rooted local behavior patterns. It is critical for multinational corporations operating in cultures different from their own to ensure and enhance cultural compatibility for the success of their operations. As a consequence, it
room and looking for change or deviation from standards or routines in the plant. It is contended that their responses during transition from the rule-based to the knowledge-based level of cognitive control, especially in the knowledge-based level, are affected by the safety culture of the plant and are also moderated or influenced by their cultural background. Their responses could start a vicious cycle, which in turn could lead to inaction, which wastes valuable time and control room resources. Breaking this vicious cycle requires boldness to make or take over decisions so that the search for possible answers to the unfamiliar situation does not continue unnecessarily and indefinitely. It is contended that this boldness is strongly culturally driven and is a function of the plant's organizational culture and reward system and the regulatory environment. Boldness, of course, is also influenced by operators' personality traits, risk taking, and perception (as mentioned before), which are also strongly cultural. Other important aspects of
the national culture include hierarchical power distance and rule orientation (Lammers and Hickson 1979), which govern the acceptable behavior and could determine the upper bound of operators' boldness. According to the International Atomic Energy Agency, two general components of the safety culture are the necessary framework within an organization, whose development and maintenance is the responsibility of the management hierarchy, and the attitude of staff at all different levels in responding to and benefiting from the framework (IAEA 1991). Also, the requirements of individual employees for achieving safety culture at the installation are a questioning attitude, a rigorous and prudent approach, and necessary communication. However, it should be noted that other dimensions of national culture (uncertainty avoidance, individualism-collectivism, and masculinity-femininity), while interacting with these general components and requirements, could either resonate with and strengthen or attenuate safety culture. For instance, the questioning attitude of operators is greatly influenced by the power distance, rule orientation, and uncertainty avoidance of the societal environment and the openness in the organizational culture of the plant. A rigorous and prudent approach, which involves understanding the work procedures, complying with procedures, being alert for the unexpected, and so on, is moderated by power distance and uncertainty avoidance in the culture and by the sacredness of procedures, the criticality of step-by-step compliance, and a definite organizational system at the plant. Communication, which involves obtaining information from others, transmitting information to others, and so on, is a function of all the dimensions of national culture as well as the steepness and rigidity of the hierarchical organizational structure of the plant. The nuclear industry shares many safety-related issues and concerns with the aviation industry, and there is a continuous transfer of information between them (e.g., EPRI 1984). Cultural and other human factors considerations affecting the performance of a cockpit crew are, to a large extent, similar to those affecting nuclear plant control room operators. Therefore, it is worth referring briefly to a fatal accident involving a passenger airplane in which, according to an investigation by the U.S. National Transportation Safety Board (NTSB 1991), national cultural factors within the cockpit and between it and the air traffic control tower contributed significantly to the crash. Avianca flight 052 (AV052) (Avianca is the airline of Colombia), a Boeing 707, crashed in Cove Neck, New York, on January 25, 1990, and 73 of the 158 persons aboard were killed. According to the NTSB:
The NTSB determines that the probable cause of this accident was the failure of the flight crew to adequately manage the airplane's fuel load, and their failure to communicate an emergency fuel situation to air traffic control before fuel exhaustion occurred. (NTSB 1991, p. 76, emphasis added)

The word "priority" was used in procedures manuals provided by the Boeing Company to the airlines:

A captain from Avianca Airlines testified that the use by the first officer of the word priority, rather than emergency, may have resulted from training at Boeing. . . . He stated that these personnel received the impression from the training that the words priority and emergency conveyed the same meaning to air traffic control. . . . The controllers stated that, although they would do their utmost to assist a flight that requested priority, the word would not require a specific response and that if a pilot is in a low fuel emergency and needs emergency handling, he should use the word emergency. (NTSB 1991, p. 63; emphasis added)

The NTSB concluded:

The first officer, who made all recorded radio transmissions in English, never used the word "Emergency," even when he radioed that two engines had flamed out, and he did not use the appropriate phraseology published in United States aeronautical publications to communicate to air traffic control the flight's minimum fuel status. (NTSB 1991, p. 75, emphasis added)

Helmreich's (1994) comprehensive analysis of the AV052 accident thoroughly addresses the role of cultural factors. His contention is that

had air traffic controllers been aware of cultural norms that may influence crews from other cultures, they might have communicated more options and queried the crew more fully regarding the flight status. . . . The possibility that behavior on this [flight] was dictated in part by norms of national culture cannot be dismissed. It seems likely that national culture may have contributed to [the crew's behavior and decision
962
PERFORMANCE IMPROVEMENT MANAGEMENT
for being informed when new knowledge relevant to their area of expertise was entered into the knowledge base (e.g., profiling their interests coupled with e-mail notification when an entry fit that profile), ability to link desktop applications interactively to the knowledge base (called hot links), templates for commonly captured knowledge (such as for meeting agendas, meeting minutes, action items, decision rationale), and access anywhere by anyone anytime (24/7 access by team members and managers). A system was developed to these specifications. Then the team was encouraged to develop a set of coordination norms for how to conduct their creative engineering design work virtually using the collaborative technology. They created a new work process that would encourage all members of the team (including suppliers and specialists) and external managers to enter all knowledge asynchronously into the knowledge base and for each member then to comment on the entries as need be. The team worked for 10 months and successfully developed a breakthrough product. What is relevant for this discussion is that while the team had the opportunity to create its own technology and work process at the outset, in the end it changed every single one of its norms and most of the ways in which it used the technology. Thus, while there was careful planning prior to the beginning of the team's work (far more planning than would normally be permitted in many organizations today), the team still found it necessary to make changes. The team was fortunate because it was encouraged and able to make those changes as they became necessary. The technology was designed sufficiently flexibly so that entries could be identified using simple searches rather than the complex navigation tools that they thought they might need. Management was sufficiently flexible that when the team asked them to stop using the technology, they obliged. The team's work process was sufficiently flexible that when asynchronous communication proved insufficient, they were able to add a "meet-me" teleconference line so that all future encounters could be conducted synchronously using both the collaborative technology and the teleconference capability. Thus, the team succeeded not only because there had been careful planning, but because they could also innovate their work process and the technology as problems arose. Critical to the success of technology-organization alignment, then, is that the technology-organization solution be designed to encourage localized innovation (Johnson and Rice 1987; Rogers 1995), that is, innovation required to make a particular technology-organization solution work in a particular context with a particular set of people. Characteristics of solutions that allow localized innovation include:
4.4.1.
Solutions That Enhance, Not Deskill Workers
When workers are deskilled by a technology-organization solution, they do not have the knowledge to be able to intervene when necessary, identify problems, formulate solutions, and then implement the solutions. Thus, solutions must not permit deskilling. Technologies that avoid deskilling are those that allow workers to understand what the technology is doing and how it is doing it and provide workers with ways to intervene in the process to perform the planning, thinking, and evaluation work, leaving the routine work to the technology (Majchrzak 1988). The collaborative technology used by the virtual team members described in Majchrzak et al. (2000) was entirely open, with no hidden formulas, hidden menus, or hidden processing; thus, the team was able to evolve the technology to the point where they could make it useful to them. CNC machines that hide processing logic from the operators are examples of technologies that violate this principle and thus inhibit innovation.
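The interest-profiling and notification feature described above for the virtual team's knowledge base can be illustrated with a minimal sketch. This is a hypothetical reconstruction for illustration only; the actual system in Majchrzak et al. (2000) is not specified at this level of detail, and all names and the matching rule here are assumptions.

```python
# Illustrative sketch of profile-based notification in a shared knowledge
# base: each member registers interest keywords, and a newly entered item
# triggers notification of every member whose profile overlaps its tags.
# Hypothetical names; not the actual system described in the text.

def members_to_notify(profiles, entry_tags):
    """Return, sorted, the members whose interest profile matches any tag."""
    tags = set(entry_tags)
    return sorted(name for name, interests in profiles.items()
                  if tags & set(interests))

profiles = {
    "designer_a": {"tolerances", "materials"},
    "supplier_b": {"materials", "lead-times"},
    "manager_c": {"schedule"},
}

# A new entry tagged "materials" reaches the two interested members.
print(members_to_notify(profiles, ["materials"]))  # ['designer_a', 'supplier_b']
```

The point of keeping such logic open and simple, in the spirit of the passage above, is that team members can inspect and change the matching rule themselves rather than depending on hidden processing.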
4.4.2.
bution to the central knowledge repository (called Ernie) and the uses made of their entries by other consultants.
4.4.5.
Solutions Should Decentralize Continuous Improvement
People become anxious when they are thrust into chaotic situations over which they have no control and which affect their jobs and possibly their job security and careers (Driver et al. 1993). People are given some sense of control when they know what the process is: how decisions will be made, by whom, and when, and what their role is in the decision-making and implementation process. Sociotechnical systems design suggests the following nine steps in the design and implementation of a technology-organization solution (Emery 1993; Taylor and Felten 1993):

1. Initial scanning of the production (or transformation) system and its environment to identify the main inputs, transforming process, outputs, and types of variances the system will encounter
2. Identification of the main phases of the transformation process
3. Identification of the key process variances and their interrelationships
4. Analysis of the social system, including the organizational structure, responsibility chart for controlling variances, ancillary activities, physical and temporal relationships, extent to which workers share knowledge of each other's roles, payment system, how roles fill psychological needs of employees, and possible areas of maloperation
5. Interviews to learn about people's perceptions of their roles
6. Analysis of the relationship between maintenance activities and the transformation system
7. Relationship of the transformation system with suppliers, users, and other functional organizations
8. Identification of the impact on the system of strategic or development plans and general policies
9. Preparation of proposals for change

These nine steps were created to optimize stakeholder participation in the process, where stakeholders include everyone from managers to engineers, from suppliers to users, from maintainers to operators. A similar process is participative design (Eason 1988; Beyer and Holtzblatt 1998), tenets of which include:
• No one who hasn't managed a database should be allowed to program one.
• People who use the system help design the infrastructure.
• The best information about how a system will be used comes from in-context dialogue and role playing.
• Prototyping is only valuable when it is done cooperatively between users and developers. Users are experts about their work and thus are experts about the system; developers are technical consultants.
• Employees must have access to relevant information, must be able to take independent positions on problems, must be able to participate in all decision making, must be able to facilitate rapid prototyping, must have room to make alternative technical and/or organizational arrangements, must have management support but not control, must not have fear of layoffs, must be given adequate time to participate, and must be able to conduct all work in public.

The participative design process first involves establishing a steering committee of managers who will ultimately be responsible for ensuring that adequate resources are allocated to the project. The steering committee is charged with chartering a design team and specifying the boundaries of the redesign effort being considered and the resources management is willing to allocate. The design team then proceeds to work closely with the technical staff, first to create a set of alternative organizational and technical solutions and then to assess each one against a set of criteria developed with the steering committee. The selected solutions are then developed by the design and technical personnel, with ever-increasing depth. The concept is that stakeholders are involved before the technology or organizational solutions are derived and then continue to be involved as the design evolves and eventually makes the transition to implementation (Bodker and Gronbaek 1991; Clement and Van den Besselaar 1993; Kensing and Munk-Madsen 1993; Damodaran 1996; Leonard and Rayport 1997).

Both the participative design and STS processes also focus on starting the change process early. Typically, managers wait to worry about alignment until after the technology has been designed, and possibly purchased. Because too many organizational and other technology choices have by then been constrained, this is too late (Majchrzak 1988). For example, if the data entry screens for an enterprise resource-planning system are designed not to allow clerks to see the next steps in the data flow, and the organizational implications of this design choice have not been considered in advance, clerks may well misunderstand the system and input the wrong data, leading to too many orders being sent to the wrong locations. This is what happened at Yamaha. If the enterprise resource-planning system had been designed simultaneously with the jobs of clerks, then the need to present data flow information would have been more apparent and the costly redesign of the user interface would not have
informed by TOP Modeler of an ideal organizational profile customized to their business strategies. Then users can describe features of their current or proposed future organization and be informed by TOP Modeler of prioritized gaps that need to be closed if business strategies are to be achieved. There are three sets of business strategies contained in TOP Modeler: business objectives, process variance control strategies, and organizational values. TOP Modeler also contains knowledge about the relationships among 11 sets of enterprise features, including information resources, production process characteristics, empowerment characteristics, employee values, customer involvement strategies, skills, reporting structure characteristics, norms, activities, general technology characteristics, and performance measures and rewards. The TOP Modeler system has a graphical, interactive interface; a large, thorough, state-of-the-art knowledge representation; and a flexible system architecture. TOP Modeler contains a tremendous depth of scientific and best-practice knowledge (including principles of ISO 9000, NCMS's Manufacturing 2000, etc.) on more than 30,000 relationships among strategic and business attributes of the enterprise. It allows users to align, analyze, and prioritize these attributes, working from business strategies to implementation and back. The user of TOP Modeler interacts primarily with a screen that we call the "ferris wheel." This screen provides an immediate, intuitive understanding of what it means to have TOP integration in the workplace: TOP integration requires that numerous different aspects of the workplace (e.g., employee values, information, and responsibilities for activities) must all be aligned around core organizational factors (e.g., business objectives) if optimum organizational performance is to be achieved. TOP Modeler has been used in over 50 applications of organizational redesign, business process redesign, or implementation of new manufacturing technology. The companies that have used it have ranged from very small to very large, located in the United States, Brazil, and Switzerland. Some of the uses we have been informed about include:

• Use by government-sponsored manufacturing consultants (e.g., Switzerland's CCSO) to help small companies develop strategic plans for restructuring (in one case, the tool helped the consultant understand that the company's initial strategic plan was unlikely to succeed until management agreed to reduce the amount of variation that it allowed in its process).
• Use by large software vendors (e.g., EDS) to help a company decide not to relocate its plant from one foreign country to another (because the expense of closing the gaps created by the move was likely to be too high).
• Use by a large manufacturing company (General Motors) to decide whether a joint venture plant was ready to be opened (they decided to delay the opening because the tool helped to surface differences of opinion in how to manage the workforce).
• Use by a small manufacturing company (Scantron) to decide whether its best practices needed improving (the tool helped the company to discover that while it did indeed have many best practices, it needed to involve the workforce more closely with the supplier and customer base, an action the company subsequently took).
• Use in a large technology change effort at a large manufacturing company (Hewlett-Packard) to help identify the workforce and organizational changes needed for the new production technology to operate correctly (resulting in a substantial improvement in ramp-up time when the new product and production process was introduced).
• Use by a redesign effort of a maintenance crew (at Texas Instruments) to determine that the team-based approach they had envisioned needed seve
the management to align its organizations and coordinate functions (e.g., centralized vs. decentralized planning, ability to manage customer or product lines by one person or department) effectively to meet its business goals.

4.6.2.2. Materials Management
The materials organization or individual responsible for raw materials, subassembly suppliers, feeder plants, and finished goods management needs full visibility into all requirements and changes as they happen. Depending on the enterprise, these functions may be aligned by product family, facility, or material type. The material managers have access to the latest long-term forecasts and plans, any market changes, order status and changes, effectivities, part substitutions, all specific rules as they apply to the vendor or supplier, and the latest company rules regarding products, materials, customers, or priorities. The enterprise is thus free to align the organization to achieve lowest inventories, lowest material-acquisition costs, best vendor contracts (reduced set of reliable suppliers, quality, etc.), effective end-of-life planning, and reduced obsolescence.

4.6.2.3. Order Management and Request for Quotes (RFQs)
The organization responsible for the first line of response to RFQs, order changes, new orders, and new customers should be able to respond rapidly and accurately with delivery capability and costs. More importantly, the response should be based on current plant loads and reflect the true deliverable lead times and capabilities.

4.6.2.4. Strategic and Operational Change Propagation
As is the norm, strategies and operational activities change for various internal or external reasons. Most organizations without access to the right technology manage this change by incurring high costs in terms of additional people, both to convey the message of change and to manage and monitor it. Visibility and instant change propagation in either direction allow enterprises to respond only when necessary, guided by a system-oriented decision, so that their responses are optimal and effective immediately.

4.6.2.5. New Product Management
New product development, engineering, materials, sales, and production functions require seamless collaboration. Business processes that take advantage of these functionalities can be implemented so that new product introduction is as much a part of day-to-day operations as the making and delivery of current products. There may not necessarily be a need for any special organizations or staffing to meet new product introductions. These products become akin to new demands on resources; in fact, with the added visibility and speed of change propagation, the enterprise can develop better-quality products and more of them. This can be done because an enterprise utilizing a tool such as iCollaboration can easily try out more ideas and functions simultaneously, which increases the ability of the enterprise to ramp up production faster.

4.6.2.6. Collaborative Customer / Demand Planning
The CDP tool allows the customer-facing individuals to evaluate and analyze demands by sales organization, geography, product manager, and manufacturing and product planner, and to interact with and control all activities seamlessly and consistently with the enterprise goals of maximizing profitability and related corporate strategies. The application of this tool may result in synchronization across the entire sales and customer relationship teams, in conjunction with customer relationship management (CRM) integration, which would produce customer satisfaction.

4.6.2.7. Customer Satisfaction
Customer satisfaction, as measured by product delivery due date performance, accurate order fill rate, response to quotes, and response to changes in orders, can be significantly enhanced by creating a customer-facing organization that is enabled and empowered. It should be noted that this issue is one of the most critical determinants of success for today's e-commerce businesses. With the iCollaboration tools, an organization can create customer-facing organizations that may be aligned with full customer responsibility, product responsibility, order responsibility, or any combination of those. These organizations or individuals are independent, do not have to call someone, and yet are in synch with all other supporting organizations.
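The prioritized gap analysis at the heart of alignment tools such as TOP Modeler, described earlier, can be sketched generically: compare a current organizational profile against an ideal profile derived from the business strategy and rank the remaining gaps by weight. This is an illustrative sketch only; the feature names, scores, and weights are invented, and the real tool's knowledge base and algorithm are far richer (some 30,000 encoded relationships).

```python
# Illustrative sketch of prioritized gap analysis between an ideal
# organizational profile (derived from business strategy) and the current
# organization. All feature names, scores, and weights are made up for
# illustration; this is not the actual TOP Modeler algorithm.

def prioritized_gaps(ideal, current, weights):
    """Return (feature, weighted_gap) pairs, largest gap first."""
    gaps = {f: (ideal[f] - current.get(f, 0)) * weights.get(f, 1.0)
            for f in ideal}
    return sorted(((f, g) for f, g in gaps.items() if g > 0),
                  key=lambda fg: fg[1], reverse=True)

ideal = {"employee_empowerment": 5, "supplier_involvement": 4, "skills": 3}
current = {"employee_empowerment": 2, "supplier_involvement": 4, "skills": 1}
weights = {"employee_empowerment": 2.0, "skills": 1.0}

print(prioritized_gaps(ideal, current, weights))
# [('employee_empowerment', 6.0), ('skills', 2.0)]
```

The ranked output corresponds to the "prioritized gaps that need to be closed if business strategies are to be achieved" that such tools report to users.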
5. CONCLUSIONS

A review of possible decisions leaves a long list of dos and don'ts for implementing new technology. Some of the more important ones are:

• Don't regard new technology and automation as a quick fix for basic manufacturing or human resource problems; look to the firm's entire human-organization-technology infrastructure as the fix.
• Don't assume that human resource problems can be resolved after the equipment is installed; some of the problems may have to do with the specific equipment selected.
ems in American Steel Minimills," Industrial and Labor Relations Review, Vol. 45, No. 3, pp. 488–506.
Azimi, H. (1991), Circles of Underdevelopment in Iranian Economy, Naey, Teheran (in Farsi).
Barley, S. R. (1986), "Technology as an Occasion for Structuring: Evidence from Observations of CT Scanners and the Social Order of Radiology Departments," Administrative Science Quarterly, Vol. 31, pp. 78–108.
Beyer, J. M. (1992), "Metaphors, Misunderstandings and Mischief: A Commentary," Organization Science, Vol. 3, No. 4, pp. 467–474.
Beyer, H., and Holtzblatt, K. (1998), Contextual Design, Morgan Kaufmann, San Francisco.
Billing, C. E. (1996), Aviation Automation: The Search for a Human-Centered Approach, Lawrence Erlbaum Associates, Mahwah, NJ.
Bodker, S., and Gronbaek, K. (1991), "Cooperative Prototyping: Users and Designers in Mutual Activity," International Journal of Man-Machine Studies, Vol. 34, pp. 453–478.
Boeing Commercial Aircraft Group (BCAG) (1993), "Crew Factor Accidents: Regional Perspective," in Proceedings of the 22nd Technical Conference of the International Air Transport Association (IATA) on Human Factors in Aviation (Montreal, October 4–8), IATA, Montreal, pp. 45–61.
Burgelman, R. A., and Rosenbloom, R. S. (1999), "Design and Implementation of Technology Strategy: An Evolutionary Perspective," in The Technology Management Handbook, R. C. Dorf, Ed., CRC Press, Boca Raton, FL.
Champy, J. (1998), "What Went Wrong at Oxford Health," Computerworld, January 26, p. 78.
Ciborra, C., and Schneider, L. (1990), "Transforming the Practices and Contexts of Management, Work, and Technology," paper presented at the Technology and the Future of Work Conference (Stanford, CA, March 28–30).
Clement, A., and Van den Besselaar, P. (1993), "A Retrospective Look at PD Projects," Communications of the ACM, Vol. 36, No. 4, pp. 29–39.
Computerworld (1997), "Workflow Software Aids App Development," November 3, p. 57.
Computerworld (1998a), "Briefs," June 22, p. 55.
Computerworld (1998b), "The Bad News," August 10, p. 53.
Condit, P. M. (1994), "Focusing on the Customer: How Boeing Does It," Research-Technology Management, Vol. 37, No. 1, pp. 33–37.
Contractor, N., and Eisenberg, E. (1990), "Communication Networks and New Media in Organizations," in Organizational and Communication Technology, J. Fulk and C. Steinfield, Eds., Sage, Newbury Park, CA.
Cooke, D. P., and Peterson, W. J. (1998), SAP Implementation: Strategies and Results, Conference Board, New York.
Criswell, H. (1998), "Human System: The People and Politics of CIM," paper presented at the AUTOFACT Conference (Chicago).
Damodaran, L. (1996), "User Involvement in the System Design Process," Behavior and Information Technology, Vol. 15, No. 6, pp. 363–377.
D'Aveni, R. A., and Gunther, R. (1994), Hypercompetition: Managing the Dynamics of Strategic Maneuvering, Free Press, New York.
Davenport, T. (1994), "Saving IT's Soul: Human-Centered Information Management," Harvard Business Review, Vol. 72, March–April, pp. 119–131.
Demel, G. (1991), "Influences of Culture on the Performance of Manufacturing Plants of American Multinational Corporations in Other Countries: A Macroergonomics Analysis," Master's Thesis, Institute of Safety and Systems Management, University of Southern California, Los Angeles.
Demel, G., and Meshkati, N. (1989), "Requisite Variety: A Concept to Analyze the Effects of Cultural Context for Technology Transfer," in Proceedings of the 33rd Annual Meeting of the Human Factors Society, Human Factors Society, Santa Monica, CA, pp. 765–769.
Dertouzos, M., Lester, R., Solow, R., and The MIT Commission on Industrial Productivity (1988), Made in America: Regaining the Productive Edge, MIT Press, Cambridge, MA.
DeSanctis, G., and Poole, M. (1994), "Capturing the Complexity in Advanced Technology Use: Adaptive Structuration Theory," Organization Science, Vol. 5, No. 2, pp. 121–147.
Driver, M. J., Brousseau, K. R., and Hunsaker, P. L. (1993), The Dynamic Decision Maker: Five Decision Styles for Executive and Business Success, Jossey-Bass, San Francisco.
Drucker, P. F. (1988), "The Coming of the New Organization," Harvard Business Review, Vol. 66, January–February, pp. 45–53.
Eason, K. (1988), Information Technology and Organizational Change, Taylor & Francis, London.
Economist, The (1990), "Detroit's Big Three: Are America's Carmakers Headed for the Junkyard?" April 14.
Economist, The (1992), "Cogito, Ergo Something" (computers and human intelligence: a survey of artificial intelligence), March 14.
Economist, The (1999), "Business and the Internet: The Net Imperative," June 26.
Electric Power Research Institute (EPRI) (1984), Commercial Aviation Experience of Value to the Nuclear Industry, prepared by Los Alamos Technical Associates, Inc., EPRI NP-3364, January, EPRI, Palo Alto, CA.
Emery, F. (1993), "The Nine-Step Model," in The Social Engagement of Social Science: A Tavistock Anthology, E. Trist and H. Murray, Eds., University of Pennsylvania Press, Philadelphia.
Ettlie, J. (1986), "Implementing Manufacturing Technologies: Lessons from Experience," in Managing Technological Innovation: Organizational Strategies for Implementing Advanced Manufacturing Technologies, D. D. Davis, Ed., Jossey-Bass, San Francisco.
Federal Aviation Administration (FAA) (1996), The Interfaces Between Flightcrews and Modern Flight Deck Systems, FAA, Washington, DC.
Flowers, S. (1997), "Information Systems Failure: Identifying the Critical Failure Factors," Failure and Lessons Learned in Information Technology Management, Vol. 1, pp. 19–29.
Gibbs, W. (1994), "Software's Chronic Crisis," Scientific American, March, pp. 72–81.
Giddens, A. (1994), The Constitution of Society: Outline of the Theory of Structuration, University of California Press, Berkeley.
Goldman, S. L., Nagel, R. N., and Preiss, K. (1995), Agile Competitors and Virtual Organizations, Van Nostrand Reinhold, New York.
Graeber, R. C. (1994), "Integrating Human Factors Knowledge into Automated Flight Deck Design," invited presentation at the International Civil Aviation Organization (ICAO) Flight Safety and Human Factors Seminar (Amsterdam, May 18).
Grayson, C. (1990), "Strategic Leadership," paper presented at the Conference on Technology and the Future of Work (Stanford, CA, March 28–30).
Hall, G., Rosenthal, J., and Wade, J. (1993), "How to Make Reengineering Really Work," Harvard Business Review, Vol. 71, November–December, pp. 119–133.
Helmreich, R. L. (1994), "Anatomy of a System Accident: Avianca Flight 052," International Journal of Aviation Psychology, Vol. 4, No. 3, pp. 265–284.
Helmreich, R. L., and Sherman, P. (1994), "Flightcrew Perspective on Automation: A Cross-Cultural Perspective," in Report of the Seventh ICAO Flight Safety and Human Factors Regional Seminar, International Civil Aviation Organization (ICAO), Montreal, pp. 442–453.
Helmreich, R. L., and Merritt, A. (1998), Culture at Work in Aviation and Medicine: National, Organizational, and Professional Influences, Ashgate, Brookfield, VT.
Hofstede, G. (1980a), Culture's Consequences, Sage, Beverly Hills, CA.
Hofstede, G. (1980b), "Motivation, Leadership, and Organization: Do American Theories Apply Abroad?" Organizational Dynamics, Vol. 9, Summer, pp. 42–63.
Iansiti, M. (1999), "Technology Integration: Matching Technology and Context," in The Technology Management Handbook, R. C. Dorf, Ed., CRC Press, Boca Raton, FL.
Information Week (1997), "California Targets March for Electric Utility System," December 30.
Information Week (1998), "Andersen Sued on R/3," July 6, p. 136.
International Atomic Energy Agency (IAEA) (1991), Safety Culture, Safety Series No. 75-INSAG-4, IAEA, Vienna.
Jackson, J. M. (1960), "Structural Characteristics of Norms," in The Dynamics of Instructional Groups: Socio-Psychological Aspects of Teaching and Learning, M. B. Henry, Ed., University of Chicago Press, Chicago.
Jaikumar, R. (1986), "Post-Industrial Manufacturing," Harvard Business Review, November–December.
Jambekar, A. B., and Nelson, P. A. (1996), "Barriers to Implementation of a Structure for Managing Technology," in Handbook of Technology Management, G. H. Gaynor, Ed., McGraw-Hill, New York.
Jentsch, F., Barnett, J., Bowers, C. A., and Salas, E. (1999), "Who Is Flying This Plane Anyway? What Mishaps Tell Us about Crew Member Role Assignment and Air Crew Situation Awareness," Human Factors, Vol. 41, No. 1, pp. 1–14.
Johnson, B., and Rice, R. E. (1987), Managing Organizational Innovation, Columbia University Press, New York.
Johnson, H. T., and Kaplan, R. S. (1987), Relevance Lost: The Rise and Fall of Management Accounting, Harvard Business School Press, Boston.
Johnston, A. N. (1993), "CRM: Cross-Cultural Perspectives," in Cockpit Resource Management, E. L. Wiener, B. G. Kanki, and R. L. Helmreich, Eds., Academic Press, San Diego, pp. 367–397.
Kalb, B. (1987), "Automation's Myth: Is CIM Threatening Today's Management?" Automotive Industries, December.
Kahneman, D., Slovic, P., and Tversky, A. (1982), Judgment under Uncertainty: Heuristics and Biases, Cambridge University Press, New York.
Kanz, J., and Lam, D. (1996), "Technology, Strategy, and Competitiveness," in Handbook of Technology Management, G. H. Gaynor, Ed., McGraw-Hill, New York.
Kelley, M. R. (1996), "Participative Bureaucracy and Productivity in the Machined Products Sector," Industrial Relations, Vol. 35, No. 3, pp. 374–398.
Kensing, F., and Munk-Madsen, A. (1993), "PD: Structure in the Toolbox," Communications of the ACM, Vol. 36, No. 4, pp. 78–85.
Kotter, J. P., and Heskett, J. L. (1992), Corporate Culture and Performance, Free Press, New York.
Lammers, C. J., and Hickson, D. J. (1979), "A Cross-national and Cross-institutional Typology of Organizations," in Organizations, C. J. Lammers and D. J. Hickson, Eds., Routledge & Kegan Paul, London, pp. 420–434.
Landis, D., Triandis, H. C., and Adamopoulos, J. (1978), "Habit and Behavioral Intentions as Predictors of Social Behavior," Journal of Social Psychology, Vol. 106, pp. 227–237.
Lawrence, P. R., and Lorsch, J. W. (1967), Organization and Environment: Managing Differentiation and Integration, Graduate School of Business Administration, Harvard University, Boston.
Leonard-Barton, D. (1995), Wellsprings of Knowledge: Building and Sustaining the Sources of Innovation, Harvard Business School Press, Boston.
Leonard-Barton, D. (1988), "Implementation as Mutual Adaptation of Technology and Organization," Research Policy, Vol. 17, No. 5, pp. 251–267.
Leonard, D., and Rayport, J. (1997), "Spark Innovation through Empathic Design," Harvard Business Review, Vol. 75, November–December, pp. 103–113.
Long, R. J. (1989), "Human Issues in New Office Technology," in Computers in the Human Context: Information Technology, Productivity, and People, T. Forester, Ed., MIT Press, Cambridge, MA.
Los Angeles Times (1997), "Snarled Child Support Computer Project Dies," November 21, p. A1.
Los Angeles Times (1999), "$17 Million Later, Tuttle Scraps Computer Overhaul," p. B-1.
MacDuffie, J. P. (1995), "Human Resource Bundles and Manufacturing Performance: Organizational Logic and Flexible Production Systems in the World Auto Industry," Industrial and Labor Relations Review, Vol. 48, No. 2, pp. 199–221.
Majchrzak, A. (1988), The Human Side of Factory Automation, Jossey-Bass, San Francisco.
Majchrzak, A. (1997), "What to Do When You Can't Have It All: Toward a Theory of Sociotechnical Dependencies," Human Relations, Vol. 50, No. 5, pp. 535–565.
Majchrzak, A., and Gasser, L. (2000), "TOP-MODELER: Supporting Complex Strategic and Operational Decisionmaking," Information, Knowledge, Systems Management, Vol. 2, No. 1.
Majchrzak, A., and Finley, L. (1995), "A Practical Theory and Tool for Specifying Sociotechnical Requirements to Achieve Organizational Effectiveness," in The Symbiosis of Work and Technology, J. Benders, J. de Haan, and D. Bennett, Eds., Taylor & Francis, London.
Majchrzak, A., and Wang, Q. (1996), "Breaking the Functional Mindset in Process Organizations," Harvard Business Review, Vol. 74, No. 5, September–October, pp. 92–99.
Majchrzak, A., Rice, R. E., Malhotra, A., King, N., and Ba, S. (2000), "Technology Adaptation: The Case of a Computer-Supported Inter-Organizational Virtual Team," Management Information Systems Quarterly, Vol. 24, No. 4, pp. 569–600.
Manufacturing Studies Board (MSB), Committee on the Effective Implementation of Advanced Manufacturing Technology, National Research Council, National Academy of Sciences (1988), Human Resource Practice for Implementing Advanced Manufacturing Technology, National Academy Press, Washington, DC.
Martino, J. P. (1983), Technological Forecasting for Decision Making, 2nd Ed., North-Holland, New York.
McDermott, R. (1999), "Why Information Technology Inspired but Cannot Deliver Knowledge Management," California Management Review, Vol. 41, No. 4, pp. 103–117.
Meshkati, N. (1996), "Organizational and Safety Factors in Automated Oil and Gas Pipeline Systems," in Automation and Human Performance: Theory and Applications, R. Parasuraman and M. Mouloua, Eds., Erlbaum, Hillsdale, NJ, pp. 427–446.
Meshkati, N., Buller, B. J., and Azadeh, M. A. (1994), Integration of Workstation, Job, and Team Structure Design in the Control Rooms of Nuclear Power Plants: Experimental and Simulation Studies of Operators' Decision Styles and Crew Composition While Using Ecological and Traditional User Interfaces, Vol. 1, grant report prepared for the U.S. Nuclear Regulatory Commission (Grant No. NRC-04-91-102), University of Southern California, Los Angeles.
Mitroff, I. I., and Kilmann, R. H. (1984), Corporate Tragedies: Product Tampering, Sabotage, and Other Catastrophes, Praeger, New York.
974
PERFORMANCE IMPROVEMENT MANAGEMENT
Teamwork represents one form of work organization that can have large positive and/or negative effects on the different elements of the work system and on human outcomes such as performance, attitudes, well-being, and health. Conceiving of work as a social system and advocating the joint optimization of its technical and social components as necessary for organizational effectiveness, sociotechnical theory provides several arguments and examples supporting teamwork. The germinal study, which gave the essential evidence for the development of the sociotechnical field, was a team experience observed in the English mining industry during the 1950s. In this new form of work organization, a set of relatively autonomous work groups performed a complete collection of tasks, interchanging roles and shifts and regulating their affairs with a minimum of supervision. This experience was considered a way of recovering group cohesion and self-regulation concomitantly with a higher level of mechanization (Trist 1981). The group had the power to participate in decisions concerning work arrangements, and these changes resulted in increased cooperation between task groups, personal commitment from the participants, and reductions in absenteeism and the number of accidents.

A GM assembly plant located in Fremont, California, was considered until 1982, the year it ended operations, the worst plant in the GM system and in the auto industry as a whole. For years, the plant presented dismal levels of quality, low productivity, and prevalent problems of absenteeism and turnover. The plant was reopened two years later under a new joint venture with Toyota. Changes focusing primarily on the relationship between workers and management, the organizational structure, and the widespread use of teamwork transformed the plant into one of the most productive in the GM system, with high levels of employee satisfaction and very low levels of absenteeism (Levine 1995).

Examples like those above illustrate the advantages of using teams to perform a variety of organizational assignments and tasks. Supporters have promoted teamwork on many grounds, highlighting its potential for increased productivity, quality, job satisfaction, organizational commitment, and acceptance of change, among others. Teamwork is the preferred strategy for increasing employee involvement in the workplace. Indeed, terms such as participatory management, employee involvement, and participation are frequently equated with teamwork. According to Lawler (1986), employee involvement affects five major determinants of organizational effectiveness: motivation, satisfaction, acceptance of change, problem solving, and communication. Lawler states that employee involvement can create a connection between a particular level of performance and the perception of a favorable consequence. Involvement in teams can provide rewards beyond those allocated by the organization, such as money and promotion: it can supply intrinsic rewards, that is, accomplishment, personal growth, and so on. Similarly, Lawler argues that allowing people to participate in the definition of the procedures and methods utilized in their daily activities is an effective way to improve those methods and can motivate employees to produce better-quality work. Teams can also ease the implementation of organizational changes, avoid the "not-invented-here" perception, and create commitment to the implementation of change. It has been argued that in many circumstances the effort of a group of people generates effective solutions that would not be produced by the same individuals working independently. This superior outcome would result not only from the greater pooled knowledge available to the group members but also from the interaction process among them, from the mutual influence on each other's thinking. This process has been termed collective intelligence (Wechsler 1971) and refers to a mode of cooperative thinking that goes beyond simple collective behavior. Finally, system reliability is also assumed to improve through employee participation, since participation increases the variety of human knowledge available and enables workers to understand their role in making the system more efficient and safer.

While there are a number of positive elements associated with teamwork in general, there are also some potential negative elements. Furthermore, these elements are not constant but rather depend on the type of team.
Team types and the opportunities each presents (Hackman 1990):
- Top management teams: self-designing; influence over key organizational conditions
- Task forces: clear purpose and deadline
- Professional support groups: using and honing professional expertise
- Performing groups: play that is fueled by competition and/or audiences
- Human service teams: inherent significance of helping people
- Customer service teams: bridging between parent organization and its customers
- Production teams: continuity of work; ability to hone both the team design and the product

Adapted from J. R. Hackman, "Creating More Effective Work Groups in Organizations," in Groups That Work (And Those That Don't), J. R. Hackman, Ed., copyright 1990 Jossey-Bass, Inc., a subsidiary of John Wiley & Sons, Inc. Reprinted by permission.
Decisions at the strategic level depend on factors such as the organizational culture, the nature of the team's tasks, the skills and knowledge of the team members, and the training received and to be received. Such a decision has important implications for management and employee involvement, which will be addressed in Section 6. Decisions at the tactical level, that is, the specifics of the group design and mechanics, are usually easier to make and are negotiated with team members. These include matters of size and composition/membership, work-area coverage or tasks, and coordination mechanisms. For many teams, the optimal size is difficult to determine. In fact, a variety of factors may affect team size. Obviously, the primary factor is the size and scope of the required project or set of tasks. However, several other factors can influence team size (it should be noted that not all factors are necessarily applicable to all types of teams). Factors affecting team size include:
- Amount of work to be done
- Amount of time available to do the work
- Amount of work any one person can do in the available time
- Differentiation of tasks to be performed in sequence
- Number of integrated tasks required
- Balancing of task assignments
- Cycle time required
- Variety of skills, competences, and knowledge bases required
- Need for reserve team members
- Technological capabilities
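The first three factors lend themselves to a simple capacity calculation. The following is a minimal sketch, not from the handbook; the function name, the reserve-fraction parameter, and the numbers in the example are illustrative assumptions:

```python
import math

def minimum_team_size(total_work_hours, per_person_hours, reserve_fraction=0.0):
    """Smallest team that can complete the work in the available time.

    total_work_hours: amount of work to be done
    per_person_hours: amount of work any one person can do in the
                      available time
    reserve_fraction: optional allowance for reserve team members
    """
    core = math.ceil(total_work_hours / per_person_hours)
    return core + math.ceil(core * reserve_fraction)

# 400 hours of work, 80 hours available per person, 10% reserve:
print(minimum_team_size(400, 80, reserve_fraction=0.10))  # -> 6
```

The remaining factors (task differentiation, cycle time, skill variety, technology) act as constraints that can push such an estimate up or down, which is why the text treats team size as a judgment call rather than a formula.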
Finally, at the third level, decisions regarding team performance and duration should be negotiated and made prior to engaging in teamwork. Section 5 provides a comprehensive, structured list of variables affecting and/or defining team performance. Such characteristics can be used to develop a system for measuring and monitoring team performance over time. As mentioned above, teams are widely used in today's organizational environment, with increased global competition and a more demanding workforce (Katzenbach and Smith 1993). The next section describes two important current applications of teamwork.
3. QUALITY IMPROVEMENT AND PARTICIPATORY ERGONOMICS TEAMS
Teamwork has been the backbone of quality improvement. More recently, teamwork h
as been used in the context of participatory ergonomics (PE). However, while QI
teams primarily focus on activities related to identifying, designing, and imple
menting improvements in both work processes and products / services, PE teams pr
imarily focus on improvement of working conditions. The following sections revie
w the state of the art for both applications of teamwork.
3.1. Teams in the Context of Quality Improvement
Employee participation, particularly through teamwork, is one of the essential foundations of QI. Unlike many other management approaches, which present teamwork effectiveness as contingent on several conditions, QI supports the use of teams without any specific provisions (Dean and Bowen 1994). A variety of quality-related teams can exist within organizations. Kano (1993) categorizes teams into three types: (1) functional teams, which are ongoing voluntary problem-solving groups made up of workers in the same workplace; (2) quality building-in-process teams, in which a set of related jobs are shared, with the goal of building quality into a product during the assembly process; and (3) task/project teams, which are ad hoc groups composed of staff or line managers, who disband once the task is completed. The quality circle (QC) is one of the most widely discussed and adopted forms of teamwork (Cotton 1993). QCs are project/problem-solving teams that have been defined as small groups of volunteers from the same work area who meet regularly to identify, analyze, and solve quality-related problems in their area of responsibility (Wayne et al. 1986). These groups usually consist of 8 to 10 employees who meet on a regular basis, such as one hour per week. In many QCs, the supervisor is designated as the circle leader. Formal training in problem-solving techniques is often a part of circle meetings. The claimed benefits of QCs include quality and cost awareness; reduction in conflict and improved communications; higher morale, motivation, and productivity; and cost savings (Head et al. 1986). The effect of this type of teamwork on employee attitudes is assumed to be the primary reason for their success (Head et al. 1986). Marks et al. (1986) propose that QC participation will lead to enriched jobs, with employees experien
cing work as more meaningful and obtaining greater knowledge of results, while attitudes such as job satisfaction and organizational commitment are less affected. It has been suggested that, in order to obtain greater benefits, teamwork should be part of a more comprehensi
ve program (Head et al. 1986). Quality circles are a parallel structure in the organization and may not have the authority or resources to effect change (Lawler and Mohrman 1987). QCs may not be related to the day-to-day work done in organizations, and nonparticipants may feel left out, resulting in a negative backlash (Lawler 1986). Demands may be increased for both participants and nonparticipants. For participants, there are the additional duties of attending team meetings and training sessions, while nonparticipants may occasionally have to fill in for participants who are away from their jobs. The main drawback of QCs, according to Lawler and Mohrman (1987), is that they are not well integrated into the organization in terms of management philosophy, technical and organizational redesign, personnel policies, and training. Quality improvement (QI) uses quality teams that are similar to QCs in that they address specific problem areas, employ statistical tools, provide group process and problem-solving training to team members, and use team facilitators. Both QCs and QI teams use the PDCA cycle (Plan, Do, Check, Act) and the QC Story (i.e., the seven-step problem-solving method) as their primary
problem-solving methodologies. However, there are differences in the context of quality teams under QI that may result in better integration within the organization. Carr and Littman (1993) identify several differences between QI teams and QCs. QCs are often limited to employees and front-line supervisors, while QI teams include members from management as well. Involving management in quality teams can reduce management resistance and fear. QCs in general have a more limited focus than QI teams, in both the issues addressed and the composition of the teams. While QCs generally include only the members of a specific work area, QI teams may be cross-functional, including members from different units within the organization. Teams such as these can deal with broader organizational issues and can implement solutions that are more likely to be accepted and effective, since more stakeholders are involved. Teams under QI thus have the potential for a high degree of integration into the organization through greater involvement of management and the existence of more broadly based teams.
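Both kinds of teams organize their work around the PDCA cycle mentioned above. A minimal sketch of that loop's structure follows; the function and its toy callables are our own illustration of the cycle's ordering, not part of any QI standard:

```python
def pdca(plan, do, check, act, state, rounds=1):
    """Run Plan-Do-Check-Act iterations over a process 'state'.

    plan/do/check/act are caller-supplied callables; this loop fixes
    only their ordering, which is all the cycle itself prescribes.
    """
    for _ in range(rounds):
        target = plan(state)            # Plan: choose an improvement target
        result = do(state, target)      # Do: trial the change
        ok = check(result, target)      # Check: compare outcome to target
        state = act(state, result, ok)  # Act: standardize or roll back
    return state

# Toy usage: drive a defect count down by one per round.
final = pdca(
    plan=lambda s: s - 1,                 # aim one defect lower
    do=lambda s, t: t,                    # trial achieves the target
    check=lambda r, t: r <= t,            # verify against the target
    act=lambda s, r, ok: r if ok else s,  # keep the gain only if verified
    state=10, rounds=3)
print(final)  # -> 7
```

The point of the sketch is simply that Act feeds the standardized (or rolled-back) state into the next Plan, which is what makes the cycle iterative rather than a one-shot fix.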
3.2. Participatory Ergonomics
Perhaps one of the fastest-growing applications of teamwork has been in the field of ergonomics. The use of teams to evaluate, design, and implement jobs and workstations is relatively recent but has met widespread acceptance. A clear indication of this trend is the growing number of submissions on the topic at most national and international ergonomics conferences and the inclusion of employee participation as one of the basic requirements in the proposed OSHA Ergonomics Standard (OSHA 1999). Participatory ergonomics can be understood as a spinoff of the activity of quality-related teams, focused on working conditions. Noro and Imada created the term participatory ergonomics (PE) in 1984, with the main assumption that ergonomics is bounded by the degree to which people are involved in conducting this technology. According to Imada (1991), PE requires users (the real beneficiaries of ergonomics) to be directly involved in developing and implementing ergonomics. Wilson (1995) more recently proposed a more comprehensive definition of PE as "the involvement of people in planning and controlling a significant amount of their own work activities, with sufficient knowledge and power to influence both processes and outcomes in order to achieve desirable goals."

Imada (1991) points out three major arguments in support of worker involvement in ergonomics. First, ergonomics being an intuitive science, which in many cases simply organizes the knowledge the workers are already using, it can validate the workers' accumulated experience. Second, people are more likely to support and adopt solutions they feel responsible for. Involving users/workers in the ergonomic process has the potential to transform them into makers and supporters of the process rather than passive recipients. Finally, developing and implementing technology enables the workers to modify and correct occurring problems continuously. Participatory ergonomics can be an effective tool for disseminating ergonomic information, allowing for the utilization of this knowledge on a company-wide basis. It is evident that professional ergonomists will not be available to deal with all the situations existing in an entire organization and that there is a need to motivate, train, and provide resources to workers to analyze and intervene in their work settings. Participatory ergonomics sees end users' contributions as indispensable elements of its scientific methodology. It stresses the validity of simple tools and workers' experience in problem solution and denies that these characteristics result in nonscientific outcomes. Employees or end users are in most situations in the best position to identify the strengths and weaknesses of work situations. Their involvement in the analysis and redesign of their workplace can lead to better designs as well as increase their and the company's knowledge of the process. This approach stresses the relevance of "small wins" (Weick 1984), a series of concrete, complete, implemented contributions that can construct a pattern of progress. The nature of these small victories allows the workers to see the next step, the next improvement, and it constitutes a gradual, involving movement towards organizational change.

Participatory ergonomics is seen occasionally either as a method to design and implement specific workplace changes or as a work organization method in place regardless of the presence of change. Wilson and Haines (1997) argue that the participatory process is in itself more important than the focus of that participation, since a flexible and robust process may support the implementation of any change. Participatory ergonomics emphasizes self-control and self-determination and provides workers with more control over their working conditions. This approach also offers potential for reduced job strain through increased social interaction and support. In fact, worker involvement has been shown to be the most common feature among effective stress-management programs (Karasek 1992). Participatory ergonomics has been implemented through a variety of different organizational approaches and team designs, and no clear unifying model has been proposed or seems likely to be achieved (Vink et al. 1992). Liker et al. (1989) describe six different models of participation based on either direct or representative participation and on different levels of worker input. Wilson and
oups as well as their evolution and ending and the transfer of learning from the
m. Management support has been widely recognized as the fundamental condition for the implementation of teamwork initiatives (Carr and Littman 1993; Hyman and Mason 1995; Kochan et al. 1984). Without continued support and commitment from management, team efforts are doomed to failure. The role of top management is to initiate the teamwork process: setting up policy and guidelines, establishing the infrastructure for team functioning, providing resources, promoting participation, guidance, and cooperation, and assigning a meaningful and feasible task. Tang et al. (1991) report a relationship between upper-management attendance and team members' participation, and between middle-management attendance and teams' problem-solving activities. In a study of 154 QC applications from 1978 to 1988, Park and Golembiewski (1991) found middle-management attitude to be the strongest predictor of team success. Employees who are involved in team projects that receive low levels of management support may become frustrated by the lack of resources and cooperation. This may in turn result in negative attitudes, not only towards the project itself but also towards the job and organization.
Hackman (1990) points out that an unclear or insufficient authority or mandate, which relates to the lack of support from top management, is a critical impediment to team achievement. Hackman indicates some consequential issues for teams with regard to group authority. First, the team needs the authority to manage its own affairs. Second, teams need a stable authority structure. Finally, the timing and substance of interventions by authoritative figures matter: such interventions can be most effective at the beginning of team development and can be particularly harmful if they bear on concerns that the group sees as its own. For ad hoc teams in particular, the clarity and importance of the team charge also play an important role in the definition and achievement of success. The team charge should be specific and relevant from both the participants' and the organization's perspectives. Time limits are powerful organizing factors that shape team performance, particularly for ad hoc teams (Gersick and Davis-Sacks 1990). The available time guides the pace of work and the selection of strategies employed by teams. The lack of clear timelines can cause problems for teams, making adopted strategies inadequate and negatively affecting members' motivation. Time landmarks can in some situations be provided to the team through other avenues, such as a training delivery schedule (Taveira 1996).

The careful definition of team composition is emphasized as an essential aspect of team success in the literature (Carr and Littman 1993; Larson and LaFasto 1989; Scholtes 1988; Kanter 1983). These authors indicate that the absence of members with key expertise or critical organizational linkages can be a sticking point for teams. Both technical and organizational aspects need to be observed in team composition. The team leader's role is essential as an external linkage between the group and upper management, as a promoter of involvement, and as a coordinator and facilitator of communication inside the group (Taveira 1996; Scholtes 1988). Another facet of the team leader's position, serving as a role model, is highlighted by Bolman and Deal's (1992) symbolic tenet: "example rather than command holds a team together." The diligence of the leader in coordinating and supporting the team can motivate members to volunteer for work assignments and ease the distribution of assignments.

Training is considered one of the most essential resources for team success. It can provide fundamental principles and procedures for the team's functioning. Training can impart ground rules, guidelines for internal and external communication, and favored ways of making decisions. Training sessions can provide opportunities to discuss and learn with other teams and be conducive to a perception of support and predictability about oncoming tasks and group development. Training can introduce the team to a number of procedures and behaviors that enhance communication and involvement (Taveira 1996). Training in problem solving, data collection and analysis, and group decision making is necessary for employees to contribute fully to the group process. Training is seen as fundamental for giving the team the structure for a sound internal process (Hackman 1990). In the specific case of "one-project" teams, where a nonroutine task is undertaken by a new mix of people, training may be critical. Since such groups are unlikely to have established routines for coordinating members' efforts or for determining how work and influence will be distributed among them, training may provide vital guidelines (Gersick and Davis-Sacks 1990).

Moreland and Levine (1992) define commitment as an emotional bond between a person and a group. These authors point out two prevalent theories of commitment: (1) people are committed to a group insofar as it generates more rewards and fewer costs than other groups to which they already belong or that they could join; (2) commitment depends primarily on how important a group is to someone's social identity. This second theory implies that a need for self-enhancement leads people to feel more committed to groups that seem successful. A logical extension is that early success increases the members' commitment to the group. Correspondingly, Hackman (1990) asserts that groups that begin well and achieve some early wins often trigger a self-sustaining upward trend in performance. Hackman delineates a two-factor hypothesis in this regard: the first factor is the quality of the group's initial design, and the second is the occurrence of positive or negative events that trigger the spiral.

Consensus is frequently referred to as the preferred decision-making strategy for teams. Shared definitions of consensus and clear procedures for putting this mode of decision making in place are needed. Consensus is defined by Scholtes (1988) as a process of finding a proposal acceptable enough that all members can support it and no member opposes it. Consensus requires time and active participation from team members (Carr and Littman 1993). It demands mutual respect (listening), open-mindedness, and effort at conflict resolution. Amason et al. (1995) characterize the management of conflict as "the crux of team effectiveness." They assert that effective teams manage conflict in a way that contributes to their objectives. Less-effective teams either avoid conflict, which leads to compromised decisions, or let it seriously disrupt the group process. The authors divide conflict into two types: cognitive and affective. Cognitive conflict focuses on the substance of the issues under discussion; examination, comparison, and conciliation of opinions characterize it. Cognitive conflict is useful because it invites team members to consider their perspectives from different angles and to question underlying assumptions. It can improve members' understanding of and commitment to
the team's objectives. Affective conflict, by contrast, focuses on personal, emotionally charged disagreements rather than on the substance of the issues.
Team performance can be approached in many ways. The following model (developed specifically for QI teams) adopts a systems perspective on team performance, classifying the various factors affecting or related to performance into three broad categories derived from the structure-process-outcome paradigm espoused by Donabedian (1992). The model is displayed in Figure 1 and discussed in detail below. While it was developed in the context of quality improvement, many or all of its elements apply to other teams as well.
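Because Section 5 walks through each of these categories in turn, it can help to see the model's skeleton laid out as data. The sketch below is ours, not the authors'; it simply transcribes the variable names shown in Figure 1 into a dictionary that could seed a team-assessment checklist:

```python
# Structure-process-outcome model of team effectiveness (after
# Donabedian 1992), laid out as plain dictionaries so each variable
# can be scored during a team assessment. The names follow Figure 1;
# the data-structure layout itself is an illustrative assumption.
TEAM_EFFECTIVENESS_MODEL = {
    "structure": {
        "organization characteristics": [
            "top management support", "middle management support",
            "sufficiency of resources"],
        "team characteristics": [
            "team heterogeneity", "team expertise", "team authority",
            "preference for team work", "familiarity", "team size",
            "training and experience"],
        "task characteristics": [
            "task complexity", "tension for change",
            "clear direction and boundaries"],
    },
    "process": {
        "task issues": [
            "meetings management", "use of tools",
            "involvement of key affected personnel",
            "external expertise", "solution implementation"],
        "relationship issues": [
            "harmony", "potency", "participatory decision making",
            "workload sharing", "commitment", "communication",
            "rewards and recognition"],
        "leadership": [
            "team leader characteristics",
            "team leadership: consideration",
            "team leadership: initiating structure"],
    },
    "outcome": [
        "benefits to individual team members",
        "cross-functional cooperation", "team efficiency",
        "team effectiveness: qualitative measures",
        "team effectiveness: quantitative measures"],
}

def all_variables(model=TEAM_EFFECTIVENESS_MODEL):
    """Flatten the model into one list of assessable variables."""
    out = []
    for block in model.values():
        if isinstance(block, dict):
            for items in block.values():
                out.extend(items)
        else:
            out.extend(block)
    return out

print(len(all_variables()))  # -> 33
```

Flattening the model this way is one route from the figure to a scoring instrument: each of the variables becomes an item to be rated for a given team.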
5.1. Structure Variables
Structure variables are the contextual parameters that may impact team processes and outcomes. We identified three dimensions within which the different structure variables can be classified: organizational characteristics, team characteristics, and task characteristics.
5.1.1. Organizational Characteristics
Researchers have discussed the impact of several organizational variables on project outcomes. Three factors stand out: top-management support, middle-management support, and sufficiency of resources.
Figure 1 Model of Team Effectiveness.
STRUCTURE
I. Organization Characteristics: (1) Top Management Support; (2) Middle Management Support; (3) Sufficiency of Resources
II. Team Characteristics: (1) Team Heterogeneity; (2) Team Expertise; (3) Team Authority; (4) Preference for Team Work; (5) Familiarity; (6) Team Size; (7) Training and Experience
III. Task Characteristics: (1) Task Complexity; (2) Tension for Change; (3) Clear Direction and Boundaries
PROCESS
I. Task Issues: (1) Meetings Management; (2) Use of Tools; (3) Involvement of Key Affected Personnel; (4) External Expertise; (5) Solution Implementation
II. Relationship Issues: (1) Harmony; (2) Potency; (3) Participatory Decision Making; (4) Workload Sharing; (5) Commitment; (6) Communication; (7) Rewards and Recognition
III. Leadership: (1) Team Leader Characteristics; (2) Team Leadership: Consideration; (3) Team Leadership: Initiating Structure
OUTCOME
(1) Benefits to Individual Team Members; (2) Cross-Functional Cooperation; (3) Team Efficiency; (4) Team Effectiveness: Qualitative Measures; (5) Team Effectiveness: Quantitative Measures

Top-management support of project teams has been stressed in terms of the extent to which management encourages the team, provides constructive feedback, actively champions the project, regularly reviews the team's progress, and rewards the team for its performance (Waguespack 1994; Mosel and Shamp 1993; McGrath 1964; Smith and Hukill 1994; Van de Ven 1980; CHSRA 1995; Rollins 1994). Middle-management support is deemed
important for successful QI team performance because team members need to be given time off from their routine jobs by their supervisors to work on team issues (Gladstein 1984; Davis 1993). Team members have often reported difficulty in obtaining permission from their supervisors to attend team meetings (CHSRA 1995). Indeed, lack of encouragement and recognition and inadequate freedom provided by supervisors have been shown to contribute to delays in the successful completion of projects (Early and Godfrey 1995). Sufficiency of resources, although an aspect of top-management support, has been so consistently linked with project outcomes that it warrants separate discussion. Availability of adequate training, access to data resources, ongoing consulting on issues related to data collection, analysis, and presentation, adequate allocation of finances, and availability of administrative support are some of the resources cited as important in studies on team effectiveness (Mosel and Shamp 1993; Levin 1992; Smith and Hukill 1994; Early and Godfrey 1995; Gustafson et al. 1992; Nieva et al. 1978; CHSRA 1995).
5.1.2. Team Characteristics
Team characteristics, or group composition, have received significant attention in studies on team effectiveness. We identified the following seven distinct aspects of team composition that are likely to impact QI team performance: team heterogeneity, team expertise, team authority, preference for teamwork, familiarity, team size, and training and experience.
Teams should include members who are knowledgeable about the process under study, especially the process owners, who have intimate knowledge of the process. In addition, successful teams also include members with prior QI teamwork experience (
Rollins et al. 1994; Flowers et al. 1992; Johnson and Nash 1993; CHSRA 1995). Team authority assesses the relative power of the team within the organization that would facilitate the completion of the project efficiently and successfully. For instance, Gustafson et al. (1992) suggest that the reputation of the team leader and other team members among the affected parties significantly impacts the implementation success of solutions. Involvement of people in a position of authority on the team, such as department heads, and inclusion of opinion leaders, i.e., people whose opinions are well respected by the affected parties, help in overcoming resistance to change and ease the process of solution implementation (CHSRA 1995; Davis 1993; Nieva et al. 1978; Rollins et al. 1994). Preference for teamwork is another important element of team composition. Campion et al. (1993) cite research showing that employees who prefer to work in teams are likely to be more satisfied and effective as members of a team. Familiarity among team members may lead to improved group dynamics and hence better team effectiveness: team members who are more familiar with each other may be more likely to work well together and to exhibit higher levels of team performance (Misterek 1995). Team size has been found to have an important effect on team performance (Morgan and Lassiter 1992). Although larger teams bring increased resources, which may lead to improved team effectiveness, they may also lead to difficulties in coordination and reduced involvement of team members (Campion et al. 1993; Morgan and Lassiter 1992). Campion et al. (1993) suggest that teams should be staffed with the smallest number of members needed to carry out the project. Quality-improvement training and experience of team members have also been shown to affect the outcome of QI projects significantly. Although this features as an aspect of overall team expertise, we consider it important enough in the context of QI project teams to discuss separately. Case studies of successful QI teams show that most of the members of the teams had either participated in other QI projects or at least received some form of prior QI training, such as familiarity with the common QI tools and group processes (CHSRA 1995).
5.1.3. Task Characteristics
Task characteristics are factors that are specific to the problem assigned to the project team. We classify the various task characteristics deemed important in previous research studies into three categories:

1. Task complexity has been studied at two levels: (a) as a measure of the complexity of the process being studied, e.g., the number of departments affected by the process (CHSRA 1995; Misterek 1995) or the difficulty of measuring the process quantitatively (Davis 1993; Juran 1994); and (b) as a measure of the complexity of the goals assigned to the team, e.g., the scope of the project and the number of goals to be accomplished (Juran 1994).

2. Tension for change assesses the importance, severity, and significance of the project. Greater tension for change leads to higher motivation for solving the problem (Juran 1994; Gustafson et al. 1992; Van de Ven 1980). In order to be successful, projects should (a) be selected based on data-driven evidence of the existence of the problem (Mosel and Shamp 1993; Rollins et al. 1994; CHSRA 1995); (b) focus on processes that are a cause of dissatisfaction among the process owners (Gustafson et al. 1992); and (c) be considered important areas for improvement by management (Mosel and Shamp 1993).

3. Clear directions and boundaries refer to the extent to which management provides the team with a clear mandate. The clarity with which management describes the problem, its requirements, and project goals and explains the available team resources and constraints has been discussed as directly affecting team processes and outcomes (Misterek 1995; Levin 1992; Gustafson et al. 1992; Fleishman and Zaccaro 1992).
5.2. Process Variables
Mosel and Shamp (1993) and Levin (1992) classify process variables into two core dimensions: (1) the task or project dimension, which consists of processes that are directly related to solving the assigned problem, such as use of QI tools, efficient planning of meetings, and solution generation and implementation; and (2) the relationship or socioemotional dimension, which deals with the dynamics and relationships among team members, such as communication and harmony. Since team leadership impacts both task and relationship issues, we consider it separately as a third dimension of process variables. We discuss these three dimensions of process variables next.
986
PERFORMANCE IMPROVEMENT MANAGEMENT
5.2.1. Task Issues
The following five task variables have been shown to impact team outcomes and other team processes:

1. Efficient meetings management has been shown to result in sustained member involvement and improved overall efficiency with which the team solves the problem (CHSRA 1995). In particular, the advantages of mutually establishing team norms up front (such as meeting times, frequency, and length), advance planning of meeting agendas, and assignment of responsibility to members for specific agenda items have been highlighted (Davis 1993; Juran 1994).

2. Quality-improvement tools aid the team at various stages of the project. Effective use of tools has been shown to help teams keep track of their activities, clarify understanding of the system, identify problems and solutions, maintain focus, and aid in decision making and data collection and analysis (Plsek 1995; Scholtes 1988; CHSRA 1995; Levin 1992).

3. Involvement of key personnel, especially those who are directly affected by the process being studied, significantly improves the chances of success of a QI project (Gustafson et al. 1992). For instance, involvement of process owners during the various stages of problem exploration, solution design, and implementation results in a better understanding of the problem by the team and leads to solutions that are more likely to be accepted and implemented smoothly (Rollins et al. 1994; CHSRA 1995; Van de Ven 1980).

4. External expertise refers to sources outside the organization that may be helpful to the team during the various stages of the project. For instance, comparison of current levels of performance with industry standards often helps in providing data-based evidence of the severity of the problem, thereby resulting in increased management support (CHSRA 1995). Networking with other organizations that have successfully solved similar problems and identifying benchmarks help teams develop successful solutions (Gustafson et al. 1992). Examples of other sources of external expertise that can help teams better understand the problem and design effective solutions include the literature, consultants, and clearinghouses (Rollins et al. 1994; CHSRA 1995).

5. Poor solution implementation not only may lead to significant delays in completion of a project (Early and Godfrey 1995) but may also result in the team's solutions failing to produce any substantial improvement (Gustafson et al. 1992). In order to implement its solutions successfully, the team needs to get buy-in from the process owners (Juran 1994; Rollins et al. 1994; Gustafson et al. 1992; Johnson and Nash 1993; CHSRA 1995). In order to evaluate and demonstrate the advantages of its solutions, the team needs to develop easy-to-measure process and outcome variables and must have in place a well-designed data-collection strategy (Juran 1994; CHSRA 1995). In addition, feedback from the process owners should be obtained to facilitate further improvement of the process (Gustafson et al. 1992).
5.2.2. Relationship Issues
The relationship-based variables that have been shown to impact a team's performance are as follows:

Team harmony refers to the ability of team members to manage conflict and work together as a cohesive unit. The extent to which team members cooperate with one another and work well together has been shown to affect team outcomes positively (Misterek 1995; Guzzo and Dickson 1996; Mosel and Shamp 1993).

Team potency has been defined as a team's collective belief that it can be effective (Guzzo and Dickson 1996). It is similar to Bandura's (1982) notion of self-efficacy. Campion et al. (1993) have demonstrated a positive relationship between team potency and outcomes such as team productivity, effectiveness, and team member satisfaction.

Participatory decision making (PDM) refers to involvement of all team members in important team decisions. Participation in decisions results in an increase in members' sense of responsibility and involvement in the team's task (Campion et al. 1993). A PDM style has been shown to be a common characteristic of successful QI teams (CHSRA 1995; Scholtes 1988).

Workload sharing, similar to PDM, ascertains the extent of balanced participation among members of a team. Teams where most of the members contribute equally to the work have been shown to be more productive and successful (CHSRA 1995; Campion et al. 1993; Scholtes 1988).

Commitment of team members to the team's goals is one of the driving forces behind effective teams (Waguespack 1994). Successful QI teams have reported member motivation and commitment to improve the process as being a critical factor in their success (CHSRA 1995).

Communication is a critical component of teamwork because it serves as the linking mechanism among the various processes of team functioning (Rosenstein 1994). For instance, Davis (1993)
such as reduction in length of stay, dollars saved, and reduction in error rate
(e.g., CHSRA 1995).
6. IMPACT OF TEAMS
Teamwork represents one form of work organization that can have large positive and/or negative effects on the different elements of the work system and on human outcomes, such as performance, attitudes, well-being, and health. Given the variety of team characteristics and organizational settings, it is likely that the impact of teamwork on the work system will be highly variable. Some teams may provide for positive characteristics, such as increased autonomy and more interesting tasks, whereas other teams may produce production pressures and tightened management control (Lawler 1986). One important issue in team design is the degree of authority and autonomy (Medsker and Campion 1997; Goodman et al. 1988). It is, therefore, important to examine the impact of teamwork on the task and organizational elements of the work system. Tasks performed by teams are typically of a different nature from tasks performed by individual employees. Understanding the physical and psychosocial characteristics of the tasks performed by the team and the members of the team is highly significant for ergonomists. Teams can provide opportunities for reducing the physical and psychosocial repetitiveness of tasks performed by individual employees. This is true only if employees have sufficient training on the different tasks and if rotation among tasks occurs. In some instances, the increased authority and autonomy provided to teams may allow employees to influence their work rhythms and production schedules. This may have a beneficial physical impact if adequate work-rest schedules are used. On the other hand, members of the team may work very hard at the beginning of the shift in order to rest at the end of the day. This overload at the beginning of the shift may have some physical health consequences, such as cumulative trauma disorders. A more balanced workload over the entire shift is preferred. In other instances, teamwork has been accompanied by tightened management control (Barker 1993) and electronic and peer surveillance (Sewell 1998). In conclusion, the impact of teamwork on work organization and ergonomics is largely undetermined and depends on a range of factors. However, teamwork can provide many opportunities to improve elements of the work system.
6.1. Impact on Management
The upper managerial levels of organizations have traditionally been targeted in efforts to sell teamwork. For these management segments, the benefits would come in improvements to the whole organization's success and the possibility of spending more time working at the strategic level once daily decisions can be undertaken by the employee teams. However, one group whose needs are frequently overlooked when implementing employee involvement programs is middle managers and supervisors. Because supervisors are a part of management, it is often assumed that they will buy into the philosophies adopted by upper management. Yet, according to studies by Klein (1984), even though 72% of supervisors view participation programs as being good for the company and 60% see them as good for employees, less than 31% view them as beneficial to themselves. This perspective is clearly portrayed by Kanter (1983): participation is something the top orders the middle to do for the bottom. Concerns among supervisors relate to job security, job definition, and the additional work created to implement these programs (Klein 1984). A common fear is that employee participation would take supervisors out of the chain of command. Supervisors typically have attained their positions via promotions intended to reward them for outstanding performance as workers. Sharing their supervisory tasks can be seen as a loss of status to less-deserving workers. Support from first-line supervisors is essential for the success of overall participation programs. Some successful experiences in obtaining this support have included the introduction of presentations to upper management, by supervisors, about teamwork activities and the creation of teams for forepersons themselves (Harrison 1992).
6.2. Impact on Employees
It is considered that today's better-trained and better-educated workers have expectations greater than basic pay, benefits, and a safe place to work. According to Lawler (1986), these enlarged expectations include participating in meaningful decisions. On the other hand, potential problems from the employee perspective need to be addressed. The literature on the subject of employee involvement has placed much less emphasis on the problems than on the benefits. Indeed, when these problems are discussed, they are almost always seen from the perspective of the organization and its management. Very little has been written about the problems from the workers' standpoint. Regarding the negative consequences of teamwork experienced by workers, Baloff and Doherty (1988) state that it can be very disruptive, especially during the crucial start-up period of employee involvement. These authors classify the negative consequences into three categories. First, participants may be subjected to peer-group pressure against what is perceived as collaboration with management in ways that endanger employees' interests. Second, the participants' manager may attempt to coerce them during the group activity, or may retaliate against the participants if the results of their involvement displease him or her. Third, participants may have difficulty adapting psychologically at the end of a highly motivating participation effort if they are thrust back into narrow, rigidly defined tasks. Lawler (1986) expresses similar concern about some types of participation that do not match the overall structure of the organization and inevitably will produce frustrated expectations among the workers. On the more negative side of the spectrum of assessments of teams, Parker and Slaughter (1994, 1988) see them as a way of undermining the union and exploiting workers. The team concept is seen as part of management by stress, whereby production is sped up and management actually exerts more control over employees. According to these authors, the work rationalization that used to be done by management is now being done by the employees themselves. The authors point out that the peer pressure related to this kind of involvement is even more restrictive than the hierarchy itself. They state that there are several myths about teamwork: that the team concept involves job security, increased productivity, more control by workers, working smarter not harder, workers with more
Cotton, J. L. (1993), Employee Involvement: Methods for Improving Performance and Work Attitudes, Sage, Newbury Park, CA.
Cronbach, L. J. (1951), "Coefficient Alpha and the Internal Structure of Tests," Psychometrika, Vol. 16, pp. 297-334.
Davis, R. N. (1993), "Cross-functional Clinical Teams: Significant Improvement in Operating Room Quality and Productivity," Journal of the Society for Health Systems, Vol. 4, No. 1, pp. 34-47.
Day, D. (1998), "Participatory Ergonomics: A Practical Guide for the Plant Manager," in Ergonomics in Manufacturing: Raising Productivity Through Workplace Improvement, W. Karwowski and G. Salvendy, Eds., Engineering & Management Press, Norcross, GA.
Dean, J. W., and Bowen, D. E. (1994), "Management Theory and Total Quality: Improving Research and Practice Through Theory Development," Academy of Management Review, Vol. 19, pp. 392-418.
Deming, W. E. (1986), Out of the Crisis, Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge, MA.
Dickinson, T. L., McIntyre, R. M., Ruggeberg, B. J., Yanushefski, A., Hamill, L. S., and Vick, A. L. (1992), "A Conceptual Framework for Developing Team Process Measures of Decision-Making Performances," Final Report, Naval Training Systems Center, Human Factors Division, Orlando, FL.
Doherty, E. M., Nord, W. R., and McAdams, J. L. (1989), "Gainsharing and Organization Development: A Productive Synergy," Journal of Applied Behavioral Science, Vol. 25, pp. 209-229.
Donabedian, A. (1992), "The Role of Outcomes in Quality Assessment and Assurance," Quality Review Bulletin, pp. 356-360.
Duffy, V. G., and Salvendy, G. (1997), "Prediction of Effectiveness of Concurrent Engineering in Electronics Manufacturing in the U.S.," Human Factors and Ergonomics in Manufacturing, Vol. 7, No. 4, pp. 351-373.
Duffy, V. G., and Salvendy, G. (1999), "The Impact of Organizational Ergonomics on Work Effectiveness with Special Reference to Concurrent Engineering in Manufacturing Industries," Ergonomics, Vol. 42, No. 4, pp. 614-637.
Early, J. F., and Godfrey, A. B. (1995), "But It Takes Too Long," Quality Progress, Vol. 28, No. 7, pp. 51-55.
Evanoff, B. A., Bohr, P. C., and Wolf, L. D. (1999), "Effects of a Participatory Ergonomics Team Among Hospital Orderlies," American Journal of Industrial Medicine, Vol. 35, No. 4, pp. 358-365.
Fleishman, E. A., and Zaccaro, S. J. (1992), "Toward a Taxonomy of Team Performance Functions," in Teams: Their Training and Performance, R. W. Swezey and E. Salas, Eds., Ablex, Norwood, NJ, pp. 31-56.
Flowers, P., Dzierba, S., and Baker, O. (1992), "A Continuous Quality Improvement Team Approach to Adverse Drug Reaction Reporting," Topics in Hospital Pharmacy Management, Vol. 12, No. 2, pp. 60-67.
Garg, A., and Moore, J. S. (1997), "Participatory Ergonomics in a Red Meat Packing Plant, Part 1: Evidence of Long Term Effectiveness," American Industrial Hygiene Association Journal, Vol. 58, No. 2, pp. 127-131.
Gersick, C. J. G., and Davis-Sacks, M. L. (1990), "Summary: Task Forces," in Groups That Work (And Those That Don't), J. R. Hackman, Ed., Jossey-Bass, San Francisco, pp. 147-153.
Gladstein, D. L. (1984), "Groups in Context: A Model for Task Group Effectiveness," Administrative Science Quarterly, Vol. 29, pp. 499-517.
Goodman, P. S., Devadas, S., and Hughson, T. L. (1988), "Groups and Productivity: Analyzing the Effectiveness of Self-Managing Teams," in Productivity in Organizations, J. P. Campbell, R. J. Campbell, and Associates, Eds., Jossey-Bass, San Francisco, pp. 295-327.
Griffin, R. W. (1988), "Consequences of Quality Circles in an Industrial Setting: A Longitudinal Assessment," Academy of Management Journal, Vol. 31, pp. 338-358.
Gustafson, D. H., and Hundt, A. S. (1995), "Findings of Innovation Research Applied to Quality Management Principles for Health Care," Health Care Management Review, Vol. 20, No. 2, pp. 16-33.
Gustafson, D. H., Cats-Baril, W. L., and Alemi, F. (1992), Systems to Support Health Policy Analysis: Theory, Models, and Uses, Health Administration Press, Ann Arbor, MI, pp. 339-357.
Guzzo, R. A., and Dickson, M. W. (1996), "Teams in Organizations: Recent Research on Performance and Effectiveness," Annual Review of Psychology, Vol. 47, pp. 307-338.
Katzenbach, J. R., and Smith, D. K. (1993), The Wisdom of Teams: Creating the High-Performance Organization, HarperCollins, New York.
Keyserling, W. M., and Hankins, S. E. (1994), "Effectiveness of Plant-Based Committees in Recognizing and Controlling Ergonomic Risk Factors Associated with Musculoskeletal Problems in the Automotive Industry," in Proceedings of the XII Congress of the International Ergonomics Association, Toronto, Vol. 3, pp. 346-348.
Kinlaw, D. C. (1990), Developing Superior Work Teams, Lexington Books, Lexington, MA, pp. 162-187.
Klein, J. (1984), "Why Supervisors Resist Employee Involvement," Harvard Business Review, Vol. 62, September-October, pp. 87-95.
Kochan, T. A., Katz, H. C., and Mower, N. R. (1984), Worker Participation and American Unions: Threat or Opportunity?, Upjohn Institute, Kalamazoo, MI.
Kossek, E. E. (1989), "The Acceptance of Human Resource Innovation by Multiple Constituencies," Personnel Psychology, Vol. 42, No. 2, pp. 263-281.
Landy, F. J., and Farr, J. L. (1983), The Measurement of Work Performance: Methods, Theory, and Application, Academic Press, New York.
Larson, C. E., and LaFasto, F. M. J. (1989), Teamwork: What Must Go Right, What Can Go Wrong, Sage, Newbury Park, CA.
Lawler, E. E., III (1986), High-Involvement Management, Jossey-Bass, San Francisco.
Lawler, E. E., III, Mohrman, S. A., and Ledford, G. E., Jr. (1992), Employee Participation and Total Quality Management, Jossey-Bass, San Francisco.
Lawler, E. E., and Mohrman, S. A. (1987), "Quality Circles: After the Honeymoon," Organizational Dynamics, Vol. 15, No. 4, pp. 42-54.
Ledford, G. E., Lawler, E. E., and Mohrman, S. A. (1988), "The Quality Circle and Its Variations," in Productivity in Organizations: New Perspectives from Industrial and Organizational Psychology, J. P. Campbell and R. J. Campbell, Eds., Jossey-Bass, San Francisco.
Levine, D. I. (1995), Reinventing the Workplace: How Business and Employees Can Both Win, The Brookings Institution, Washington, DC.
Levin, I. M. (1992), "Facilitating Quality Improvement Team Performance: A Developmental Perspective," Total Quality Management, Vol. 3, No. 3, pp. 307-332.
Liker, J. K., Nagamachi, M., and Lifshitz, Y. R. (1989), "A Comparative Analysis of Participatory Programs in the U.S. and Japan Manufacturing Plants," International Journal of Industrial Ergonomics, Vol. 3, pp. 185-189.
Locke, E. A., Schweiger, D. M., and Latham, G. P. (1986), "Participation in Decision Making: When Should It Be Used?" Organizational Dynamics, Winter, pp. 65-79.
Marks, M. L., Mirvis, P. H., Hackett, E. J., and Grady, J. F. (1986), "Employee Participation in a Quality Circle Program: Impact on Quality of Work Life, Productivity, and Absenteeism," Journal of Applied Psychology, Vol. 71, pp. 61-69.
McGrath, J. E. (1964), Social Psychology: A Brief Introduction, Holt, New York.
McLaney, M. A., and Hurrell, J. J., Jr. (1988), "Control, Stress, and Job Satisfaction in Canadian Nurses," Work and Stress, Vol. 2, No. 3, pp. 217-224.
Medsker, G. J., and Campion, M. A. (2000), "Job and Team Design," in Handbook of Industrial Engineering, 3rd Ed., G. Salvendy, Ed., John Wiley & Sons, New York.
Melum, M. M., and Sinioris, M. E. (1993), "Total Quality Management in Health Care: Taking Stock," Quality Management in Health Care, Vol. 1, No. 4, pp. 59-63.
Misterek, S. D. A. (1995), "The Performance of Cross-Functional Quality Improvement Project Teams," Ph.D. dissertation, University of Minnesota.
Mohrman, S. A., and Novelli, L. (1985), "Beyond Testimonials: Learning from a Quality Circles Programme," Journal of Occupational Behavior, Vol. 6, No. 2, pp. 93-110.
Moreland, R. L., and Levine, J. M. (1992), "Problem Identification by Groups," in Group Process and Productivity, S. Worchel, W. Wood, and J. A. Simpson, Eds., Sage, Newbury Park, CA, pp. 17-47.
Morgan, B. B., Jr., and Lassiter, D. L. (1992), "Team Composition and Staffing," in Teams: Their Training and Performance, R. W. Swezey and E. Salas, Eds., Ablex, Norwood, NJ, pp. 75-100.
Mosel, D., and Shamp, M. J. (1993), "Enhancing Quality Improvement Team Effectiveness," Quality Management in Health Care, Vol. 1, No. 2, pp. 47-57.
Nemeth, C. J. (1992), "Minority Dissent as a Stimulant to Group Performance," in Group Process and Productivity, S. Worchel, W. Wood, and J. A. Simpson, Eds., Sage, Newbury Park, CA, pp. 95-111.
Tang, T. L., Tollison, P. S., and Whiteside, H. D. (1991), "Managers' Attendance and the Effectiveness of Small Work Groups: The Case of Quality Circles," Journal of Social Psychology, Vol. 131, pp. 335-344.
Taveira, A. D. (1996), "A Successful Quality Improvement Team Project in the Public Sector: A Retrospective Investigation," Ph.D. dissertation, University of Wisconsin-Madison.
Trist, E. (1981), The Evolution of Socio-technical Systems, Ontario Ministry of Labour, Quality of Working Life Centre, Toronto.
Van de Ven, A. H. (1980), "Problem Solving, Planning, and Innovation. Part I: Test of the Program Planning Model," Human Relations, Vol. 33, No. 10, pp. 711-740.
Varney, G. H. (1989), Building Productive Teams: An Action Guide and Resource Book, Jossey-Bass, San Francisco.
Vink, P., Peeters, M., Grundeman, R. W., Smulders, P. G., Kompier, M. A., and Dul, J. (1995), "A Participatory Ergonomics Approach to Reduce Mental and Physical Workload," International Journal of Industrial Ergonomics, Vol. 15, pp. 389-396.
Vink, P., Lorussen, E., Wortel, E., and Dul, J. (1992), "Experiences in Participatory Ergonomics: Results of a Roundtable Session During the 11th IEA Congress, Paris, July 1991," Ergonomics, Vol. 35, No. 2, pp. 123-127.
Waguespack, B. G. (1994), "Development of a Team Culture Indicator," Ph.D. dissertation, Louisiana State University.
Wayne, S. J., Griffin, R. W., and Bateman, T. S. (1986), "Improving the Effectiveness of Quality Circles," Personnel Administrator, Vol. 31, No. 3, pp. 79-88.
Wechsler, D. (1971), "Concept of Collective Intelligence," American Psychologist, Vol. 26, No. 10, pp. 904-907.
Weick, K. E. (1984), "Small Wins: Redefining the Scale of Social Problems," American Psychologist, Vol. 39, pp. 40-49.
Wilson, J. R. (1995), "Ergonomics and Participation," in Evaluation of Human Work: A Practical Ergonomics Methodology, J. R. Wilson and E. N. Corlett, Eds., Taylor & Francis, London, pp. 1071-1096.
Wilson, J. R., and Haines, H. M. (1997), "Participatory Ergonomics," in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York.
Wood, R., Hull, F., and Azumi, K. (1983), "Evaluating Quality Circles: The American Application," California Management Review, Vol. 26, pp. 37-49.
Wycoff, M. A., and Skogan, W. K. (1993), Community Policing in Madison: Quality from the Inside Out, National Institute of Justice Research Report.
Ziegenfuss, J. T. (1994), "Toward a General Procedure for Quality Improvement: The Double Track Process," American Journal of Medical Quality, Vol. 9, No. 2, pp. 90-97.
1. INTRODUCTION
This chapter is about performance management. Performance relates to the measurable outcomes or results achieved by an organization. Management relates to the actions or activities an organization deploys to improve its desired outcomes. This chapter focuses on three major themes:

1. Key ideas organizations have found helpful in dealing with powerful and fundamental forces of change at work in the world

2. How goal setting and metrics can improve performance at the individual, team, working-group, and organizational-unit levels

3. How the formal and informal aspects of an organization each contribute to managing performance in a world of change

Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.

We have grounded this chapter in our own extensive experience as well as the writings of leading commentators. Our objective is to provide readers with a blend of best practices. In particular, we seek to avoid, and advise readers to avoid, selecting any single approach to performance management as the best and only one. Rather, we promote a both/and view of the world, in which readers carefully craft the blend of approaches that best fits the needs and challenges ahead of them. This contrasts with the far too dominant either/or mindset that maintains, incorrectly, that performance is best managed by selecting a single comprehensive approach through some debate grounded in the proposition that either approach A or approach B is best. In our experience, it usually turns out that both approach A and approach B are relevant to the challenge of managing performance in a changing world. This chapter concentrates on providing guidance that can be applied, practiced, and perfected by any individuals and teams in any organization. The concepts and techniques we put forward do not depend on senior-management support. We do not attempt to put forward a comprehensive performance-management system model. Our focus is on performance in an environment of change and on how individuals, teams, and groups can significantly improve their mindsets and approaches to improving performance. You and your colleagues can begin to make a performance difference tomorrow for yourself and your organizations. We will provide some ideas and perspectives on formal approaches to integrating performance management and on formal top-management-driven approaches to change. But by and large, our ideas are for you, and you can use them wherever you sit in an organization and for whatever performance challenge arises.
2. THE CHANGE IMPERATIVE
After summarizing the fundamental forces of change that so often determine the nature of today's performance challenges, this section reviews a series of key concepts and ideas useful in managing performance itself. These include:

The balanced scorecard: a change in what performance means and how it is measured
New mental assumptions for managing performance and change
Disciplines of learning and performing organizations
Work as process and competition as time based
Characteristics of high-performance organizations
The trend toward impermanent organizations and alliances
2.1. Forces of Change
If you are young enough, the world of change is all you have known. For others, managing performance is very different today than in years past. Regardless of your age and experience, you should have a picture of the fundamental forces at work that shape and determine the challenges ahead of you and your organizations. One of our favorite frameworks comes from John Kotter, a professor at Harvard Business School. Figure 1 shows Professor Kotter's summary of the economic and social forces driving the need for major change. These forces are fundamental because they will not be going away any time soon, and many market responses to them are irreversible. Globalization and Internet technologies are but two examples of changes with permanent and lasting impact. New market and competitive pressures
PERFORMANCE MANAGEMENT
Figure 1 Economic and Social Forces Driving the Need for Major Change in Organizations. (Adapted from Kotter 1996.) The figure summarizes four driving forces: technological change (faster and better communication and transportation, more information networks); international economic integration (fewer tariffs, linked currencies, more global capital flows); maturation of markets in developed countries (slower domestic growth, more aggressive exporters, more deregulation); and the fall of communist and socialist regimes (more countries linked to the capital system, more privatization). Together these forces drive the globalization of markets and competition, which brings both more hazards (more competition, increased speed) and more opportunities (bigger markets, fewer barriers). To avoid the hazards and/or capitalize on the opportunities, firms must become stronger competitors through large-scale organizational change; typical transformation methods include reengineering, mergers and acquisitions, restructuring, strategy change, culture change, and quality programs.
2.2. A Changing View of Performance Itself: The Balanced Scorecard
The forces of change have altered what performance itself means. Gone are the days when financial results were all that mattered. In today's world, organizations must deliver a combination of financial and nonfinancial performance outcomes. Readers who do not understand the new scorecard of performance will fall into the three main traps of a financial-only approach to performance management:

1. Unsustainability: Achieving sustained organizational performance demands outcomes and results that benefit all of the constituencies that matter. Shareholders provide opportunities and rewards to the people of the enterprise, who deliver value to customers, who generate returns to shareholders, who in turn provide opportunities to the people of the organization, and so on. If you substitute citizens or beneficiaries for customers of the enterprise, you can apply this concept to both profit and not-for-profit organizations. Focusing solely on financial measures will create shortfalls in customer service, employee satisfaction, or product/process quality. The wise executive looks at financial measures as lagging measures and seeks other, more direct measures of contributing performance to serve as leading measures.

2. Demotivation: Executives at the top are motivated by financial performance measures because they also receive big paydays from achieving them. But today's competitive environment requires tremendous energy and contribution from people throughout the organization. For most people in the organization, financial measures are too distant an indicator of the success or failure to which many have contributed. Concentrating solely on the financial dimension of measurement is not motivating. It can go further toward being demotivating if people suspect that leaders are concentrating on financial measures out of self-interest.

3. Confusion: People need to see how and why their contributions make a difference. Financial goals alone will not reach or connect with very many individuals or groups. These people can become confused and resort to activity-based goals to fill the void.

This new scorecard was first popularized by Kaplan and Norton (1995). All organizations have multiple constituencies, such as shareholders, customers, employees, and strategic partners. Each of these constituencies has performance needs and concerns that must be met if the organization hopes to survive and thrive. Kaplan and Norton's scorecard emphasizes the need to convert an organization's
998
PERFORMANCE IMPROVEMENT MANAGEMENT
strategy into a series of linked performance metrics and outcomes across a 4D lo
gic that suggests that a rms nancial results directly arise from results that matte
r to customers, which in turn arise from results of internal processes, which in
turn arise from results that matter to the people of the organization in terms
of learning and growth. Smith (1999) fundamentally improved on Kaplan and Nortons
thinking by suggesting that the balanced scorecard can be both more balanced an
d more integrated by replacing the linear logic of people-to-process-to-customer
-to-shareholder with a reinforcing, integrated logic wherein results for each co
nstituency both leads and lags results for others. Accordingly, managing perform
ance in a sustainable way looks more like Figure 2, which depicts a philosophy f
or never-ending success. When viewed in this way, nancial and non nancial goals all
reinforce and link to one another. Moreover, the goals support a narrative or s
tory of success that will not fall victim to unsustainability, demotivation, and
confusion.
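The linked, four-perspective logic described above can be sketched as a small data structure, shown here in Python. This is an illustrative sketch only: the perspective names follow the Kaplan-Norton scorecard described in the text, while the example metrics are hypothetical and not taken from the handbook.

```python
# Each perspective carries example metrics; the "drives" link expresses the
# chain learning-and-growth -> internal process -> customer -> financial.
scorecard = {
    "learning and growth": {"metrics": ["employee skill coverage"], "drives": "internal process"},
    "internal process":    {"metrics": ["order-fulfillment cycle time"], "drives": "customer"},
    "customer":            {"metrics": ["on-time delivery rate"], "drives": "financial"},
    "financial":           {"metrics": ["revenue growth"], "drives": None},
}

def causal_chain(start: str) -> list[str]:
    """Follow the 'drives' links from a perspective down to financial results."""
    chain, node = [], start
    while node is not None:
        chain.append(node)
        node = scorecard[node]["drives"]
    return chain

print(causal_chain("learning and growth"))
# ['learning and growth', 'internal process', 'customer', 'financial']
```

Smith's improvement would close this chain into a cycle, so that each constituency's results both lead and lag the others.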
2.3.
New Mental Assumptions for Mastering Performance and Change
Managing both financial and nonfinancial performance in a world of change demands that readers know how to manage change. Having said that, the first and foremost principle of managing change (see Smith 1996) is to keep performance, not change, the primary objective of managing change. Far too many people and organizations do the opposite. If readers are to avoid this trap, they must work hard to connect real and sustainable performance achievements to the changes underway in their organizations. With clear and compelling performance objectives in mind, readers must avoid a variety of worldviews that do not respond to the challenges at hand in today's fast-moving world. Kanter (1983) foresaw many of the new and different ways for people and organizations to picture and respond to change. She called for a necessary shift from "segmentalist" to "integrative" assumptions. Today, we might paraphrase her thoughts as shifting from "stovepipe" to "horizontal" views of work and organization (see Smith 1996). Here are what Kanter described as old vs. new assumptions:
Old assumption #1: Organizations and their subunits can operate as closed systems, controlling whatever is needed for their operation. They can be understood on their own terms, according to their internal dynamics, without much reference to their environment, location in a larger social structure, or links to other organizations or individuals.

Old assumption #2: Social entities, whether collective or individual, have relatively free choice, limited only by their own abilities. But since there is also consensus about the means as well as the ends of these entities, there is clarity and singularity of purpose. Thus, organizations can have a clear goal; for the corporation, this is profit maximization.
Figure 2 The Performance Cycle. (Adapted from Smith 1999) People of the Enterprise deliver value to Customers, who generate returns to Shareholders, who provide opportunities and rewards to the People of the Enterprise.
Old assumption #3: The individual, taken alone, is the critical unit as well as the ultimate actor. Problems in social life therefore stem from three individual characteristics: failure of will (or inadequate motivation), incompetence (or differences in talent), and greed (or the single-minded pursuit of self-interest). There is therefore little need to look beyond these individual characteristics, abilities, or motives to understand why the coordinated social activities we call institutional patterns do not always produce the desired social goods.

Old assumption #4: Differentiation of organizations and their units is not only possible but necessary. Specialization is desirable for both individuals and organizations; neither should be asked to go beyond their primary purposes. The ideal organization is divided into functional specialties clearly bounded from one another, and managers develop by moving up within a functional area.

Here are the new assumptions that Kanter puts forward as alternatives that are more responsive to the external pressures in our changing world:
New assumption #1: Organizations and their parts are in fact open systems, necessarily depending on others to supply much of what is needed for their operations. Their behavior can best be understood in terms of their relationships to their context, their connections (or nonconnections) with other organizations or other units.

New assumption #2: The choices of social entities, whether collective or individual, are constrained by the decisions of others. Consensus about both means and ends is unlikely; there will be multiple views reflecting the many others trying to shape organizational purposes. Thus, singular and clear goals are impossible; goals are themselves the result of bargaining processes.

New assumption #3: The individual may still be the ultimate (or really the only) actor, but the actions often stem from the context in which the individual operates. Leadership therefore consists increasingly of the design of settings that provide tools for and stimulate constructive, productive individual actions.

New assumption #4: Differentiation of activities and their assignment to specialists is important, but coordination is perhaps even more critical a problem, and thus it is important to avoid overspecialization and to find ways to connect specialists and help them to communicate.

The contrast between old and new is sharp. In the old-assumption world, the manager was in control of both the external and the internal. In the new-assumption world, uncertainty dominates, and the need to be fluid and lead by influence rather than control has become the norm. An organization cannot become a strong 21st-century performer if it remains dominated by the old assumptions. It will not be able to change to adapt to new market needs, new technologies, or new employee mindsets. The old model is too slow and costly because it produces unnecessary hierarchy and unneeded specialization. It takes a very different mindset and perspective to thrive in a world dominated by the new assumptions. One cannot be successful at both in the same ways. Success in each requires a different mental model.
2.4.
Disciplines of the Performing and Learning Organization
A variety of new mental models and disciplines have arisen in response to the shifting assumptions so well described by Kanter. Peter Senge is perhaps best known for triggering the search for new disciplines. He suggests five disciplines that distinguish the learning organization from old-world organizations that do not learn (or perform) (Senge et al. 1992):

Personal mastery: learning to expand personal capacity to create the results we most desire
Mental models: reflecting on and improving our internal pictures of the world and how they shape our actions
Shared vision: building group commitment to a common picture of the future
Team learning: developing collective thinking skills so groups can learn together
Systems thinking: learning to see the interdependencies and patterns of change that shape the behavior of whole systems
Each of these disciplines requires a commitment to practice to improve our views and skills in each area. Critically, readers who seek to master these disciplines must do so with an eye on performance itself. Readers and their organizations gain nothing from efforts to become learning organizations in the absence of a strong link to performance. Organizations must be both learning and performing organizations.
2.5.
Work as Process and Competition as Time Based
Figure 3 Contrasting Traditional and High-Performance Organizations, including the capacity to reconfigure. (Source: Nadler et al. 1992)
Figure 4 High-Performing vs. Low-Performing Team Characteristics. (Source: Synectics, Inc.)

High-performing organizations: informal; experimental; action-orientation; low defensiveness; high levels of trust; little second-guessing; few trappings of power; high respect for learning; few rules and high flexibility; low levels of anxiety and fear; empowered team members; failures seen as problems to solve; decisions made at the action point; people easily cross organizational lines; much informal problem solving; willingness to take risks.

Low-performing organizations: little risk taking; formal relationships; privileges and perks; many status symbols; rules rigidly enforced; slow action/great care; much protective paperwork; decision making at the top; high levels of fear and anxiety; "your problem is yours, not ours"; well-defined chain of command; learning limited to formal training; many information-giving meetings; trouble puts people on defensive; little problem solving below the top; crossing organizational lines forbidden.
Synectics, an innovation firm, has captured the above list in a slightly different way. Figure 4 contrasts the spirit of innovation in high-performing vs. low-performing organizations.
2.7.
The Impermanent Organization
Finally, we wish to comment on the trend toward impermanent organizations and alliances. More and more often, organizations respond to performance and change challenges by setting up temporary alliances and networks, both within and beyond the boundaries of the formal organization. It could turn out that this model of the temporary organization and alliance formed to bring a new innovation to market will in fact become the norm. Some have suggested this as one very viable scenario, which Malone and Laubacher (1998) dub the "e-lance economy." Malone and Laubacher put forward the idea of many small temporary organizations forming, reforming, and recombining as a way of delivering on customer needs in the future. While they concede that this may be an extreme scenario, it is not an impossible one. Their research is part of an ongoing and significant series of projects at MIT around the 21st-century organization. Another research theme is the continued importance of process management in the future, putting processes alongside products in terms of performance-management importance. One view is certain: 20 years from now, business models very different from the ones we know today will have become the norm.
3.
PERFORMANCE SUCCESS: GOAL SETTING AND METRICS
Let's first look at a few fundamental flaws in how many organizations approach performance. As stressed by Smith (1999), "Performance begins with focusing on outcomes instead of activities." Yet most people in most organizations do the reverse. With the exception of financial results, most goals are activity based instead of outcome based. Such goals read like "develop plans to reduce errors" or "research what customers want." These are activities, not outcomes. They do not let the people involved know when they have succeeded, or even how their efforts matter to their own success and that of their organizations.
3.1.
Performance Fundamentals and Obstacles
A variety of obstacles and bad habits explain this misplaced emphasis on activities instead of outcomes. At their root lie the old assumptions, the financial-only focus, the internal orientation, and the silo organization models we reviewed above. These obstacles and bad habits include:
3.1.1.
Natural Human Anxieties
Most people get nervous about the specificity with which their personal success or failure will be measured. We like some flexibility to say we did the right things and that any lack of desired outcome is due to extenuating circumstances. A common tactic is to declare any outcome outside our complete control as unachievable. The problem with this is that for most people in an organization, this leaves only a narrow set of activities. The further you are from the front line to the customer, the more tempting and common this tactic becomes.
3.1.2.
Difficulty Expressing Nonfinancial Outcomes
It is not easy to state nonfinancial goals in an outcome-based fashion. Yet so many performance challenges are first and best measured in nonfinancial ways. It is hard work and personally risky to move beyond the goal of completing the activities and expose your performance to a measure of how effective that activity is where it counts: in the eyes of customers, employees, and strategic partners. The basic anxiety about and aversion to setting real outcomes as goals will always be around, particularly when new and different challenges confront us. A key to success is to control the anxiety rather than letting it control you.
3.1.3.
Flawed Assumptions
In many instances, people falsely assume performance outcome-based goals exist when they don't. People in organizations, especially the ones who have achieved a degree of success, often claim they already know what the critical outcomes are and how to articulate them, when in reality they don't. Or people will evade the responsibility to state outcomes by claiming the outcomes themselves are implied in the activities or plans afoot. Or they will defer to the boss, expecting he or she has it all under control. All of these excuses are mere ruses to avoid the responsibility to specifically and expressly articulate the outcomes by which any effort can be monitored for success.
3.1.4.
The Legacy of Financial Management
The financial scorecard has dominated business performance measurement in the modern corporation. As reviewed in Section 1, the financial-only approach to performance management fails to account for performance outcomes that matter to customers, employees, and strategic partners. It produces organizational cultures that are short-term focused and have difficulty breaking out of the silo approach to work. Why? Because functional organizations are uniquely suited to cost accounting. With the introduction of activity-based accounting by Cooper and Kaplan, organizations were given the chance to move toward a process view of work and still keep their numbers straight. That can help, but it is not enough. Until organizations seriously set and achieve outcome-based goals that are both financial and nonfinancial and link to one another, those organizations will continue to manage performance suboptimally.
3.1.5.
The Intrusive Complexity of the Megaproject or Megaprogram
Also standing in the way of outcome-based performance management is the grand illusion of a complete solution to a firm's information systems. The promise of information technology systems that provide organizations with an integrated approach to transaction management and performance reporting has been a major preoccupation of management teams ever since computers, and personal computers in particular, became both accessible and affordable (most recently in the form of enterprise resource planning [ERP] systems). SAP, Oracle, and PeopleSoft are a few of the more popular ERP software providers who experienced phenomenal success in the 1990s. While the drive to implement new systems was accelerated by the now-infamous Y2K problem, the promise of integrated and flexible information flow throughout an organization had great appeal. These systems were also very much a part of the broader "transformation" programs that many organizations were pursuing at the same time. Many comprehensive transformation frameworks and methodologies that have emerged over the past decade were
built on the success that business process reengineering had in the early 1990s. These programs redesigned the people, processes, and technology of an organization to bring about the performance promise of transformation. Reengineering programs require at least two years to complete and are delivered through massive teams following very detailed methodology scripts. Completing the activities alone is often exhausting and risky. But their promised paybacks are huge, ranging from industry leadership to a chance to survive (and hopefully thrive once again). Because of the long timeframes associated with the effort and with being able to see the reward, the value received from the effort is difficult to measure. There is a time lag between the team's implementation activities and the outcome. This is true of many things strategic. Consulting teams (and executive sponsors) are often onto their next assignments long before outcomes can be realized as they were defined in the business case.

A focus on performance outcomes for strategic initiatives most often gets lost or mired in the operational systems that are used in most companies. These systems are designed to support the tactics of an organization, which are very often bounded by the time cycles inherent in the formal budgeting and planning systems. All of these realities overwhelm the manager trying to create performance change. The bigger and more complex the organization, the more complicated the improvement of formal performance-management systems. Many of the large consulting firms (certainly the ones showing annual growth rates in the 30-35% range during the past decade) play to the formal side of organization performance, bringing frameworks and methodologies that require large consulting teams that provide comprehensive solutions to performance management. At the same time, many corporate executives and managers are in need of having it all "integrated" for the promise of accelerated decision making and improved information flow. Each of these goals has merit, and the results can provide large payback. The problem is that in far too many situations, the payoff does not come because of the sheer complexity of the solutions. Much of the implementation cost and business case payback for these endeavors deals with taking activities out of the process. With the advent of the Internet, completely new business models are being pursued for connecting products or services with customers. The sheer size and cost of these approaches require a focus on the formal systems. So to the list of obstacles to making performance measurable we add this significant pressure to focus on large, complex projects. As consulting firms and their clients have gained experience with "transformation" over the past decade, they have added more emphasis on the informal systems and the people aspect of change. However, their business models still require large teams that will continue to have a bias toward changing the formal systems rather than working at the informal.
3.2.
Overcoming the Obstacles: Making Performance Measurable
So what can you do to overcome this formidable list of obstacles? Getting focused on performance outcomes rather than activities is the place to begin. But it is not enough on its own. You will need more to sustain your focus. There are three additional aspects to performance management:

1. Picking relevant and specific metrics
2. Using the four yardsticks
3. Articulating SMART goals

Let's take a brief look at the most important aspects behind each of these attributes.
3.2.1.
Picking Relevant and Specific Metrics
Sometimes metrics are obvious; other times, the best measures seem elusive. Revenues, profits, and market share are universally recognized as effective metrics of
learn from results, and motivate improvement for both parties in the relationship. Using these types of subjective measures is acceptable and may in fact provide superior results. But you must understand the difficulties associated with such use and overcome them. Here are several additional pieces of guidance about selecting measures that, while obvious, are often sources of frustration.
Many metrics require extra work and effort. This will be true for almost all measures that do not already exist in an organization. So if the challenge is new, the best measures will most likely also be new. If the measure is new, you will not have a baseline. Organizations and managers must be willing to use their gut feeling as to their baseline performance level. Researching ranges of normal or best-in-class measures can give clues. It is not important to be exact, only to have a sufficient measure to allow the group to move.

Some measurement criteria will demand contributions from people or groups who are not under your control or authority. In fact, most serious challenges in an organization will require all or many departments to contribute. It will take extra work to get all groups or departments aligned with both the goals and the measures.

Some metrics are leading indicators of success, while others are lagging indicators. Revenues, profits, and market share are common examples of lagging indicators and therefore are routinely overused. Leading indicators must also be developed to get at the drivers behind the financial success.

The key is to work hard enough at it to make good measurement choices and then stick with them long enough to learn from them. Organizations and managers must overcome the anxieties and frustrations that come with outcome-based performance measures and learn how to select and use the best measures. The following section on the four yardsticks can help you become increasingly comfortable in choosing and sticking with the best measures of progress.
3.2.2.
Using the Four Yardsticks
All performance challenges are measurable by some combination of the following:

Speed/time
Cost
On-spec/expected quality
Positive yields
The first two are quantitative and objective; the second two are a blend of objective/subjective and quantitative/qualitative. Becoming adept at the use of these yardsticks will take you a long way toward overcoming the anxieties and obstacles inherent in performance outcome-based goals.

3.2.2.1. Speed/Time Process management is the most common application of this metric. We use it anytime we need to measure how long it takes to complete some activity or process. It is one of the measures that usually requires extra work. Most processes in an organization cross multiple department boundaries, but not neatly. The extra work comes in the need to be specific about beginning and ending points. The scope you place on the process end points will depend on your goal and the level of ambition behind the process. For example, an order-generation and fulfillment process that is designed to be both the fastest and totally customer-driven will need to go beyond receipt of delivery and include process steps that measure your customers' use and satisfaction levels. If the process you want to measure is complex, you also must define the specific steps of the process so that you will understand where to concentrate efforts and where your efforts are paying off. There are six choices you need to make when applying a speed/time metric:

1. What is the process or series of work steps you wish to measure?
2. What step starts the clock?
3. What step stops the clock?
4. What unit of time makes the most sense?
5. What number and frequency of items going through the process must meet your speed requirements?
6. What adjustments to roles and resources (e.g., systems) are needed to do the work of measurement and achieve the goals?
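The six choices above can be sketched as a simple cycle-time measurement, shown here in Python. This is an illustrative sketch only; the step names, dates, and the 90%-within-3-days goal are hypothetical, not from the handbook.

```python
from datetime import datetime, timedelta

# Choices 1-4: the process to measure, the step that starts the clock,
# the step that stops it, and the time unit (here, days via timedelta).
START_STEP = "order_received"
STOP_STEP = "order_delivered"

def cycle_time(events: dict) -> timedelta:
    """Elapsed time between the chosen start and stop steps for one item."""
    return events[STOP_STEP] - events[START_STEP]

def meets_speed_goal(items: list, limit: timedelta, required_fraction: float) -> bool:
    """Choice 5: the fraction of items that must meet the speed requirement."""
    on_time = sum(1 for ev in items if cycle_time(ev) <= limit)
    return on_time / len(items) >= required_fraction

orders = [
    {"order_received": datetime(2000, 1, 3, 9), "order_delivered": datetime(2000, 1, 5, 9)},
    {"order_received": datetime(2000, 1, 4, 9), "order_delivered": datetime(2000, 1, 10, 9)},
]
# Hypothetical goal: 90% of orders delivered within 3 days.
print(meets_speed_goal(orders, timedelta(days=3), 0.9))  # False: only 1 of 2 on time
```

Choice 6, adjusting roles and systems, is the organizational work of capturing the start and stop timestamps reliably in the first place.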
Some of the more fundamental processes in organizations, in addition to order fulfillment, include new product/service development and introduction, customer service, integrated supply chain, and the hiring/development/retention of people.

3.2.2.2. Cost Cost is clearly the most familiar of the four yardsticks. But here too we can point to nuances. Historically, organizations have focused mostly on the unit costs of materials or activities. These units paralleled organization silo structures. This approach to costing still makes sense for certain functionally sensitive performance challenges. On the other hand, many of today's process-based performance challenges demand the use of activity-based costing instead of unit costing.

3.2.2.3. On-Spec/Expected Quality Product and service specifications generally derive from production, operational, and service-level standards, legal and regulatory requirements, and customer and competitive demands. Another way of viewing specifications is through a company's value proposition, which includes the functions, features, and attributes invested in a product or service in order to win customer loyalty. Some dimensions of quality are highly engineered and easily defined and measured. Others can become abstract and very hard to measure unless the specifications are stated and well defined. This is key when using this family of metrics. When customer expectations are unknown or poorly defined, they cannot be intentionally achieved. You cannot set or achieve goals related to aspects of performance that you cannot even define. The approach to quality and continuous improvement reviewed above therefore emphasizes the need to let the customer define quality, to consider any deviation from that a defect, and to set specific goals about reducing defects on a continual basis.

3.2.2.4. Positive Yields This final yardstick category is designed to deal with more abstract or unknown dimensions of customer expectations. It is also a catch-all category whose measures reflect positive and constructive output, or yield, of organizational effort. Yields are often prone to subjective or qualitative measures, and their purpose is to get at the measurement of newer performance challenges such as alliances or strategic partnering, "delighting" customers, or core competencies. While it is hard to reduce these aspirations to specific or quantifiable measurement, the subjective or qualitative measures in this area can be very effective as long as they can be assessed and tracked with effective candor and honesty. (Note how this brings us back to Kanter's new assumptions and Synectics' high-performance organization attributes.)

Good performance goals nearly always reflect a combination of two or more of the four yardsticks. Moreover, the first two yardsticks (speed/time and cost) measure the effort or investment put into organizational action, while the second two (on-spec/expected quality and positive yields) measure the benefits you get out of that effort or investment. The best goals typically have at least one performance outcome related to effort put in and at least one outcome related to the benefits produced by that effort.
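The closing rule above, that a good goal pairs at least one effort-in yardstick with at least one benefit-out yardstick, can be expressed as a minimal check. This Python sketch uses shorthand category names of our own, not labels from the handbook.

```python
# Effort put in (speed/time, cost) vs. benefit taken out
# (on-spec/expected quality, positive yields).
EFFORT = {"speed/time", "cost"}
BENEFIT = {"on-spec quality", "positive yield"}

def is_balanced(yardsticks: set[str]) -> bool:
    """True when a goal uses at least one effort and one benefit yardstick."""
    return bool(yardsticks & EFFORT) and bool(yardsticks & BENEFIT)

print(is_balanced({"speed/time", "on-spec quality"}))  # True
print(is_balanced({"cost"}))                           # False: no benefit measure
```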
3.2.3.
Articulating SMART Performance Goals
People setting outcome-based goals can benefit from using the SMART acronym as a checklist of items that characterize goals that are specific, measurable, aggressive yet achievable, relevant to the challenge at hand, and time bound. Thus, goals are SMART when they are:

Specific: Answers questions such as "at what?" "for whom?" and "by how much?"

Measurable: Success requires feedback, which requires measurement. Metrics might be objective or subjective, as long as they are assessable.

Aggressive (yet Achievable): Each A is significant. Aggressiveness suggests stretch, which provides inspiration. Achievability allows for a more sustained pursuit because most people will not stay the course for long if goals are not credible. Setting goals that are both aggressive and achievable allows people and organizations to gain all the advantages of stretch goals without creating illusions about what is possible.

Relevant: Goals must relate to the performance challenge at hand. This includes a focus on leading indicators, which are harder to define and riskier to achieve than the more commonly relied-on lagging indicators around financial performance (i.e., revenue, profits).

Time bound: The final specific measure relates to time and answering the question "by when?" You cannot define success without knowing when time is up.
4. COMBINING THE FORMAL AND INFORMAL ORGANIZATIONS: THE CHALLENGE OF ALIGNMENT I
N A FAST-CHANGING WORLD
Organizations can hardly be called organized if people throughout them are pursuing goals that are random, unaligned, and conflicting. Managing performance, then, must include an effort to align
goals, both against performance challenges and across the various parts of the organization (and, increasingly, effort beyond the organization itself, e.g., by strategic partners). Until a decade ago, alignment was considered a simple task, one conducted mostly at budgeting and planning time to be certain all the numbers added up. Today, alignment is far more subtle and challenging. To meet that challenge, readers must be certain they are paying attention to the relevant challenges, the relevant metrics, and the relevant parts of both the formal and the informal organization. The formal organization equates to the formal hierarchy. It reflects the official directions, activities, and behavior that leaders want to see. The informal organization relates to actual organizational behavior. It reflects the behaviors that individuals and teams exhibit regardless of official leadership. Both are necessary for truly successful performance management, and both are real in contributing to outcomes. Readers will fall into a trap if they worry only about alignment within the official, formal organization.
4.1.
Identifying the Working Arenas That Matter to the Challenge at Hand
To avoid that trap, readers need to learn about "working arenas," which consist of any part of an organization, whether formal or informal, and whether inside the organization or beyond it (e.g., a strategic partner), where work happens that matters to performance. Working arenas are where people make performance happen. They include and go well beyond any single individual's job. Today's organizations exhibit much more variety in how work gets structured, as Figure 5 shows. A real shift has occurred from hierarchy-based work structures to more horizontal and open work structures. This reinforces our earlier message about the pace of change and how it forces us to consider these newer forms of structuring work to create connection and speed. That is exactly what you and your colleagues need to do with your performance goals: create connections so that you can increase your speed during implementation. The absolute is that you must set outcome-based goals that fit all the relevant working arenas, not just the jobs, of the people who must achieve those goals.

Fit is not a new concept. It has long been attached to the time-honored managerial maxim that formal accountability matches formal responsibility. However, today this maxim is impossible to apply. Performance challenges have too much overlap across formal structures. And they change far too often to live within any set of formal control processes around all we do. Instead, you should apply a new version of fit, where accountability for outcome-based goals must fit the working arenas of those involved in achieving the goals. The right column in Figure 5 implies both formal (departments, businesses, jobs) and informal (teams, initiatives, processes) working arenas. Fit still makes sense; it just needs to cover a broader spectrum of how work actually gets done. Our argument is that more and more high-performing organizations are learning to apply fit to a broader array of
Figure 5 How Work Gets Structured. (Left column, 1950s to 1980s: jobs, departments, functions, projects, businesses, corporate headquarters. Right column: today's broader array of formal and informal working arenas.)
PERFORMANCE MANAGEMENT
1007
informal approaches. It's still all about being effective and efficient, but with greater speed and nimbleness. The concept of working arenas can help you divide up work in ways that include but go beyond the formal job/department/function/business model. It frees you to ask a critical series of six questions that make it easier to apply performance-based outcome goals:
1. What is the performance challenge at hand?
2. What outcomes would indicate success at this challenge?
3. What are the working arenas relevant to this challenge?
4. To which of those working arenas do I (or we) contribute?
5. What metrics make the most sense for these working arenas?
6. What SMART outcome-based goals should we set and pursue for each of these working arenas?
If you were to think through any specific performance challenge in your organization, several patterns would emerge. First, most people contribute to only two or three working arenas at any one time. Second, most contributions come first and foremost in the context of the individual or the team. Third, without identifying the working arena that makes the most sense for any given performance challenge, it is hard for people to confidently believe they are organized for success.
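The six questions lend themselves to a simple checklist structure. Below is a minimal sketch in Python; the challenge, arenas, metrics, and goal are invented purely for illustration:

```python
# Hypothetical walk-through of the six questions for an invented challenge.
challenge = {
    "performance_challenge": "Cut new-product development cycle time",
    "success_outcomes": ["Concept-to-launch time halved within 18 months"],
    "working_arenas": ["NPD core team", "design department", "supplier partnership"],
    "our_arenas": ["NPD core team"],
    "metrics": {"NPD core team": ["cycle time (weeks)", "first-pass quality (%)"]},
    "smart_goals": {
        "NPD core team": "Reduce concept-to-prototype time from 40 to 20 weeks by Q4"
    },
}

# Every arena we contribute to should carry at least one metric and one goal:
for arena in challenge["our_arenas"]:
    assert arena in challenge["metrics"] and arena in challenge["smart_goals"]
```

Walking a real challenge through such a checklist makes gaps visible: an arena without a metric, or a metric without a SMART goal, signals that the fit between goals and working arenas is incomplete.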
4.2. Using Logic to Achieve Alignment across the Relevant Working Arenas
In the silo or pyramid model of organization that dominated for most of the 20th century, alignment was a matter of adding up costs and revenues to be certain that budgets and plans made sense. Given the entirely formal character of organizations and the relatively small number of working arenas (see Figure 5), this made sense. However, in the fast-moving, flexible, real world of today, there are far too many different kinds of working arenas, far too many different kinds of performance challenges, and (as we discussed in Section 3.2) far more kinds of metrics for success. Budgeting- and planning-driven approaches to alignment that focus solely on being sure the numbers add up do not work in this world. Instead, readers must get comfortable with two approaches to alignment, quantitative and qualitative. Both of these are logical. If a team is working to increase the speed of a critical process such as new product development, they might measure success by speed and quality. Those metrics will not add up or roll up to quantitatively support the entire business's revenue and profit goals. But those metrics do logically reinforce the entire business's strategy to grow through innovation. In the new world of alignment, then, it is most critical to ask whether the outcome-based goals set across a relevant series of working arenas are logically aligned, not just arithmetically aligned. Table 1 lists many of today's most compelling performance challenges and suggests whether readers will find it easier to discover logical alignment quantitatively, qualitatively, or through some combination of both.
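The distinction between arithmetic and logical alignment can be made concrete with a small sketch; the arenas, targets, and strategy themes below are hypothetical:

```python
# Hypothetical working arenas, each with a goal tagged by strategy theme.
strategy_themes = {"grow through innovation", "profitability"}

arena_goals = [
    {"arena": "NPD team", "metric": "cycle time (weeks)",
     "theme": "grow through innovation"},                # qualitative link only
    {"arena": "sales region A", "metric": "revenue ($M)",
     "theme": "profitability", "target": 12.0},          # quantitative
    {"arena": "sales region B", "metric": "revenue ($M)",
     "theme": "profitability", "target": 8.0},           # quantitative
]

# Arithmetic alignment: quantitative targets roll up toward a business goal.
revenue_rollup = sum(g.get("target", 0.0) for g in arena_goals)

# Logical alignment: every goal, quantitative or not, reinforces a declared
# strategy theme even when it does not add up numerically.
logically_aligned = all(g["theme"] in strategy_themes for g in arena_goals)
```

The cycle-time goal never appears in the revenue roll-up, yet the logical check still confirms it reinforces the growth strategy, which is exactly the point of the section.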
TABLE 1 Aligning Performance Challenges. For each of today's performance challenges (core competencies, customer service, diversity, electronic commerce, growth, innovation, mergers/acquisitions, profitability, reengineering, relationship-based marketing, speed, strategy, teams, technology, total quality, and values/behaviors/best place to work), the table marks whether logical alignment is found through quantitative alignment, qualitative alignment, or both.
Source: Smith 1999.
1008
PERFORMANCE IMPROVEMENT MANAGEMENT
4.3. Leadership of Both the Formal and Informal Organization During Periods of Change
Leaders who must manage performance in the face of change will be more likely to succeed if they attend to both the formal and informal aspects of the organization. Those who focus solely on the formal, official organization will fail. Not only will they fall into the trap of believing that formal goal alignment is in place so long as the financial numbers add up, but they will also fail to address the most critical performance-management challenge of all: how to get existing employees and managers to take the risk to do things differently. When performance itself depends on lots and lots of existing people learning new skills, behaviors, and working relationships, leaders are faced with behavior-driven change. By contrast, if a new strategy or direction can be accomplished based on existing skills, then leaders are faced with decision-driven change. Many of the old assumptions reviewed in Section 2 work better for decision-driven change than for behavior-driven change. However, an ever-increasing number of performance challenges now require leaders to master the disciplines for managing behavior-driven change.
Here are four questions that will help you tell the difference between decision-driven and behavior-driven change:
1. Does all or any significant part of your organization have to get very good at one or more things that it is not good at today?
2. Do lots of already employed people have to change specific skills, behaviors, and/or working relationships?
3. Does your organization have a positive record of success with changes of the type you are considering?
4. Do those people who must implement the new decisions and directions understand what they need to do and urgently believe the time to act is now?
If the answer is no to the first two questions and yes to the second two, you can employ a decision-driven change approach. If the answer is yes to the first two questions and no to the second two, you are facing behavior-driven change. If you do face decision-driven change, we suggest the following best practices (Kotter 1996):
1. Establishing a sense of urgency: examining the market and competitive realities; identifying and discussing crises, potential crises, or major opportunities.
2. Creating the guiding coalition: putting together a group with enough power to lead the change; getting the group to work together like a team.
3. Developing a vision and strategy: creating a vision to help direct the change effort; developing strategies for achieving that vision.
4. Communicating the change vision: using all available avenues to communicate the vision constantly; having the guiding coalition role-model the behavior expected of employees.
5. Empowering broad-based action: getting rid of obstacles; changing systems or structures that undermine the change vision; encouraging risk taking and nontraditional ideas, activities, and actions.
6. Generating short-term wins: planning for visible improvements in performance, or "wins"; creating those wins; visibly recognizing and rewarding people who made the wins possible.
7. Consolidating gains and producing more change: using increased credibility to change all systems, structures, and policies that don't fit together and don't fit the transformation vision; hiring, promoting, and developing people who implement the change vision; reinvigorating the process with new projects, themes, and change agents.
8. Anchoring new approaches in the culture: creating better performance through customer- and productivity-oriented behavior, more and better leadership, and more effective management; articulating the connections between new behaviors and organizational success; developing means to ensure leadership development and succession.
If, however, you are confronted with behavior-driven change, Kotter's transformational leadership approaches will not necessarily work. Indeed, as extensively discussed in Smith (1996), study after study shows that up to four out of five change efforts either fail or seriously suboptimize. And the root cause of these failures lies in leaders who follow decision-driven approaches like those suggested by Kotter when, in fact, they face behavior-driven change. Managing performance through a period of behavior-driven change demands a different approach. Here is a series of best practices:
1. Keep performance, not change, as the primary objective of behavior and skill change.
2. Focus on continually increasing the number of people taking responsibility for their own performance and change.
3. Ensure that each person always knows why his or her performance and change matter to the purpose and results of the whole organization.
4. Put people in a position to learn by doing and provide them with the information and support needed just in time to perform.
5. Embrace improvisation; experiment and learn; be willing to fail.
6. Use performance to drive change whenever demanded.
7. Concentrate organization designs on the work people do, not on the decision-making authority they have.
8. Create and focus your two scarcest resources during behavior-driven change: energy and meaningful language.
9. Harmonize and integrate the change initiatives in your organization, including those that are decision driven as well as behavior driven.
10. Practice leadership based on the courage to live the change you wish to bring about; walk the talk.
Please take these and practice. Get others to try. They will make a huge and lasting difference for you personally and for your organizations.
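The four-question test for distinguishing the two kinds of change can be sketched as a small decision rule. This is a hypothetical encoding of the questions above, not a tool from the chapter:

```python
def classify_change(new_capabilities_needed: bool,
                    many_people_must_change: bool,
                    positive_track_record: bool,
                    people_understand_and_feel_urgency: bool) -> str:
    """Apply the four-question test: 'no' to the first two questions and
    'yes' to the second two indicates decision-driven change; the reverse
    indicates behavior-driven change; any other answer pattern is mixed."""
    if (not new_capabilities_needed and not many_people_must_change
            and positive_track_record and people_understand_and_feel_urgency):
        return "decision-driven"
    if (new_capabilities_needed and many_people_must_change
            and not positive_track_record
            and not people_understand_and_feel_urgency):
        return "behavior-driven"
    return "mixed"
```

The "mixed" branch is an assumption on our part: the text only defines the two pure patterns, but real organizations will often answer the questions inconsistently and must judge which discipline dominates.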
4.4. Bringing It All Together
We will close with a few suggestions on how your organization can put it all together into an integrated performance outcomes-management system. The concepts, frameworks, and techniques presented in this chapter can be deployed to establish an outcomes-management system in your organization. The objective is performance. You should seek to establish a system and set of practices to help the people of your enterprise routinely set and update the SMART outcome-based goals that matter most to success, as well as to choose which management disciplines to use to achieve their goals. The outcomes-management system will enable everyone to see how the goals in your organization fit together and make sense from a variety of critical perspectives, from top management (the whole) to each small group, working arena, and performance challenge and each individual throughout the organization. Figure 6 summarizes the skeletal design of an outcomes-management system for any single business. It brings visibility to the outcomes that matter most to the customers, shareholders, and people of the business. It then links these outcomes to the most critical functions, processes, and initiatives that contribute to overall success. This design can be extended to the multibusiness corporation and can be driven down through the organization to every shared service or working arena that is critical to success. See Smith (1999) for more illustrations of how this cascading performance outcomes-management system model can work. An integrated performance model requires that you create the critical linkages, bring visibility to the interdependencies among working arenas, and drive performance through aggressive planning and execution. But avoid the trap of spending all the organization's energy on the activity of creating the integrated plan. Make sure you remember the paramount rule: focus on performance outcomes, not activity.
Figure 6 Business Outcomes Management System: Design. (Adapted from Smith 1999.) The figure links the key business outcomes for customers, shareholders, and the people of the business and their partners to key outcomes for each initiative, function, and process (#1 through n), and within each of these to key individual/team outcomes.
5. CONCLUDING THOUGHTS
We have presented a view of how organizations should view performance in order to improve it. We recommend that you combine approaches that affect both the formal and the informal systems in an organization. Success comes not from a single program, slogan, or initiative, but from working the total picture. We have covered a lot of ground around our original three themes: change, measurement, and combining informal and formal organizational approaches to achieve performance success. We have tried to share with you our experiences and to give you ideas and techniques that will enhance your perspectives on performance management and will also allow you to try new things today. We hope we have succeeded.
REFERENCES
Kanter, R. M. (1983), The Change Masters, Simon & Schuster, New York.
Kaplan, R., and Norton, D. (1995), The Balanced Scorecard, Harvard Business School Press, Boston.
Kotter, J. P. (1996), Leading Change, Harvard Business School Press, Boston.
Malone, T. W., and Laubacher, R. J. (1998), "The Dawn of the E-Lance Economy," Harvard Business Review, Vol. 76, September-October, pp. 145-152.
Nadler, D. A., Gerstein, M. S., Shaw, R. B., and Associates (1992), Organizational Architecture, Jossey-Bass, San Francisco.
Senge, P., Ross, R., Smith, B., Roberts, C., and Kleiner, A. (1992), The Fifth Discipline Fieldbook, Doubleday, New York.
Smith, D. K. (1996), Taking Charge of Change, Addison-Wesley, Reading, MA.
Smith, D. K. (1999), Make Success Measurable, John Wiley & Sons, New York.
Stalk, G., and Hout, T. M. (1990), Competing against Time: Time-Based Competition Is Reshaping Global Markets, Free Press, New York.
III.B
Human Factors and Ergonomics
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
components are more demanding and crucial for task performance than the physical components. We call these tasks cognitive. Design, managerial and production planning, computer programming, medical diagnosis, process control, air traffic control, and fault diagnosis in technological systems are typical examples of cognitive tasks. Traditionally, cognitive tasks have been carried out by white-collar workers, middle and upper cadres of enterprises, as well as several freelance professionals. With the advent of information technology and automation in modern industrial settings, however, the role of blue-collar workers has changed from manual controllers to supervisors and diagnosticians. In other words, developments in technology are likely to affect the requirements for knowledge and skills and the way operators interact with systems. Consequently, cognitive tasks abound in modern industries.
In view of these changes in cognitive and collaborative demands, ergonomics must play a crucial role in matching technological options and user requirements. Cognitive ergonomics, or cognitive engineering, is an emerging branch of ergonomics that places particular emphasis on the analysis of the cognitive processes (e.g., diagnosis, decision making, and planning) required of operators in modern industries. Cognitive ergonomics aims to enhance performance of cognitive tasks by means of several interventions, including:
User-centered design of human-machine interaction
Design of information technology systems that support cognitive tasks (e.g., cognitive aids)
Development of training programs
Work redesign to manage cognitive workload and increase human reliability
Successful ergonomic interventions in the area of cognitive tasks require a thorough understanding not only of the demands of the work situation, but also of user strategies in performing cognitive tasks and of limitations in human cognition. In some cases, the artifacts or tools used to carry out a task may impose their own constraints and limitations (e.g., navigating through a large number of VDU pages), which add to the total work demands. In this sense, the analysis of cognitive tasks should examine the interaction of users both with their work environment and with artifacts or tools; the latter is very important as modern artifacts (e.g., control panels, electronic procedures, expert systems) become increasingly sophisticated. As a result, this chapter puts particular emphasis on how to design man-machine interfaces and cognitive aids so that human performance is sustained in work environments where information may be unreliable, events may be difficult to predict, goals may have conflicting effects, and performance may be time constrained. Other types of situations are also considered, as they entail performance of cognitive tasks. Variations from everyday situations are often associated with an increase in human errors, as operators are required to perform several cognitive tasks, such as detecting variations, tailoring old methods or devising new ones, and monitoring performance for errors. Sections 2 and 3 introduce human performance models that provide the basic framework for a cognitive analysis of the work environment, artifact constraints, and user strategies. Design principles and guidelines derived from these models are also presented in these sections. Section 4 describes the methodology of cognitive task analysis. Finally, Section 5 discusses two case studies presenting the cognitive analysis carried out for the design of cognitive aids for complex tasks.
2. MODELS OF HUMAN COGNITION AND DESIGN PRINCIPLES
Models of human cognition that have been extensively used in ergonomics to develop guidelines and design principles fall into two broad categories. Models in the first category have been based on the classical paradigm of experimental psychology (also called the behavioral approach), focusing mainly on information-processing stages. Behavioral models view humans as "fallible machines" and try to determine the limitations of human cognition in a neutral fashion, independent of the context of performance, the goals and intentions of the users, and the background or history of previous actions. On the other hand, more recent models of human cognition have been developed mainly through field studies and the analysis of real-world situations. Quite a few of these cognitive models have been inspired by Gibson's (1979) work on ecological psychology, emphasizing the role of a person's intentions, goals, and history as central determinants of human behavior. The human factors literature is rich in behavioral and cognitive models of human performance. Because of space limitations, however, only three generic models of human performance will be presented here; they have found extensive applications. Section 2.1 presents a behavioral model developed by Wickens (1992), the human information-processing model. Sections 2.2 and 2.3 present two cognitive models, the action-cycle model of Norman (1988) and the skill-, rule-, and knowledge-based model of Rasmussen (1986).
2.1. The Human Information-Processing Model
2.1.1. The Model
The experimental psychology literature is rich in human performance models that focus on how humans perceive and process information. Wickens (1992) has summarized this literature into a generic model of information processing (Figure 1). This model draws upon a computer metaphor whereby information is perceived by appropriate sensors, held and processed in a temporary memory (i.e., the working memory, or RAM), and finally acted upon through dedicated actuators. Long-term memory (corresponding to a permanent form of memory, e.g., the hard disk) can be used to store well-practiced work methods or algorithms for future use. A brief description of the human information-processing model follows.
COGNITIVE TASKS
1015
Figure 1 The Human Information-Processing Model. (Adapted from Wickens 1992.) Information from the work environment passes through receptors and short-term sensory memory to perception, then to working memory (where decision making, diagnosis, response selection, and similar functions occur) and on to response execution, drawing throughout on attention resources and on long-term memory.
Information is captured by sensory systems, or receptors, such as the visual, auditory, vestibular, gustatory, olfactory, tactile, and kinesthetic systems. Each sensory system is equipped with a central storage mechanism called the short-term sensory store (STSS), or simply short-term memory. The STSS prolongs a representation of the physical stimulus for a short period after the stimulus has terminated. It appears that environmental information is stored in the STSS even if attention is diverted elsewhere. The STSS is characterized by two types of limitations: (1) storage capacity, the amount of information that can be stored in the STSS, and (2) decay, how long information stays in the STSS. Although there is strong experimental evidence for these limitations, there is some controversy regarding their numerical values. For example, experiments have shown that short-term visual memory capacity varies from 7 to 17 letters and auditory capacity from 4.4 to 6.2 letters (Card et al. 1986). With regard to memory decay rate, experiments have shown that it varies from 70 to 1000 msec for visual short-term memory, whereas for auditory short-term memory the decay varies from 900 to 3500 msec (Card et al. 1986). In the next stage, perception, stimuli stored in the STSS are processed further; that is, the perceiver becomes conscious of their presence and recognizes, identifies, or classifies them. For example, the driver first sees a red traffic light, then detects it, recognizes that it is a signal related to the driving task, and identifies it as a stop sign. A large number of different physical stimuli may be assigned to a single perceptual category. For example, the letter a written in various typefaces and cases and the spoken sound "a" all generate the same categorical perception: the letter a. At the same time, other characteristics or dimensions of the stimulus are also processed, such as whether the letter is spoken by a male or female voice, whether it is written in upper- or lowercase, and so on. This example shows that stimulus perception and encoding depend upon available attention resources and upon personal goals and knowledge as stored in long-term memory (LTM). This is the memory where perceived information and knowledge acquired through learning and training are stored permanently. As in working memory, the information in long-term memory can have any combination of auditory, spatial, and semantic characteristics. Knowledge can be either procedural (i.e., how to do things) or declarative (i.e., knowledge of facts). The goals of the person provide a workspace within which perceived stimuli and past experiences or methods retrieved from LTM are combined and processed further. This workspace is often called working memory because the person has to be conscious of and remember all presented or retrieved information. However, the capacity of working memory also seems to be limited. Miller (1956) was the first to define the capacity of working memory.
At the final stage of information processing (response execution), the responses chosen in the previous stages are executed. A response can be any kind of action, such as eye or head movements, hand or leg movements, and verbal responses. Attention resources are also required at this stage because intrinsic (e.g., kinesthetic) and/or extrinsic feedback (e.g., visual, auditory) can be used to monitor the consequences of the actions performed. A straightforward implication of the information-processing model shown in Figure 1 is that performance can become faster and more accurate when certain mental functions become "automated" through increased practice. For instance, familiarity with machine drawings may enable maintenance technicians to focus on the cognitive aspects of the task, such as troubleshooting; less-experienced technicians would spend much more time identifying technical components from the drawings and carrying out appropriate tests. As people acquire more experience, they are better able to time-share tasks because well-practiced aspects of the job become automated (that is, they require less attention and effort).
2.1.2. Practical Implications
The information-processing model has been very useful in examining the mechanisms underlying several mental functions (e.g., perception, judgment, memory, attention, and response selection) and the work factors that affect them. For instance, signal detection theory (Green and Swets 1988) describes a human detection mechanism based on the properties of response criterion and sensitivity. Work factors, such as knowledge of action results, introduction of false signals, and access to images of defectives, can increase human detection and provide a basis for ergonomic interventions. Other mental functions (e.g., perception) have also been considered in terms of several processing modes, such as serial and parallel processing, and have contributed to principles for designing man-machine interfaces (e.g., head-up displays that facilitate parallel processing of data superimposed on one another). In fact, there are several information-processing models looking at specific mental functions, all of which subscribe to the same behavioral approach. Because information-processing models have looked at the mechanisms of mental functions and the work conditions that degrade or improve them, ergonomists have often turned to them to generate guidelines for interface design, training development, and job-aid design. To facilitate information processing, for instance, control panel information can be cast into meaningful chunks, thus increasing the amount of information to be processed as a single perceptual unit.
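The two properties named by signal detection theory, sensitivity and response criterion, can be computed from hit and false-alarm rates. The formulas below are the standard textbook SDT measures rather than anything given in this chapter, and the inspection rates are invented:

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, false_alarm_rate: float):
    """Return sensitivity d' and response criterion c from hit and
    false-alarm rates, using the standard signal detection formulas:
    d' = z(H) - z(FA),  c = -(z(H) + z(FA)) / 2."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2
    return d_prime, criterion

# A hypothetical inspector who detects 84% of defectives with a 16%
# false-alarm rate: good sensitivity, roughly unbiased criterion.
d_prime, c = sdt_measures(0.84, 0.16)
```

Work factors such as knowledge of results or access to images of defectives aim to raise d' (true discriminability), while false signals are typically introduced to shift the criterion c toward reporting more defects.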
For example, dials and controls can be designed and laid out on a control panel according to the following ergonomic principles (Helander 1987; McCormick and Sanders 1987):
Frequency of use and criticality: dials and controls that are frequently used, or are of special importance, should be placed in prominent positions, for example in the center of the control panel.
Sequential consistency: when a particular procedure is always executed in a sequential order, controls and dials should be arranged according to this order.
Topological consistency: where the physical location of the controlled items is important, the layout of dials should reflect their geographical arrangement.
Functional grouping: dials and controls that are related to a particular function should be placed together.
Many systems, however, fail to adhere to these principles; for example, the layout of stove burner controls often fails to conform to the topological consistency principle (see Figure 2). Controls located beside their respective burners (Figures 2(b) and 2(d)) are compatible and will eliminate the confusions caused by the arrangements shown in Figures 2(a) and 2(c). Information-processing models have also found many applications in the measurement of mental workload. The limited-resource model of human attention (Allport 1980) has been extensively used to examine the degree to which different tasks may rely upon similar psychological resources or mental functions. Presumably, the higher the reliance upon similar mental functions, the higher the mental workload experienced by the human operator. Measures of workload, hence, have tended to rely upon the degree of sharing of similar mental functions and their limited capacities. Information-processing models, however, seem insufficient to account for human behavior in many cognitive tasks where knowledge and strategy play an important role. Recent studies in tactical decision making (Serfaty et al. 1998), for instance, have shown that experienced operators are able to maintain their mental workload at low levels, even when work demands increase, because they can change their strategies and their use of mental functions. Under time pressure, for instance, crews may change from an explicit mode of communication to an implicit mode whereby information is made available without previous requests; the crew leader may keep all members informed of the "big picture" so that they can volunteer information when necessary without excessive communications. Performance of experienced personnel may become adapted to the demands of the situation,
Figure 2 Alternative Layouts of Stove Burner Controls.
thus overcoming the high workload imposed by previous methods of work and communication. This interaction of expert knowledge and strategy in the use of mental functions has been better explored by two other models of human cognition, described below.
2.2. The Action-Cycle Model
2.2.1. The Model
Many artifacts seem rather difficult to use, often leading to frustrations and human errors. Norman (1988) was particularly interested in how equipment design could benefit from models of human performance. He developed the action-cycle model (see Figure 3), which examines how people set and achieve goals by acting upon the external world. This action-perception cycle entails two main cognitive processes by which people implement goals (i.e., the execution process) and make further adjustments on the basis of perceived changes and evaluations of goals and intentions (i.e., the evaluation process). The starting point of any action is some notion of what is wanted, that is, the goal to be achieved. In many real tasks, this goal may be imprecisely specified with regard to the actions that would lead to it, such as "I want to write a letter." To lead to actions, human goals must be transformed into specific statements of what is to be done. These statements are called intentions. For example, I may decide to write the letter by using a pencil or a computer or by dictating it to my secretary. To satisfy intentions, a detailed sequence of actions must be thought of (i.e., planning) and executed by manipulating several objects in the world. On the other hand, the evaluation side of
Figure 3 The Action-Cycle Model. (Adapted from Norman 1988.) On the execution side, goals lead to an intention, planning of an action sequence, and execution of the actions upon the world; on the evaluation side, the state of the world is perceived, the perceptions are interpreted, and the interpretations are evaluated against the goals.
things has three stages: first, perceiving what happened in the world; second, making sense of it in terms of needs and intentions (i.e., interpretation); and finally, comparing what happened with what was wanted (i.e., evaluation). The action-cycle model consists of seven action stages: one for goals (forming the goal), three for execution (forming the intention, specifying actions, executing actions), and three for evaluation (perceiving the state of the world, interpreting the state of the world, evaluating the outcome). As Norman (1988, p. 48) points out, the action-cycle model is an approximate model rather than a complete psychological theory. The seven stages are almost certainly not discrete entities. Most behavior does not require going through all stages in sequence, and most tasks are not carried out by single actions. There may be numerous sequences, and the whole task may last hours or even days. There is a continual feedback loop, in which the results of one action cycle are used to direct further ones, goals lead to subgoals, and intentions lead to subintentions. There are action cycles in which goals are forgotten, discarded, or reformulated.
2.2.2. The Gulfs of Execution and Evaluation
The action-cycle model can help us understand many difficulties and errors in using artifacts. Difficulties in use are related to the distance (or amount of mental work) between intentions and possible physical actions, or between observed states of the artifact and interpretations. In other words, problems arise either because the mappings between intended actions and equipment mechanisms are insufficiently understood, or because action feedback is rather poor. Several gulfs separate the mental representations of the person from the physical states of the environment (Norman 1988). The gulf of execution reflects the difference between intentions and allowable actions. The more the system allows a person to do the intended actions directly, without any extra mental effort, the smaller the gulf of execution is. A small gulf of execution ensures high usability of equipment. Consider, for example, faucets for cold and hot water. The user's intention is to control two parameters: the water temperature and the volume. Consequently, users should be able to do that with two controls, one for each parameter. This would ensure a good mapping between intentions and allowable actions. In conventional settings, however, one faucet controls the volume of cold water and the other the volume of hot water. To obtain water of a desired volume and temperature, users must try several combinations of faucet adjustments, hence losing valuable time and water. This is an example of bad mapping between intentions and allowable actions (a large gulf of execution). The gulf of evaluation reflects the amount of effort that users must exert to interpret the physical state of the artifact and determine how well their intentions have been met. The gulf of evaluation is small when the artifact provides information about its state that is easy to get, easy to interpret, and matched to the way the user thinks of the artifact.
2.2.3. Using the Action-Cycle Model in Design
According to the action-cycle model, the usability of an artifact can be increased when its design bridges the gulfs of execution and evaluation. The seven-stage structure of the model can be cast as a list of questions to consider when designing artifacts (Norman 1988). Specifically, the designer may ask how easily the user of the artifact can:
1. Determine the function of the artifact (i.e., setting a goal)
2. Perceive what actions are possible (i.e., forming intentions)
3. Determine what physical actions would satisfy intentions (i.e., planning)
4. Perform the physical actions by manipulating the controls (i.e., executing)
5. Perceive what state the artifact is in (i.e., perceiving state)
6. Achieve his or her intentions and expectations (i.e., interpreting states)
7. Tell whether the artifact is in a desired state or intentions should be changed (i.e., evaluating intentions and goals)
A successful application of the action-cycle model in the domain of human-computer interaction regards direct manipulation interfaces (Shneiderman 1983; Hutchins et al. 1986). These interfaces bridge the gulfs of execution and evaluation by incorporating the following properties (Shneiderman 1982, p. 251):
- Visual representation of the objects of interest
- Direct manipulation of objects, instead of commands with complex syntax
- Incremental operations whose effects are immediately visible and, on most occasions, reversible
COGNITIVE TASKS
The action-cycle model has been translated into a new design philosophy that views design as knowledge in the world. Norman (1988) argues that the knowledge required to do a job can be distributed partly in the head and partly in the world. For instance, the physical properties of objects may constrain the order in which parts can be put together, moved, picked up, or otherwise manipulated. Apart from these physical constraints, there are also cultural constraints, which rely upon accepted cultural conventions. For instance, turning a part clockwise is the culturally defined standard for attaching a part to another, while counterclockwise movement usually results in dismantling a part. Because of these natural and artificial or cultural constraints, the number of alternatives for any particular situation is reduced, as is the amount of knowledge required within human memory. Knowledge in the world, in terms of constraints and labels, explains why many assembly tasks can be performed very precisely even when technicians cannot recall the sequence they followed in dismantling equipment; physical and cultural constraints reduce the alternative ways in which parts can be assembled. Therefore, cognitive tasks are more easily done when part of the required knowledge is available externally, either explicit in the world (i.e., labels) or readily derived through constraints.
2.3. The Skill-, Rule-, and Knowledge-Based Model
2.3.1. The Model
In a study of troubleshooting tasks in real work situations, Rasmussen (1983) observed that people control their interaction with the environment in different modes. The interaction depends on a proper match between the features of the work domain and the requirements of control modes. According to the skill-, rule-, and knowledge-based model (SRK model), control and performance of human activities seem to be a function of a hierarchically organized control system (Rasmussen et al. 1994). Cognitive control operates at three levels: skill-based, or automatic control; rule-based, or conditional control; and knowledge-based, or compensatory control (see Figure 4). At the lowest level, skill-based behavior, human performance is governed by patterns of preprogrammed behaviors represented as analog structures in a time-space domain in human memory. This mode of behavior is characteristic of well-practiced and routine situations whereby open-loop or feedforward control makes performance faster. Skill-based behavior is the result of extensive practice whereby people develop a repertoire of cue-response patterns suited to specific situations. When a familiar situation is recognized, a response is activated, tailored, and applied to the situation. Neither conscious analysis of the situation nor deliberation over alternative solutions is required.
Figure 4 The Skill-, Rule-, and Knowledge-Based Model: sensory inputs feed feature formation and automated sensorimotor patterns at the skill-based level; recognition, state/task association, and stored rules for tasks at the rule-based level; and interpretation, decision or choice of task, and planning toward goals at the knowledge-based level, with signals leading to actions. (Adapted from Rasmussen 1986)
At the middle level, rule-based behavior, human performance is governed by conditional rules of the type "if state X, then action Y." Initially, stored rules are formulated at a general level. They subsequently are supplemented by further details from the work environment. Behavior at this level requires conscious preparation: first, recognition of the need for action; then retrieval of past rules or methods; and finally, composition of new rules, either through self-motivation or through instruction. Rule-based behavior is slower and more cognitively demanding than skill-based behavior. Rule-based behavior can be compared to the function of an expert system where situations are matched to a database of production rules and responses are produced by retrieving or combining different rules. People may combine rules into macrorules by collapsing conditions and responses into a single unit. With increasing practice, these macrorules may become temporal-spatial patterns requiring less conscious attention; this illustrates the transition from rule-based to skill-based behavior. At the highest level, knowledge-based behavior, performance is governed by a thorough analysis of the situation and a systematic comparison of alternative means for action. Goals are explicitly formulated and alternative plans are compared rationally to maximize efficiency and minimize risks. Alternatives are considered and tested either physically, by trial and error, or conceptually, by means of thought experiments. The way that the internal structure of the work system is represented by the user is extremely important for performance. Knowledge-based behavior is slower and more cognitively demanding than rule-based behavior because it requires access to an internal or mental model of the system as well as laborious comparisons of work methods to find the optimal one. Knowledge-based behavior is characteristic of unfamiliar situations. As expertise evolves, a shift to lower levels of behavior occurs. This does not mean, however, that experienced operators always work at the skill-based level. Depending on the novelty of the situation, experts may move to higher levels of behavior when uncertain about a decision. The shift to the appropriate level of behavior is another characteristic of expertise and graceful performance in complex work environments.
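The expert-system analogy for rule-based behavior can be illustrated with a minimal production-rule matcher, sketched below in Python. The rules, state features, and actions are invented for illustration (they are not drawn from Rasmussen's work), and real production systems use far richer matching and conflict-resolution schemes.

```python
# Minimal sketch of rule-based behavior as a production system:
# situations are matched against "if <state> then <action>" rules.
# All rules and state features here are hypothetical.

rules = [
    ({"temperature": "high", "pressure": "high"}, "open relief valve"),
    ({"temperature": "high"}, "increase coolant flow"),
    ({"flow": "low"}, "check pump"),
]

def select_action(observed_state):
    """Return the action of the first rule whose conditions all hold.
    More specific rules (more conditions) are tried first, mimicking
    how detailed stored rules take precedence over general ones."""
    for conditions, action in sorted(rules, key=lambda r: -len(r[0])):
        if all(observed_state.get(k) == v for k, v in conditions.items()):
            return action
    return None  # no stored rule applies: fall back to knowledge-based reasoning

print(select_action({"temperature": "high", "pressure": "high"}))    # open relief valve
print(select_action({"temperature": "high", "pressure": "normal"}))  # increase coolant flow
print(select_action({"vibration": "high"}))                          # None
```

The `None` branch corresponds to the shift described above: when no stored rule matches, control must move up to the slower knowledge-based level.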
2.3.2. Using the SRK Model
The SRK model has been used by Reason (1990) to provide a framework for assigning errors to several categories related to the three levels of behavior. Deviations from current intentions due to execution failures and/or storage failures, for instance, are errors related to skill-based behavior (slips and lapses). Misclassifications of the situation, leading to the application of the wrong rule or incorrect procedure, are errors occurring at the level of rule-based behavior. Finally, errors due to limitations in cognitive resources (bounded rationality) and incomplete or incorrect knowledge are characteristic of knowledge-based behavior. More subtle forms of errors may occur when experienced operators fail to shift to higher levels of behavior. In these cases, operators continue skill- or rule-based behavior although the situation calls for analytical comparison of options and reassessment of the situation (i.e., knowledge-based processing). The SRK model also provides a useful framework for designing man-machine interfaces for complex systems. Vicente and Rasmussen (1992) advance the concept of ecological interfaces that exploit the powerful human capabilities of perception and action while at the same time providing appropriate support for more effortful and error-prone cognitive processes. Ecological interfaces aim at presenting information "in such a way as not to force cognitive control to a higher level than the demands of the task require, while at the same time providing the appropriate support for all three levels" (Vicente and Rasmussen 1992, p. 598). In order to achieve these goals, interfaces should obey the following three principles:
1. Support skill-based behavior: The operator should be able to act directly on the display, while the structure of presented information should be isomorphic to the part-whole structure of eye and hand movements.
2. Support rule-based behavior: The interface should provide a consistent one-to-one mapping between constraints of the work domain and cues or signs presented on the interface.
3. Support knowledge-based behavior: The interface should represent the work domain in the form of an abstraction hierarchy* to serve as an externalized mental model that supports problem solving.
* Abstraction hierarchy is a framework proposed by Rasmussen (1985; Rasmussen et al. 1994) that is useful for representing the cognitive constraints of a complex work environment. For example, the constraints related to process control have been found to belong to five hierarchically ordered levels (Vicente and Rasmussen 1992, p. 592): the purpose for which the system was designed (functional purpose); the intended causal structure of the process in terms of mass, energy, information, or value flows (abstract purpose); the basic functions that the plant is designed to achieve (generalized function); the characteristics of the components and the connections between them (physical function); and the appearance and spatial location of these components (physical form).
Figure 5 An Ecological Interface for Monitoring the Operation of a Nuclear Power Plant (From Lindsay and Staffon 1988, by permission. Work performed for U.S. Department of Energy at Argonne National Laboratory, Contract No. W-31-109-ENG-38)
An example of an ecological interface that supports direct perception of higher-level functional properties is shown in Figure 5. This interface, designed by Lindsay and Staffon (1988), is used for monitoring the operation of a nuclear power plant. Graphical patterns show the temperature profiles of the coolant in the primary and secondary circulation systems as well as the profiles of the feedwater in the steam generator, superheater, turbine, and condenser. Note that while the display is based on primary sensor data, the functional status of the system can be directly read from the interrelationships among data. The Rankine cycle display presents the constraints of the feedwater system in a graphic visualization format so that workers can use it in guiding their actions; in doing this, workers are able to rely on their perceptual processes rather than analytical processes (e.g., by having to solve a differential equation or engage in any other abstract manipulation). Plant transients and sensor failures can be shown as distortions of the rectangles representing the primary and secondary circulation, or as temperature points of the feedwater system outside the Rankine cycle. The display can be perceived, at the discretion of the observer, at the level of the physical implications of the temperature readings, of the state of the coolant circuits, and of the flow of energy.
3. DIAGNOSIS, DECISION MAKING AND ERGONOMICS
After this brief review of behavioral and cognitive models, we can now focus on two complex cognitive processes: diagnosis and decision making. These cognitive processes are brought into play in many problem-solving situations where task goals may be insufficiently specified and responses may not benefit from past knowledge. These characteristics are common to many problem-solving scenarios and affect how people shift to different modes of cognitive control. Problem solvers may use experiences from similar cases in the past, apply generic rules related to a whole category of problems, or try alternative courses of action and assess their results. Optimizing problem solving on the basis of knowledge-based behavior, however, may be time consuming and laborious. People tend to use several heuristics to regulate their performance between rule-based and knowledge-based processing. This increases task speed but may result in errors that are difficult to recover from.
In the following, we present a set of common heuristics followed by experienced diagnosticians and decision makers, potential biases and errors that may arise, and finally ergonomic interventions to support human performance.
3.1. Diagnosis
Diagnosis is a cognitive process whereby a person tries to identify the causes of an undesirable event or situation. Technical failures and medical problems are two well-known application areas of human diagnosis. Figure 6 presents some typical stages of the diagnosis process. Diagnosis starts with the perception of signals alerting one to a system failure or malfunction. Following this, diagnosticians may choose whether to search for more information to develop a mental representation of the current system state. At the same time, knowledge about system structure and functioning can be retrieved from long-term memory. On the basis of this evidence, diagnosticians may generate hypotheses about possible causes of the failure. Pending further tests, hypotheses may be confirmed, completing the diagnosis process, or rejected, leading to selection of new hypotheses. Compensation for failures may start when the diagnostician feels confident that a correct interpretation has been made of the situation. Fault diagnosis is a demanding cognitive activity whereby the particular diagnostic strategy will be influenced by the amount of information to be processed in developing a mental model of the situation, the number of hypotheses consistent with available evidence, and the activities required for testing hypotheses. In turn, these factors are influenced by several characteristics of the system or the work environment, including:
- The number of interacting components of the system
- The degrees of freedom in the operation of the system
- The number of system components that can fail simultaneously
- The transparency of the mechanisms of the system
Time constraints and high risk may also add up, thus increasing the difficulty of fault diagnosis. A particularly demanding situation is dynamic fault management (Woods 1994), whereby operators have to maintain system functions despite technical failures or disturbances. Typical fields of practice
Figure 6 Typical Processes in Fault Diagnosis.
where dynamic fault management occurs are flight deck operations, control of space systems, anesthetic management, and process control. When experienced personnel perform fault diagnosis, they tend to use several heuristics to overcome their cognitive limitations. Although heuristics may decrease mental workload, they may often lead to cognitive biases and errors. It is worth considering some heuristics commonly used when searching for information, interpreting a situation, and making diagnostic decisions. Failures in complex systems may give rise to large amounts of information to be processed. Experienced people tend to filter information according to its informativeness, that is, the degree to which it helps distinguish one failure from another. However, people may be biased when applying heuristics. For instance, diagnosticians may accord erroneous or equal informative value to a pattern of cues (as-if bias: Johnson et al. 1973). In other cases, salient cues, such as noises, bright lights, and abrupt onsets of intensity or motion, may receive disproportionate attention (salience bias: Payne 1980). Diagnosis may be carried out under psychological stress, which can impose limitations in entertaining a set of hypotheses about the situation (Rasmussen 1981; Mehle 1982; Lusted 1976). To overcome this cognitive limitation, experienced diagnosticians may use alternative strategies for hypothesis generation and testing. For instance, diagnosticians may start with a hypothesis that is seen as the most probable one; this probability may be either subjective, based on past experience, or communicated by colleagues or the hierarchy. Alternatively, they may start with a hypothesis associated with a high-risk failure, or a hypothesis that can be easily tested. Finally, they may start with a hypothesis that readily comes to mind (availability bias: Tversky and Kahneman 1974). Another heuristic used in diagnosis is anchoring, whereby initial evidence provides a cognitive anchor for the diagnostician's belief in this hypothesis over the others (Tversky and Kahneman 1974). Consequently, people may tend to seek information that confirms the initial hypothesis and avoid any disconfirming evidence (Einhorn and Hogarth 1981; DeKeyser and Woods 1990). This bias is also known as cognitive tunnel vision (Sheridan 1981). Finally, Einhorn and Hogarth (1981), Schustack and Sternberg (1981), and DeKeyser and Woods (1990) describe situations where diagnosticians tend to seek, and therefore find, information that confirms the chosen hypothesis and to avoid information or tests whose outcomes could reject it. This is known as confirmation bias. Two possible causes for the confirmation bias have been proposed: (1) people seem to have greater cognitive difficulty dealing with negative information than with positive information (Clark and Chase 1972); (2) abandoning a hypothesis and formulating a new one requires more cognitive effort than does searching for and acquiring information consistent with the first hypothesis (Einhorn and Hogarth 1981; Rasmussen 1981).
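As a rough illustration of the "most probable hypothesis first" strategy described above, the sketch below tests candidate faults in order of subjective prior probability and prunes those inconsistent with the observed symptoms. The faults, symptom sets, and priors are invented for the example; real diagnostic reasoning is far less tidy, and a confirmation-biased diagnostician would simply stop at the first consistent hypothesis.

```python
# Illustrative sketch of prior-ordered hypothesis testing in diagnosis.
# Faults, symptoms, and prior probabilities are hypothetical.

faults = {
    "pump failure": {"prior": 0.5, "symptoms": {"low flow", "noise"}},
    "valve stuck":  {"prior": 0.3, "symptoms": {"low flow", "high pressure"}},
    "sensor drift": {"prior": 0.2, "symptoms": {"inconsistent readings"}},
}

def diagnose(observed):
    """Consider hypotheses most-probable-first (the probability heuristic);
    keep only those whose symptom set covers every observed symptom."""
    ranked = sorted(faults, key=lambda f: -faults[f]["prior"])
    return [f for f in ranked if observed <= faults[f]["symptoms"]]

print(diagnose({"low flow"}))           # ['pump failure', 'valve stuck']
print(diagnose({"low flow", "noise"}))  # ['pump failure']
```

Note how gathering the extra symptom "noise" prunes the hypothesis set: this is the informativeness of a cue, its power to distinguish one failure from another.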
3.2. Decision Making
Decision making is a cognitive process whereby a person tries to choose a goal and a method that would stabilize the system or increase its effectiveness. In real-world situations, goals may be insufficiently defined, and thus goal setting becomes part of decision making. Furthermore, evaluation criteria for choosing among options may vary, including economic, safety, and quality considerations. In such cases, optimizing on most criteria may become the basis for a good decision. Managerial planning, political decisions, and system design are typical examples of decision making. Traditional models of decision making have adopted a rational approach that entailed:
1. Determining the goal(s) to be achieved and the evaluation criteria
2. Examining aspects of the work environment
3. Developing alternative courses of action
4. Assessing alternatives
5. Choosing an optimal course of action
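The evaluation and choice steps of this rational model are often formalized as a weighted-sum evaluation over alternatives; the sketch below is one such textbook formalization. The options, criteria weights, and ratings are invented for illustration.

```python
# Sketch of the rational decision model: score each alternative on
# weighted evaluation criteria and choose the best. All numbers are
# hypothetical ratings on a 0-10 scale.

criteria_weights = {"economic": 0.5, "safety": 0.3, "quality": 0.2}

alternatives = {
    "option A": {"economic": 7, "safety": 8, "quality": 6},
    "option B": {"economic": 9, "safety": 5, "quality": 7},
    "option C": {"economic": 6, "safety": 8, "quality": 9},
}

def score(ratings):
    """Weighted sum of an alternative's ratings over all criteria."""
    return sum(criteria_weights[c] * ratings[c] for c in criteria_weights)

best = max(alternatives, key=lambda a: score(alternatives[a]))
print(best, round(score(alternatives[best]), 2))  # option B 7.4
```

The heuristics discussed next can be read as departures from this procedure: real decision makers rarely enumerate all alternatives or rate them on every criterion before choosing.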
To overcome limitations in human cognition, experienced decision makers may use a number of heuristics. As in diagnosis, heuristics may allow decision makers to cope with complexity in real-world situations, but on the other hand, they may lead to cognitive biases and errors. For instance, decision makers may use information selectively and make speculative inferences based on limited data. Hayes-Roth (1979) has called this heuristic opportunistic thinking, which is similar to Simon's (1978) concepts of satisficing and search cost; in other words, decision makers may pursue a tradeoff between seeking more information and minimizing the cost of obtaining information. Although this strategy can simplify decision making, important data may be neglected, leading to erroneous or suboptimal decisions. To cope with complexity and time pressure, decision makers tend to acquire sufficient evidence to form a mental representation of the situation and examine a rather limited set of alternative actions (Marmaras et al. 1992). Instead of generating a complete set of alternatives at the outset and subsequently performing an evaluation based on optimizing several criteria, decision makers may start with an option that has been incompletely assessed. This heuristic could be attributed to the fact that experienced decision makers possess a repertoire of well-practiced responses accessed through recognition rather than conscious search (Simon 1978). Limited consideration of alternatives, however, can lead to ineffective practices. For instance, if the situation at hand differs in subtle ways from previous ones, suboptimal solutions may be adopted. Furthermore, in domains such as system design and managerial planning, new innovative solutions and radical departures may be of great advantage. In dynamic systems, the distinguishing features of a situation may change over time and new events may add up. Operators should reflect on their thinking and revise their assessment of the situation or their earlier decisions in order to take account of new evidence, interruptions, and negative feedback (Weick 1983; Schon 1983; Lindblom 1980). Under stress, however, experienced decision makers may fixate on earlier decisions and fail to revise them at later stages. Thinking/acting cycles may compensate to some extent for cognitive fixation on earlier decisions. That is, initial decisions can be effected on the basis of related experiences from similar situations, but their suitability can be evaluated after the first outcomes; in this way, decision makers can undertake corrective actions and tailor earlier decisions to new circumstances.
3.3. Supporting Diagnosis and Decision Making
To overcome these weaknesses of human cognition, many engineering disciplines, such as artificial intelligence, operations research, and supervisory control, have pursued the development of standalone expert systems that support humans in diagnosis and decision making. Although these systems have made a significant contribution to system design, their performance seems to degrade in unfamiliar situations because humans find it rather awkward to cooperate with them. Operators are forced to repeat the whole diagnostic or decision-making process instead of taking over from the computer advisor. There is a need, therefore, to develop cognitive advisors that enhance the cognitive processes of operators rather than to construct computer advisors capable of independent performance (Woods and Hollnagel 1987; Roth et al. 1987). To combat several biases related to diagnosis and decision making, support can be provided to human operators in the following ways:
- Bring together all information required to form a mental representation of the situation.
- Present information in appropriate visual forms.
- Provide memory aids.
- Design cognitive aids for overcoming biases.
- Make systems transparent to facilitate perception of different system states.
- Incorporate intelligent facilities fo
limitations and use constraints imposed by the tool itself (e.g., an interface or a computer program). On the other hand, user strategies can range from informal heuristics, retrieved from similar situations experienced in the past, to optimization strategies for unfamiliar events. The analysis of work constraints and user strategies is the main objective of cognitive task analysis (CTA). Cognitive task analysis involves a consideration of user goals, means, and work constraints in order to identify the what, how, and why of operators' work (Rasmussen et al. 1994; Marmaras and Pavard 1999). Specifically, CTA can be used to identify:
- The problem-solving and self-regulation strategies* adopted by operators
- The problem-solving processes followed (heuristics)
- The specific goals and subgoals of operators at each stage in these processes
- The signs available in the work environment (both formal and informal signs), the information carried by them, and the significance attached to them by humans
- The regulation loops used
- The resources of the work environment that could help manage workload
- The causes of erroneous actions or suboptimal performance
CTA differs from traditional task analysis, which describes the performance demands imposed upon the human operator in a neutral fashion (Drury et al. 1987; Kirwan and Ainsworth 1992) regardless of how operators perceive the problem and how they choose their strategies. Furthermore, CTA differs from methods of job analysis that look at occupational roles and positions of specific personnel categories (Davis and Wacker 1987; Drury et al. 1987).
4.1. A Framework for Cognitive Task Analysis
Cognitive task analysis deals with how operators respond to tasks delegated to them either by the system or by their supervisors. "Tasks" is used here to designate the operations undertaken to achieve certain goals under a set of conditions created by the work system** (Leplat 1990). CTA looks at several mental activities or processes that operators rely upon in order to assess the current situation, make decisions, and formulate plans of actions. CTA can include several stages:
1. Systematic observation and recording of operators' actions in relation to the components of the work system. The observed activities may include body movements and postures, eye movements, verbal and gestural communications, etc.
2. Interviewing the operators with the aim of identifying the why and when of the observed actions.
3. Inference of operators' cognitive activities and processes.
4. Formulation of hypotheses about operators' competencies,*** by interpreting their cognitive activities with reference to work demands, cognitive constraints, and possible resources to manage workload.
5. Validation of hypotheses by repeating stages 1 and 2 as required.
Techniques such as video and tape recording, equipment mock-ups, and eye tracking can be used to collect data regarding observable aspects of human performance. To explore what internal or cognitive processes underlie observable actions, however, we need to consider other techniques, such as thinking aloud while doing the job (i.e., verbal protocols) and retrospective verbalizations. For high-risk industries, event scenarios and simulation methods may be used when on-site observations are difficult or impossible. CTA should cover a range of work situations representing both normal and degraded conditions. As the analysts develop a better image of the investigated scenario, they may become increasingly aware of the need to explore other unfamiliar situations. Cognitive analysis should also be expanded into how different operating crews respond to the same work situation. Presumably, crews may differ in their performance because of varying levels of expertise, different decision-making styles, and different coordination patterns (see, e.g., Marmaras et al. 1997).
The inference of cognitive activities and formulation of hypotheses concerning user competencies requires familiarity with models of human cognition offered by cognitive psychology, ethnology, psycholinguistics, and organizational psychology. Several theoretical models, such as those cited earlier, have already found practical applications in eliciting cognitive processes of expert users. Newell and Simon's (1972) human problem-solving paradigm, for instance, can be used as a background framework for inferring operators' cognitive processes when they solve problems. Hutchins's (1990, 1992) theory of distributed cognition can support analysts in identifying operator resources in managing workload. Rasmussen's (1986) ladder model of decision making can be used to examine how people diagnose problems and evaluate goals. Norman's (1988) action-cycle model can be used to infer the cognitive activities in control tasks. Finally, Reason's (1990) model of human errors can be used to classify, explain, and predict potential errors as well as underlying error-shaping factors. Human performance models, however, have a hypothetical rather than a normative value for the analyst. They constitute his or her background knowledge and may support interpretation of observable activities and inference of cognitive activities. Cognitive analysis may confirm these models (totally or partially), enrich them, indicate their limits, or reject them. Consequently, although the main scope of cognitive analysis is the design of artifacts and cognitive advisory systems for complex tasks, it can also provide valuable insights at a theoretical level. Models of human cognition and behavior can provide practical input to ergonomics interventions when cast in the form of cognitive probes or questions regarding how operators search their environment, assess the situation, make decisions, plan their actions, and monitor their own performance. Table 1 shows a list of cognitive probes to help analysts infer the cognitive processes that underlie observable actions and errors.
TABLE 1 Examples of Cognitive Probes

Recognition
- What features were you looking at when you recognized that a problem existed?
- What was the most important piece of information that you used in recognizing the situation?
- Were you reminded of previous experiences in which a similar situation was encountered?

Interpretation
- At any stage, were you uncertain about either the reliability or the relevance of the information that you had available?
- Did you use all the information available to you when assessing the situation?
- Was there any additional information that you might have used to assist in assessing the situation?

Decision making
- What were your specific goals at the various decision points?
- Were there any other alternatives available to you other than the decision you made?
- Why were these alternatives considered inappropriate?
- At any stage, were you uncertain about the appropriateness of the decision?
- How did you know when to make the decision?

Planning
- Are there any situations in which your plan of action would have turned out differently?
- When you do this task, are there ways of working smart (e.g., combine procedures) that you have found especially useful?
- Can you think of an example when you improvised in this task or noticed an opportunity to do something better?
- Have you thought of any side effects of your plan and possible steps to prevent them or minimize their consequences?

Feedback and self-monitoring
- What would this result tell you in terms of your assessment of the situation or efficiency of your actions?
- At this point, do you think you need to change the way you were performing to get the job done?
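The probes in Table 1 can be treated as a simple lookup keyed by cognitive activity, so that an analyst can assemble an interview guide for whichever activities a task involves. The sketch below is illustrative only (the structure and function names are my own, not from the chapter), and it carries a subset of the probes for brevity.

```python
# Hypothetical sketch: Table 1's probes as a dictionary keyed by cognitive
# activity, with a helper that assembles an interview guide for the analyst.
COGNITIVE_PROBES = {
    "recognition": [
        "What features were you looking at when you recognized that a problem existed?",
        "What was the most important piece of information that you used in recognizing the situation?",
    ],
    "interpretation": [
        "Did you use all the information available to you when assessing the situation?",
        "Was there any additional information that you might have used to assist in assessing the situation?",
    ],
    "decision_making": [
        "What were your specific goals at the various decision points?",
        "How did you know when to make the decision?",
    ],
}

def interview_guide(activities):
    """Collect the probes for the given cognitive activities, in order."""
    return [probe for activity in activities for probe in COGNITIVE_PROBES[activity]]
```

In use, `interview_guide(["recognition", "interpretation"])` yields the four probes above in order, ready to structure a debriefing session.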
COGNITIVE TASKS
1027
This need to observe, interview, test, and probe operators in the course of cogn
itive analysis implies that the role of operators in the conduct of analysis is
crucial. Inference of cognitive activities and elicitation of their competencies
cannot be realized without their active participation. Consequently, explicit p
resentation of the scope of analysis and commitment of their willingness to prov
ide information are prerequisites in the conduct of CTA. Furthermore, retrospect
ive verbalizations and online verbal protocols are central to the proposed metho
dology (Ericsson and Simon 1984; Sanderson et al. 1989). CTA permits the develop
ment of a functional model of the work situation. The functional model should:

1. Describe the cognitive constraints and demands imposed on operators, including multiple goals and competing criteria for the good completion of the task; unreliable, uncertain, or excessive information to make a decision; and time restrictions.
2. Identify situations where human performance may become ineffective, as well as their potential causes, e.g., cognitive demands exceeding operator capacities and strategies that are effective under normal situations but may be inappropriate for the new one.
3. Identify error-prone situations and causes of errors or cognitive biases, e.g., irrelevant or superfluous information, inadequate work organization, poor workplace design, and insufficient knowledge.
4. Describe the main elements of operators' competencies and determine their strengths and weaknesses.
5. Describe how resources of the environment can be used to support the cognitive processes.

The functional model of the work situation can provide valuable input into the specification of user requirements, prototyping, and evaluation of cognitive aids. Specifically, based on elements 1, 2, and 3 of the functional model, situations and tasks for which cognitive aid would be desirable, and the ways such aid should be provided, can be determined and specified. This investigation can be made by responding to questions such as:
- What other information would be useful to the operators?
- Is there a more appropriate form in which to present the information already used, as well as the additional new information?
- Is it possible to increase the reliability of information?
- Could the search for information be facilitated, and how?
- Could the treatment of information be facilitated, and how?
- Could we provide memory supports, and how?
- Could we facilitate the complex cognitive activities carried out, and how?
- Could we promote and facilitate the use of the most effective diagnosis and decision-making strategies, and how?
- Could we provide supports that would decrease mental workload and mitigate degraded performance, and how?
- Could we provide supports that would decrease the occurrence of human errors, and how?
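The five-element functional model can be kept as a structured record that an analyst fills in during CTA, with elements 1-3 flagging where cognitive aid is worth considering. This is a minimal sketch under my own naming assumptions; the field names and helper are hypothetical, not from the chapter.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record of the five-element functional model described above.
@dataclass
class FunctionalModel:
    cognitive_demands: List[str] = field(default_factory=list)       # element 1
    ineffective_situations: List[str] = field(default_factory=list)  # element 2
    error_prone_situations: List[str] = field(default_factory=list)  # element 3
    operator_competencies: List[str] = field(default_factory=list)   # element 4
    environmental_supports: List[str] = field(default_factory=list)  # element 5

    def aiding_candidates(self):
        """Elements 1-3 point at situations where cognitive aid is desirable."""
        return (self.cognitive_demands
                + self.ineffective_situations
                + self.error_prone_situations)
```

Filling in `cognitive_demands=["time pressure"]` and `error_prone_situations=["superfluous information"]`, say, makes `aiding_candidates()` return both entries as the starting list for the aiding questions above.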
Cognitive aids can take several forms, including memory aids, computational tools, decision aids to avoid cognitive biases, visualization of equipment that is difficult to inspect, and situation-assessment aids. The functional model is also useful in designing man-machine interfaces to support retrieval of solutions and generation of new methods. By representing system constraints on the interface, operators may be supported in predicting side effects stemming from specific actions. On the other hand, the functional model can also be useful in specifying the competencies and strategies required in complex tasks and hence providing the content of skill training. Furthermore, based on the information provided by elements 4 and 5 of the functional model, the main features of the human-machine interface can also be specified, ensuring compatibility with operators' competencies. The way task objects should be represented by the system, the type of man-machine dialogues to be used, the procedures to be proposed, and generic or customizable elements of the system are examples of human-computer interface features that can be specified using the acquired data. Close cooperation among ergonomics specialists, information technology specialists, and stakeholders in the design project is required in order to examine what system functions should be supported by available information technology.
1028
PERFORMANCE IMPROVEMENT MANAGEMENT
4.2.
Techniques for Cognitive Task Analysis
CTA can be carried out using a variety of techniques, which, according to Reddin
g and Seamster (1994), can include cognitive interviewing, analysis of verbal pr
otocols, multi-dimensional scaling, computer simulations of human performance, a
nd human error analysis. For instance, Rasmussen (1986) has conducted cognitive
interviews to examine the troubleshooting strategies used by electronics technic
ians. Roth et al. (1992) have used cognitive environment simulation to investiga
te cognitive activities in fault management in nuclear power plants. Seamster et
al. (1993) have carried out extensive cognitive task analyses to specify instru
ctional programs for air traffic controllers. These CTA techniques have been used b
oth to predict how users perform cognitive tasks on prototype systems and to ana
lyze the difficulties and errors in already functioning systems. The former use is
associated with the design and development of user interfaces in new systems, wh
ile the latter use is associated with the development of decision support system
s or cognitive aids and training programs. The results of CTA are usually cast i
n the form of graphical representations that incorporate the work demands and us
er strategies. For cognitive tasks that have been encountered in the past, opera
tors may have developed well-established responses that may need some modifications
but nevertheless provide a starting framework. For unfamiliar tasks that have n
ot been encountered in the past or are beyond the design-basis of the system, op
erators are required to develop new methods or combine old methods in new ways.
To illustrate how the results of CTA can be merged in a graphical form, two tech
niques are presented: hierarchical task analysis and the critical decision metho
d.
4.2.1.
Hierarchical Task Analysis
The human factors literature is rich in task analysis techniques for situations
and jobs requiring rule-based behavior (e.g., Kirwan and Ainsworth 1992). Some of
these techniques can also be used for the analysis of cognitive tasks where wel
l-practiced work methods must be adapted to task variations and new circumstance
s. This can be achieved provided that task analysis goes beyond the recommended
work methods and explores task variations that can cause failures of human perfo
rmance. Hierarchical task analysis (Shepherd 1989), for instance, can be used to
describe how operators set goals and plan their activities in terms of work met
hods, antecedent conditions, and expected feedback. When the analysis is expande
d to cover not only normal situations but also task variations or changes in cir
cumstances, it would be possible to record possible ways in which humans may fai
l and how they could recover from errors. Table 2 shows an analysis of a process
control task where operators start up an oil refinery furnace. This is a safety-critical task because many safety systems are on manual mode, radio communications between control room and on-site personnel are intensive, side effects are not visible (e.g., accumulation of fuel in the firebox), and errors can lead to furnac
e explosions. A variant of hierarchical task analysis has been used to examine s
everal cognitive activities, such as goal setting and planning, and failures due
to slips and mistakes. Variations in human performance were examined in terms o
f how teams in different shifts would perform the same task and how the same tea
m would respond to changes in circumstances. A study by Kontogiannis and Embrey
(1997) has used this technique to summarize findings from online observations of pe
rformance, interviews with process operators about their work methods, near-miss
reviews, and critical incident analysis. The task analysis in Table 2 has provi
ded valuable input in revising the operating procedures for start-up: the sequen
ce of operations was reorganized, contingency steps were included for variations
in circumstances, check boxes were inserted for tracking executed steps, and wa
rnings and cautions were added to prevent human errors or help in their detectio
n and recovery. In addition, the task analysis in Table 2 has been used to redes
ign the computer-based process displays so that all information required for the
same task could be grouped and presented in the same screen. For instance, the
oxygen content in flue gases is an indicator of the efficiency of combustion (see last row in Table 2) and should be related to the flow rates of air and fuel; this impl
ies that these parameters are functionally related in achieving the required fur
nace temperature and thus should be presented on the same computer screen. The a
nalysis of cognitive tasks, therefore, may provide input into several forms of h
uman factors interventions, including control panel design, revision of operatin
g procedures, and development of job aids and training.
4.2.2.
Critical Decision Method
The Critical Decision Method (CDM) (Klein et al. 1989) is a retrospective cognit
ive task analysis based on cognitive interviews for eliciting expert knowledge,
decision strategies and cues attended to, and potential errors. Applications of
the CDM technique can be found in fireground command, tactical decision making in n
aval systems, ambulance emergency planning, and incident control in offshore oil
industries. The technique relies on subject matter experts (SMEs) recalling a p
articularly memorable incident they have experienced in the course of their work
. The sequence of events and actions are organized on a timeline that can be rea
rranged as SMEs remember other details of the
TABLE 2 Extract from the Analysis of Starting an Oil Refinery Furnace (a variant of hierarchical task analysis)

2. Establish flames on all burners.
Goal setting: Furnace has been purged with air and is free of residual fuel; flame has not been extinguished more than twice before.
Work method (planning): Light first burner (2.1) and take safety precautions (2.2) if flame is not healthy. Proceed with other burners.
Common errors and problem detection: Selects wrong equipment to test furnace. Forgets to investigate problem if a burner does not light after two attempts. Repeated failures to ignite burners may require shutting down the furnace.

2.1. Light selected burner.
Work method: Remove slip plate, open burner valve, and ignite burner.
Common errors and problem detection: Forgets to light and insert torch before removing slip plate from fuel pipe; an explosive mixture could be created in the event of a leaking block valve. Tries to light burner by flashing off from adjacent burners. Does not check and reset safety trips, which gives rise to burner failing to ignite.

2.2. Take safety precautions.
Goal setting: Burner flame has been extinguished.
Work method: Close burner valve and inject steam (steam injection for 2-3 minutes).
Common errors and problem detection: In the event of a burner failure (e.g., flame put out), compensates by increasing fuel to other burners.

3.1. Adjust supply (flow) of fuel.
Goal setting: Temperature of crude oil on target; all burners remain lighted throughout operation; burners are fitted with suitable fuel diffusion plugs.
Work method (planning): Adjust fuel flow (3.1) and air flow (3.2) in cycles until thermal load is achieved. Monitor fuel supply indicator and adjust fuel valve.
Common errors and problem detection: Forgets to put air/fuel ratio control and furnace temperature control on manual, which can cause sub-stoichiometric firing. Does not consult criterion table for changing plugs at different fuel supplies.

3.2. Adjust supply of combustion air.
Goal setting: Temperature of crude oil on target; oxygen content above limit (risk of explosion if oxygen falls below limit).
Work method: Monitor oxygen in flue gases.
incident. The next stage is to probe SMEs to elicit more information concerning
each major decision point. Cognitive probes address the cues attended to, the kn
owledge needed to make a decision, the way in which the information was presente
d, the assessment made of the situation, the options considered, and finally the basis for the final choice. The third stage of the CDM technique involves comparisons
between experts and novices. The participants or SMEs are asked to comment on t
he expected performance of a less-experienced person when faced with the same si
tuation. This is usually done in order to identify possible errors made by less
experienced personnel and potential recovery routes through better training, ope
rating procedures, and interface design. The results of the cognitive task analy
sis can be represented in a single format by means of a decision analysis table.
One of the most important aspects of applying the CDM technique is selecting ap
propriate incidents for further analysis. The incidents should refer to complex
events that challenged ordinary operating practices, regardless of the severity
of the incident caused by these practices. It is also important that a no-blame cult
ure exist in the organization so that participants are not constrained in their
description of events and unsuccessful actions. To illustrate the use of the CDM
technique, the response of the operating crew during the first half hour of the Gi
nna nuclear power incident is examined below, as reported in Woods (1982) and IN
PO (1982). The operating crew at the Ginna nuclear plant encountered a major eme
rgency in January 1982 due to a tube rupture in a steam generator; as a result,
radioactive coolant leaked into the steam generator and subsequently into the at
mosphere. In a pressurized water reactor such as Ginna, water coolant is used to
carry heat from the reactor to the steam generator (i.e., the primary loop); a
secondary water loop passes through the steam generator and the produced steam d
rives the turbine that generates electricity. A water leak from the primary to t
he secondary loop can be a potential risk when not isolated in time. In fact, th
e delayed isolation of the faulted or leaking steam generator was one of the con
tributory factors in the evolution of the incident. Woods (1982) uses a variant
of the CDM technique to analyze the major decisions of the operating crew at the
Ginna incident. Figure 7 shows a timeline of events and human actions plotted a
gainst a simplified version of the SRK model of Rasmussen. Detection refers to the
collection of data from control room instruments and the recognition of a famili
ar pattern; for instance, the initial plant symptoms leads to an initial recogni
tion of the problem as a steam generator tube rupture (i.e., a SGTR event). Howe
ver, alternative events may be equally plausible, and further processing of info
rmation at the next stage is required. Interpretation then involves identifying
other plausible explanations of the problem, predicting the criticality of the s
ituation, and exploring options for intervention. The formulation of specific actions for implementing a plan and the execution of actions are carried out at the following stage of control.

Figure 7 Analysis of the Response of the Crew During the First Half Hour at the Ginna Incident (an example of the critical decision method).

The large number of actions at the control stage provides a context for the work conditions that may affect overall performance;
for instance, high workload at this stage could prevent the crew from detecting
emerging events. The feedback stage is similar to the detection stage because bo
th are based on the collection of data; however, in the detection stage, observa
tion follows an alarm, while in the feedback stage, observation is a follow-up t
o an action. The decision flowchart in Figure 7 has been developed on the basis of
cognitive interviews with the operators involved and investigations of the opera
ting procedures in use. To understand the cognitive processes involved at critic
al decision points, analysts should employ several cognitive probes similar to t
hose shown in Table 1. It is worth investigating, for instance, the decision of
the crew to seek further evidence to identify the tube rupture in one of the two
steam generators (Figure 7, column 3). At this point in time, the crew was awar
e that the level in the B steam generator was increasing more rapidly than the l
evel of the A steam generator (i.e., due to tube rupture) with auxiliary feedwat
er flows established to both steam generators. However, the crew needed additional evidence to conclude their fault diagnosis. Was this delay in interpretation caused by insufficient plant knowledge? Did the crew interpret the consequences of the problem correctly? Were the instructions in the procedures clear? And, in general, what factors influenced this behavior of the crew? These are some of the cognitive probes necessary to help analysts explore the way that the crew interpreted the problem. As it appeared, the crew stopped the auxiliary feedwater flow to the B steam g
enerator and came to a conclusion when they observed that its level was still in
creasing with no input; in addition, an auxiliary operator was sent to measure r
adioactivity in the suspect steam generator (Figure 7, column 3). Nevertheless,
the crew isolated the B steam generator before feedback was given by the operato
r with regard to the levels of radioactivity. The cognitive interviews establish
ed that a plausible explanation for the delayed diagnosis was the high cost of m
isinterpretation. Isolation of the wrong steam generator (i.e., by closing the m
ain steam isolation valve [MSIV]) would require a delay to reopen it in order to
repressurize and establish this steam generator as functional. The crew also th
ought of a worse scenario in which the MSIV would stick in the closed position,
depriving the crew of an efficient mode of cooldown (i.e., by dumping steam directl
y to the condensers); in the worst-case scenario, the crew would have to operate
the atmospheric steam dump valves (ASDVs), which had a smaller cooling function
and an increased risk of radiological release. Other contextual factors that sh
ould be investigated include the presentation of instructions in the procedures
and the goal priorities established in training regimes. Notes for prompt isolat
ion of the faulty steam generator were buried in a series of notes placed in a s
eparate location from the instructions on how to identify the faulty steam gener
ator. In addition, the high cost of making an incorrect action (an error of comm
ission) as compared to an error of omission may have contributed to this delay.
In other words, experience with bad treatment of mistaken actions on the part of
the organization may have prompted the crew to wait for redundant evidence. Nev
ertheless, the eight-minute delay in diagnosis was somewhat excessive and reduce
d the available time window to control the rising water level in the B steam gen
erator. As a result, the B steam generator overflowed and contaminated water passed
through the safety valves; subsequent actions (e.g., block ASDV, column 5) due
to inappropriate wording of procedural instructions also contributed to this rel
ease. Cognitive analysis of the diagnostic activity can provide valuable insight
into the strategies employed by experienced personnel and the contextual factor
s that led to delays and errors. It should be emphasized that the crew performed
well under the stressful conditions of the emergency (i.e., safety-critical eve
nt, inadequate procedures, distractions by the arrival of extra staff, etc.) giv
en that they had been on duty for only an hour and a half. The analysis also rev
ealed some important aspects of how people make decisions under stress. When eve
nts have high consequences, experienced personnel take into account the cost of
misinterpretation and think ahead to possible contingencies (e.g., equipment stu
ck in closed position depriving a cooling function). Intermingling fault interpr
etation and contingency planning could be an efficient strategy under stress, provi
ded that people know how to switch between them. In fact, cognitive analysis of
incidents has been used as a basis for developing training programs for the nucl
ear power industry (Kontogiannis 1996). The critical decision method has also be
en used in eliciting cognitive activities and problem-solving strategies required in ambulance emergency planning (O'Hara et al. 1998) and fire-fighting command (Klein e
t al. 1986). Taxonomies of operator strategies in performing cognitive tasks in
emergencies have emerged, including maintaining situation assessment, matching r
esources to situation demands, planning ahead, balancing workload among crew mem
bers, keeping track of unsuccessful or interrupted tasks, and revising plans in
the light of new evidence. More important, these strategies can be related to specific cognitive tasks, and criteria can be derived for how to switch between altern
ative strategies when the context of the situation changes. It is conceivable th
at these problem-solving strategies can become the basis for developing cognitiv
e aids as well as training courses for maintaining them under psychological stre
ss.
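The CDM bookkeeping described above amounts to decision points placed on an incident timeline, each annotated with the cues attended to and the options considered, and later rearranged as SMEs recall more detail. The sketch below is my own illustration of that record-keeping (the class, fields, and example values are hypothetical; the example values only paraphrase the Ginna account).

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of CDM record-keeping: decision points on an incident
# timeline, annotated with cues, options, and an expert-vs-novice note.
@dataclass
class DecisionPoint:
    minute: float                 # position on the incident timeline
    stage: str                    # e.g., "detection", "interpretation", "control"
    cues: List[str] = field(default_factory=list)
    options_considered: List[str] = field(default_factory=list)
    novice_comparison: str = ""   # expected performance of a less-experienced person

def timeline(points):
    """Rearrange decision points chronologically as SMEs recall more detail."""
    return sorted(points, key=lambda p: p.minute)

pts = [
    DecisionPoint(8.0, "interpretation",
                  cues=["B steam generator level rising faster than A"],
                  options_considered=["isolate B now", "wait for radioactivity reading"]),
    DecisionPoint(1.0, "detection",
                  cues=["initial plant symptoms"],
                  options_considered=["SGTR event"]),
]
```

Re-sorting after each interview pass keeps the timeline consistent, and the per-point annotations are where the cognitive probes of Table 1 get recorded.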
5.
DESIGNING COGNITIVE AIDS FOR COMPLEX COGNITIVE TASKS
Recent developments in information technology and system automation have provide
d challenging occasions for developing computerized artifacts aiming to support
cognitive processes such as diagnosis, decision making, and planning. This enthu
siasm for the capabilities of new technology has resulted in an overreliance on
the merits of technology and the development of stand-alone systems capable of
undertaking fault diagnosis, decision making, and planning. Recent reviews of su
ch independent computer consultants (e.g., Roth et al. 1997), however, have foun
d them to be difficult to use and brittle in the face of novel circumstances. Woods
et al. (1990) argue that many of these systems cannot anticipate operator respo
nses, provide unsatisfactory accounts of their own goals, and cannot redirect th
eir line of reasoning in cases of misperception of the nature of the problem. To
make cognitive aids more intelligent and cooperative, it is necessary to examin
e the nature of human cognition in complex worlds and its coupling with the wide
r context of work (Marmaras and Pavard 2000). Because of the increasing interest
in developing cognitive aids for complex tasks, this section presents two case
studies that address this issue in an applied context.
5.1.
The Case of a Cognitive Aid for CNC Lathe Programming
The scope of this study was to design a cognitive aid for CNC lathe programming
(Marmaras et al. 1997) by adopting a problem-driven approach. This approach comb
ines an analysis of the task in terms of constraints and cognitive demands with
an analysis of user strategies to cope with the problem (Marmaras et al. 1992; W
oods and Hollnagel 1987). A cognitive task analysis was undertaken in the first place in order to identify the cognitive demands of the task and investigate likely
problem-solving strategies leading to optimal and suboptimal solutions. On the
basis of the cognitive task analysis and the results of a follow-up experiment,
an information technology cognitive aid was developed for CNC lathe programming.
A prototype of the cognitive aid was evaluated in a second experiment.
5.1.1.
Cognitive Task Analysis
Programming a CNC lathe requires the development of a program that will guide th
e lathe to transform a simple cylindrical part into a complex shape. This cognit
ive task requires planning and codification, and it is very demanding due to several interdependent constraints, competing criteria for task efficiency, delayed feedb
ack, and lack of memory aids for maintaining a mental image of the manufacturing
process. A cognitive analysis of the activities and strategies was conducted on
the basis of real-time observations of experienced operators programming their
CNC lathes and off-the-job interviews. The analysis identified three main cognitive
tasks:

1. Planning the whole process of manufacturing, which entails deciding upon:
- The order of cutting the several elements of the complex shape
- The cutting tool to be used at each stage of the cutting process
- The movement of the tool at each stage of the cutting process (e.g., starting and ending points, type of movement, and speed)
- The number of iterative movements of the tool at each stage
- The speed of rotation of the part

2. Codification of the manufacturing process into the programming language of the CNC machine. The operator has to consider the vocabulary of this language to designate the different objects and functions of the manufacturing process, as well as the grammar and syntax of the language.

3. Introduction of the program to the machine by using various editing facilities and by starting the execution of the program. It is worth noting that at this stage certain omission and syntax errors can be recognized by the computer logic and the operator may be called upon to correct them.

Table 3 summarizes the results o
he operator may be called upon to correct them. Table 3 summarizes the results o
f the cognitive analysis for the planning task. Experienced operators must take
a number of constraints into account when deciding upon the most suitable strate
gy to manufacture the part. These include:
- The shape of the part and the constraints imposed by its material.
- The constraints of the machine tool, e.g., the rotating movement of the part, the area and movement possibilities of the cutting tool, its dimensions, and its cutting capabilities.
- The product quality constraints derived from the specification; these are affected by the speeds of the part and tools, the type of tools, and the designation of their movements.
TABLE 3 Decision Table for the Task of Planning How to Cut a Part in a CNC Lathe

Task: Planning the manufacturing process
Decision choices: Order of cutting; cutting tools at each phase; movement of tools; iterative tool movements; rotation speed of the part
Constraints: Part shape and material constraints; tool constraints; product quality; manufacturing time
Sources of difficulty: Interdependent constraints; competing criteria for task efficiency; lack of real-time feedback; lack of aids for mental imagery of the manufacturing process
Strategies: Serial cutting strategy; optimization strategy
- The manufacturing time, which is affected by the number of tool changes, the movements the tools make without cutting (idle time of the tool), and the speeds of the part and cutting tools.
- The safety considerations, such as avoiding breaking or destroying the part or the tools, and safety rules concerning the area of tool movement and the speeds of the part and cutting tools.

An analysis of how experienced operators decide to use the CNC lathe in
cutting parts into complex shapes revealed a variety of problem strategies rang
ing from serial to optimized strategies. The simplest strategy, at one end of th
e spectrum, involved cutting each component of the part having a different shape
separately and in a serial order (e.g., from right to left). A number of heuris
tics have been introduced in the problem-solving process resulting in such a ser
ial strategy, including:
- Decomposition of the part into its components (problem decomposition)
- Specification and prioritization of several criteria affecting task efficiency (criteria prioritization)
- Simplification of the mental imagery of the manufacturing process (mental representation)
- Use of a unique frame of reference regarding the part to be manufactured (frames-of-reference minimization)
At the other end of the spectrum, a more complex strategy was adopted that relie
d on optimizing a number of criteria for task efficiency, such as product quality,
manufacturing time, and safety considerations. A case in point is a cutting stra
tegy that integrates more than one different shape into one tool movement, which
alters the serial order of cutting the different shapes of the part. This strat
egy would decrease the time-consuming changes of tools and the idle time, thus m
inimizing the manufacturing time. Optimized cutting strategies require complex p
roblem-solving processes, which may include:
- Adopting a holistic view of the part to be manufactured
- Considering continuously all the criteria of task efficiency
- Using dynamic mental imagery of the cutting process that includes all intermediate phases of the whole part
- Adopting two frames of reference, regarding the part to be manufactured and the cutting tools, in order to optimize their use

Two hypotheses have been formulated with respect
to the observed differences in the performance of CNC lathe programmers: (1) ver
y good programmers will spend more time on problem formulation before proceeding
to the specification of the program than good programmers, and (2) very good progr
ammers will adopt an optimized cutting strategy while good programmers would set
tle for a serial strategy. Consequently, very good programmers will adopt a more
complex problem-solving process than good programmers. To test these hypotheses
, an experiment was designed that would also provide valuable input into the cog
nitive task analysis utilized in this study. Details on the design of the experi
ment and the results obtained can be found in Marmaras et al. (1997).
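The time advantage of an optimized strategy can be made concrete with a little arithmetic: total manufacturing time is roughly cutting time plus tool-change overhead plus idle (non-cutting) tool movement, and integrating shapes into one tool movement reduces the last two terms. The figures below are hypothetical illustrations, not values from the study.

```python
# Illustrative arithmetic (hypothetical figures, not from the study): the
# optimized strategy saves time mainly by reducing tool changes and the
# idle (non-cutting) movement of the tool.
def manufacturing_time(cutting_min, tool_changes, idle_min, change_min=2.0):
    """Total time = cutting time + tool-change overhead + idle tool movement."""
    return cutting_min + tool_changes * change_min + idle_min

serial    = manufacturing_time(cutting_min=30.0, tool_changes=6, idle_min=8.0)
optimized = manufacturing_time(cutting_min=30.0, tool_changes=3, idle_min=4.0)
saving = 1 - optimized / serial  # fractional time saved by the optimized strategy
```

With these made-up numbers the optimized strategy saves 20% of the total time; the evaluation experiment reported roughly a 28% reduction when operators used the prototype cognitive aid.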
5.1.2.
User Requirements for a Cognitive Aid Supporting CNC Lathe Programming
The cognitive task analysis of programming the CNC lathe provided useful insight
s into the design of an information technology system that supports CNC programm
ers. The scope of this cognitive aid was twofold. On the one hand, the aid shoul
d guide programmers in using efficient cutting strategies; on the other hand, it should alleviate the cognitive demands that are often associated with optimized problem-solving strategies. Specifically, the cognitive aid could have the following features:

- Support users in deciding upon a cutting strategy at the front-end stages of the programming process
- Generate several suggestions about efficient strategies, e.g., "try to integrate as many different shapes as possible into one movement of the cutting tool" and "avoid many changes of the cutting tool"
- Support human memory throughout the whole programming process
- Provide real-time feedback by showing programmers the effects of their intermediate decisions through real-time simulation
- Facilitate the standard actions of the whole programming process

A description of the prototype cognitive aid that was de
veloped in accordance with the user requirements can be found in Marmaras et al.
(1997). Another experiment showed that the quality of the CNC lathe programs wa
s improved and the time required to program the lathe was decreased by approxima
tely 28% when operators used the proposed cognitive aid. Not only did the perfor
mance of good CNC lathe programmers improve, but so did the performance of the v
ery good programmers.
5.2.
The Case of a Cognitive Aid for Managerial Planning
This section presents a study of user requirements in the design of an information technology system that supports high-level managerial planning tasks in small to medium-sized enterprises (Laios et al. 1992; Marmaras et al. 1992). High-level managerial planning tasks, usually referred to as strategic or corporate planning, involve a series of cognitive processes preceding or leading to crucial decisions about the future course of a company. Managerial planning entails:
1. Strategies, or high-level decisions that have a long-lasting influence on the enterprise
2. Tactics, or lower-level strategies regarding policies and actions
3. Action plans that specify detailed courses of action for the future
In large companies, strategic planning is an iterative, lengthy, and highly formalized process involving many managers at several levels in the organization. In contrast, in small companies the manager-owner is the sole and most important actor and source of information on planning issues; in this case, planning is an individual cognitive process without the formalism required in larger organizations. Elements that affect planning decisions are past, present, and expected future states of the external environment of the firm, which have a complex relationship with the internal environment. Managers may perceive these elements as threats or opportunities in the marketplace (external environment) and strengths or weaknesses of the enterprise (internal environment). Effective planning decisions should neutralize threats from the external environment and exploit its opportunities, taking into account the strengths and weaknesses of the company. At the same time, effective planning decisions should increase the strengths of the company and decrease its weaknesses. A literature review and three case studies were undertaken in order to identify the main elements of the external and internal environments that may constrain managerial planning. The main constraints identified were:
- Complexity/multiplicity of factors: The external and internal environments of an enterprise include a large number of interacting factors. Market demand, intensity of competition, socioeconomic situation, labor market, and technology are examples of external environment factors. Product quality, process technology, distribution channels, and financial position are examples of internal environment factors.
- Change/unpredictability: The world in which firms operate is constantly changing. Very often these changes are difficult to predict with regard to their timing, impact, and size effects. Managers therefore face a great degree of uncertainty.
- Limited knowledge with respect to the final impact of planning decisions and actions: For example, what will be the sales increase resulting from an advertising campaign costing X, and what from a product design improvement costing Y?
- Interrelation between goals and decisions: Planning decisions made in order to achieve a concrete goal may refute, if only temporarily, some other goals. For example, renewal of production equipment aiming at increasing product quality may refute the goal of increased profits for some years.
- Risk related to planning decisions: The potential outcomes of planning decisions are crucial to the future of the enterprise. Inaccurate decisions may put the enterprise in jeopardy, often leading to important financial losses.
Valuable input to the cognitive analysis was provided by a series of structured interviews with managers from 60 small- to medium-sized enterprises, conducted in order to collect taxonomic data about typical planning tasks in various enterprises, environmental and internal factors, and personality characteristics. In other words, the aim of these interviews was to identify the different situations in which managerial planning takes place. The scenario method was used to conduct a cognitive analysis of managerial planning because of the difficulties involved in direct observations of real-life situations. The scenario presented a hypothetical firm, its position in the market, and a set of predictions regarding two external factors: an increase in competition for two of the three types of products and a small general increase in market demand. The description of the firm and its external environment was quite general but realistic, so that managers could create associations with their own work environment and therefore rely on their own experience and competencies. For example, the scenario did not specify the products manufactured by the hypothetical firm, the production methods and information systems used, or the tactics followed by the firm. The knowledge elicited from the three case studies and the structured interviews was used to construct realistic problem scenarios. Scenario sessions with 21 practicing managers were organized in the following way. After a brief explanation about the scope of the experiment, managers were asked to read a two-page description of the scenario. Managers were then asked to suggest what actions they would take had they owned the hypothetical firm and to specify their sequence of performance. Additional information about the scenario (e.g., financial data, analysis of profits and losses, production and operating costs) was provided in tables in case managers wished to use it. Managers were asked to think aloud and were allowed to take notes and make as many calculations as needed. The tape recordings of the scenario sessions were transcribed, and the verbal protocols were analyzed using a 10-category codification scheme (see Table 4). The coded protocols indicated the succession of decision-making stages and the semantic content of each stage: for example, the sort of information acquired, the specific goals to be attained, or the specific tactic chosen. Details of the verbal protocol analysis can be found in Marmaras et al. (1992). A brief presentation of the main conclusions drawn from the cognitive analysis follows, with particular emphasis on the managers' cognitive strategies and heuristics used in managerial planning.
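As an illustration of how such coded protocols can be mined, the frequencies of transitions between coding categories (e.g., how often an information-acquisition stage C1 is immediately followed by a tactic statement C2) can be tallied to surface recurrent decision-making sequences. The sketch below is illustrative only: the category codes follow the Table 4 scheme, but the sample protocols are invented.

```python
from collections import Counter

def transition_counts(protocols):
    """Count how often each coded stage is immediately followed by
    another across a set of coded verbal protocols."""
    counts = Counter()
    for protocol in protocols:
        for a, b in zip(protocol, protocol[1:]):
            counts[(a, b)] += 1
    return counts

# Invented example protocols, coded with the C1..C10 scheme of Table 4
protocols = [
    ["C1", "C2"],                  # acquire information, then state a tactic
    ["C1", "C7", "C3", "C2"],
    ["C1", "C3", "C2"],
    ["C1", "C3", "C1", "C2"],
]

counts = transition_counts(protocols)
print(counts[("C1", "C2")])   # direct information-to-tactic transitions -> 2
print(counts[("C3", "C2")])   # tactic statements justified by references -> 2
```

Ranking the resulting pairs by count gives a compact picture of the dominant stage sequences reported in the study.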
- Limited generation of alternative tactics: Most managers did not generate at the outset an extensive set of alternative tactics from which to select the most appropriate for the situation. Instead, as
TABLE 4 Ten-Category Scheme for Interpreting the Verbal Protocols of Managers
Code  Description
C1    Information acquisition from the data sheets of the scenario or by the experimenter
C2    Statement about the tactic followed by the manager (e.g., "I will reduce the production of product C and I will focus on product A . . .")
C3    Reference to threats or [the descriptions of C3 through C10 are cut off in this copy]
soon as they acquired some information and formed a representation of the situation, they defined a rather limited set of tactics that they would apply immediately (e.g., C1-C2, C1-C7-C3-C2, C1-C3-C2, C1-C3-C1-C2). This finding could be attributed to the fact that experienced managers possess an extensive repertoire of experiences accessed through recognition rather than conscious search. Optimization strategies are time consuming and difficult to sustain under the constraints of the work environment and the limited knowledge regarding the impact of decisions. In addition, the high risks associated with different decisions would probably push managers to adopt already tested tactics. Limited generation of alternative tactics, however, may lead to ineffective practices. For instance, a past solution may be inappropriate if the current situation has some subtle differences from others experienced in the past. In the case of managerial planning, new innovative solutions and radical departures may be of great importance.
- Acting/thinking cycles: After deciding upon a first set of actions, managers would wait for immediate feedback before proceeding to corrective actions or applying other tactics. This behavior compensates to some extent for the potential negative consequences of the previous strategy. That is, early decisions may be based on past experiences, but their suitability can be evaluated later on when feedback becomes available; thus, managers may undertake other corrective actions. However, these cycles of acting/thinking behavior may lead to delays in acting, increased costs, or money loss.
- Limited use of information and analysis: Most managers avoided speculating on the available predictive quantitative account data and did not make any projections in their evaluation of selected tactics. Instead, they quickly came up with ideas about what to do and stated that they would base their evaluation and future actions on specific outcomes of their initial actions. Decisions were justified by making references to environmental factors (C2-C3, C3-C2), other superordinate goals (C2-C6, C6-C2, C8-C2, C2-C8), and feedback from previous actions. This behavior may be attributed to the constraints of the environment, the uncertainty in making predictions, and the interleaving goals that may render a quantitative evaluation of tactics rather difficult or even impossible. The possible negative consequence of this behavior is that data important for planning decisions may be neglected during the planning process.
- Lack of quantitative goals and evaluation: The results of the analysis suggested that, in general, managers do not set themselves quantitative goals that will influence the selection and evaluation of their actions. Instead, their planning decisions seem to be based mainly on assessments of external and internal environment factors and certain generic goals related to past experiences. This observation provides evidence supporting the criticism of several authors (e.g., Lindblom 1980) with respect to the relevance of goal-led models; it also suggests that cognitive aids should not be based on optimization models of managerial planning.
Drawing on the results of the cognitive analysis, an information technology system that supports managerial planning in small- to medium-sized enterprises was specified and designed. The system took the form of an active framework that supports cognitive processes in the assessment of environmental factors and the generation, selection, and evaluation of strategies and tactics. At the same time, the support system would limit some negative consequences of the managers' cognitive strategies and heuristics. With the use of appropriate screens and functions, the proposed cognitive aid should provide support in the following ways:
- Assisting managers in considering additional data in the planning process and identifying similarities to and differences from other experiences in the past. This could be done by presenting menus of external and internal factors in the planning environment and inviting managers to determine future states or evolutions of system states.
- Enriching the repertoire of planning decisions by presenting menus of candidate options and tactics relevant to the current situation.
- Supporting the acti
the choice and sequence of these steps. Although this procedure imposes a certain formalism on managerial planning (this is inevitable when using an information technology system), sufficient compatibility was achieved with the cognitive processes entailed in managerial planning. A prototype of the proposed cognitive aid was evaluated by a number of managers of small- to medium-sized enterprises, and its functionality and usability were perceived to be of high quality.
6.
CONCLUDING REMARKS
Advances in information technology and automation have changed the nature of many jobs by placing particular emphasis on cognitive tasks, on the one hand, and by increasing their demands on the other, such as coping with more complex systems, handling a higher volume of work, and operating close to safety limits. As a result, successful ergonomic interventions should consider the interactions in the triad of users, tools or artifacts, and work environment. These interactions are affected by the users' strategies, the constraints of the work environment, and the affordances and limitations of the work tools. In this respect, techniques of cognitive task analysis are valuable because they explore how users' cognitive strategies are shaped by their experience and interaction with the work environment and the tools. This explains why a large part of this chapter has been devoted to illustrating how cognitive task analysis can be used in the context of applied work situations. Cognitive analysis provides a framework for considering many types of ergonomic interventions in modern jobs, including the design of man-machine interfaces, cognitive tools, and training programs. However, cognitive analysis requires a good grasp of models of human cognition in order to understand the mechanisms of cognition and develop techniques for eliciting them in work scenarios or real-life situations. The cognitive probes and the critical decision method, in particular, demonstrate how models of human cognition can provide direct input into cognitive task analysis. Further advances in cognitive psychology, cognitive science, organizational psychology, and ethnography should provide more insights into how experienced personnel adapt their strategies when work demands change and how they detect and correct errors before critical consequences ensue. The chapter has also emphasized the application of cognitive analysis in the design of information technology cognitive aids for complex tasks. The last two case studies have shown that cognitive aids should not be seen as independent entities capable of their own reasoning but as agents interacting with human users and work environments. Because unforeseen situations beyond the capabilities of cognitive aids are bound to arise, human operators should be able to take over. This requires that humans be kept in the loop and develop an overall mental picture of the situation before taking over. In other words, user-aid interactions should be at the heart of system design. Recent advances in technology may increase the conceptual power of computers so that both users and cognitive aids can learn from each other's strategies.
REFERENCES
Allport, D. A. (1980), "Attention," in New Directions in Cognitive Psychology, G. L. Claxton, Ed., Routledge & Kegan Paul, London.
Card, S. K., Moran, T. P., and Newell, A. (1986), "The Model Human Processor," in Handbook of Perception and Human Performance, Vol. 2, K. R. Boff, L. Kaufman, and J. P. Thomas, Eds., John Wiley & Sons, New York.
Clark, H. H., and Chase, W. G. (1972), "On the Process of Comparing Sentences against Pictures," Cognitive Psychology, Vol. 3, pp. 472-517.
Davis, L., and Wacker, G. (1987), "Job Design," in Handbook of Human Factors, G. Salvendy, Ed., John Wiley & Sons, New York, pp. 431-452.
DeKeyser, V., and Woods, D. D. (1990), "Fixation Errors: Failures to Revise Situation Assessment in Dynamic and Risky Systems," in Systems
Hayes-Roth, B. (1979), "A Cognitive Model of Action Planning," Cognitive Science, Vol. 3, pp. 275-310.
Helander, M. (1987), "Design of Visual Displays," in Handbook of Human Factors, G. Salvendy, Ed., John Wiley & Sons, New York, pp. 507-548.
Hutchins, E. (1990), "The Technology of Team Navigation," in Intellectual Teamwork, J. Galegher, R. E. Kraut, and C. Egido, Eds., Erlbaum, Hillsdale, NJ.
Hutchins, E., and Klausen, T. (1992), "Distributed Cognition in an Airline Cockpit," in Communication and Cognition at Work, D. Middleton and Y. Engestrom, Eds., Cambridge University Press, Cambridge.
Hutchins, E., Hollan, J., and Norman, D. (1986), "Direct Manipulation Interfaces," in User-Centered System Design, D. Norman and S. Draper, Eds., Erlbaum, Hillsdale, NJ, pp. 87-124.
Johnson, E. M., Cavanagh, R. C., Spooner, R. L., and Samet, M. G. (1973), "Utilization of Reliability Measurements in Bayesian Inference: Models and Human Performance," IEEE Transactions on Reliability, Vol. 22, pp. 176-183.
INPO (1982), "Analysis of Steam Generator Tube Rupture Events at Oconee and Ginna," Report 83-030, Institute of Nuclear Power Operations, Atlanta.
Kirwan, B., and Ainsworth, L. (1992), A Guide to Task Analysis, Taylor & Francis, London.
Klein, G. (1997), "An Overview of Naturalistic Decision Making Applications," in Naturalistic Decision Making, C. Zsambok and G. Klein, Eds., Erlbaum, Mahwah, NJ, pp. 49-60.
Klein, G. A., Calderwood, R., and Clinton-Cirocco, A. (1986), "Rapid Decision Making on the Fire Ground," in Proceedings of the 30th Annual Meeting of the Human Factors Society, Vol. 1 (Santa Monica, CA), pp. 576-580.
Klein, G. A., Calderwood, R., and MacGregor, D. (1989), "Critical Decision Method for Eliciting Knowledge," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, pp. 462-472.
Kontogiannis, T. (1996), "Stress and Operator Decision Making in Coping with Emergencies," International Journal of Human-Computer Studies, Vol. 45, pp. 75-104.
Kontogiannis, T., and Embrey, D. (1997), "A User-Centred Design Approach for Introducing Computer-Based Process Information Systems," Applied Ergonomics, Vol. 28, No. 2, pp. 109-119.
Laios, L., Marmaras, N., and Giannakourou, M. (1992), "Developing Intelligent Decision Support Systems Through User-Centred Design," in Methods and Tools in User-Centred Design for Information Technology, M. Galer, S. Harker, and J. Ziegler, Eds., Elsevier Science, Amsterdam, pp. 373-412.
Leplat, J. (1990), "Relations between Task and Activity: Elements for Elaborating a Framework for Error Analysis," Ergonomics, Vol. 33, pp. 1389-1402.
Lindblom, C. E. (1980), "The Science of Muddling Through," in Readings in Managerial Psychology, H. Leavitt, L. Pondy, and D. Boje, Eds., University of Chicago Press, Chicago, pp. 144-160.
Lindsay, R. W., and Staffon, J. D. (1988), "A Model Based Display System for the Experimental Breeder Reactor-II," paper presented at the Joint Meeting of the American Nuclear Society and the European Nuclear Society (Washington, DC).
Lusted, L. B. (1976), "Clinical Decision Making," in Decision Making and Medical Care, F. T. De Dombal and J. Gremy, Eds., North-Holland, Amsterdam.
Marmaras, N., and Pavard, B. (1999), "Problem-Driven Approach for the Design of Information Technology Systems Supporting Complex Cognitive Tasks," Cognition, Technology and Work, Vol. 1, No. 4, pp. 222-236.
Marmaras, N., Lioukas, S., and Laios, L. (1992), "Identifying Competences for the Design of Systems Supporting Decision Making Tasks: A Managerial Planning Application," Ergonomics, Vol. 35, pp. 1221-1241.
Marmaras, N., Vassilakopoulou, P., and Salvendy, G. (1997), "Developing a Cognitive Aid for CNC-Lathe Programming through a Problem-Driven Approach," International Journal of Cognitive Ergonomics, Vol. 1, pp. 267-289.
McCormick, E. J., and Sanders, M. S. (1987), Human Factors in Engineering and Design, McGraw-Hill, New York.
Mehle, T. (1982), "Hypothesis Generation in an Automobile Malfunction Inference Task," Acta Psychologica, Vol. 52, pp. 87-116.
Miller, G. (1956), "The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information," Psychological Review, Vol. 63, pp. 81-97.
Norman, D. (1988), The Design of Everyday Things, Doubleday, New York.
Newell, A., and Simon, H. (1972), Human Problem Solving, Prentice Hall, Englewood Cliffs, NJ.
O'Hare, D., Wiggins, M., Williams, A., and Wong, W. (1998), "Cognitive Task Analysis for Decision Centred Design and Training," Ergonomics, Vol. 41, pp. 1698-1718.
Payne, J. W. (1980), "Information Processing Theory: Some Concepts and Methods Applied to Decision Research," in Cognitive Processes in Choice and Decision Behavior, T. S. Wallsten, Ed., Erlbaum, Hillsdale, NJ.
Rasmussen, J. (1981), "Models of Mental Strategies in Process Control," in Human Detection and Diagnosis of System Failures, J. Rasmussen and W. Rouse, Eds., Plenum Press, New York.
Rasmussen, J. (1983), "Skills, Rules and Knowledge: Signals, Signs and Symbols, and Other Distinctions in Human Performance Models," IEEE Transactions on Systems, Man and Cybernetics, SMC-13, pp. 257-267.
Rasmussen, J. (1985), "The Role of Hierarchical Knowledge Representation in Decision Making and System Management," IEEE Transactions on Systems, Man and Cybernetics, SMC-15, pp. 234-243.
Rasmussen, J. (1986), Information Processing and Human-Machine Interaction: An Approach to Cognitive Engineering, North-Holland, Amsterdam.
Rasmussen, J., Pejtersen, M., and Goodstein, L. (1994), Cognitive Systems Engineering, John Wiley & Sons, New York.
Reason, J. (1990), Human Error, Cambridge University Press, Cambridge.
Redding, R. E., and Seamster, T. L. (1994), "Cognitive Task Analysis in Air Traffic Control and Aviation Crew Training," in Aviation Psychology in Practice, N. Johnston, N. McDonald, and R. Fuller, Eds., Avebury Press, Aldershot, pp. 190-222.
Roth, E. M., Bennett, K. B., and Woods, D. D. (1987), "Human Interaction with an Intelligent Machine," International Journal of Man-Machine Studies, Vol. 27, pp. 479-525.
Roth, E. M., Woods, D. D., and Pople, H. E., Jr. (1992), "Cognitive Simulation as a Tool for Cognitive Task Analysis," Ergonomics, Vol. 35, pp. 1163-1198.
Roth, E. M., Malin, J. T., and Schreckenghost, D. L. (1997), "Paradigms for Intelligent Interface Design," in Handbook of Human-Computer Interaction, M. Helander, T. K. Landauer, and P. Prabhu, Eds., Elsevier Science, Amsterdam, pp. 1177-1201.
Sanderson, P. M., James, J. M., and Seidler, K. S. (1989), "SHAPA: An Interactive Software Environment for Protocol Analysis," Ergonomics, Vol. 32, pp. 463-470.
Schon, D. (1983), The Reflective Practitioner, Basic Books, New York.
Schustack, M. W., and Sternberg, R. J. (1981), "Evaluation of Evidence in Causal Inference," Journal of Experimental Psychology: General, Vol. 110, pp. 101-120.
Seamster, T. L., Redding, R. E., Cannon, J. R., Ryder, J. M., and Purcell, J. A. (1993), "Cognitive Task Analysis of Expertise in Air Traffic Control," International Journal of Aviation Psychology, Vol. 3, pp. 257-283.
Serfaty, D., Entin, E., and Johnston, J. H. (1998), "Team Coordination Training," in Making Decisions under Stress: Implications for Individual and Team Training, J. A. Cannon-Bowers and E. Salas, Eds., American Psychological Association, Washington, DC, pp. 221-245.
Shepherd, A. (1989), "Analysis and Training of Information Technology Tasks," in Task Analysis for Human-Computer Interaction, D. Diaper, Ed., Ellis Horwood, Chichester.
Sheridan, T. (1981), "Understanding Human Error and Aiding Human Diagnosis Behavior in Nuclear Power Plants," in Human Detection and Diagnosis of System Failures, J. Rasmussen and W. Rouse, Eds., Plenum Press, New York.
Shneiderman, B. (1982), "The Future of Interactive Systems and the Emergence of Direct Manipulation," Behavior and Information Technology, Vol. 1, pp. 237-256.
Shneiderman, B. (1983), "Direct Manipulation: A Step Beyond Programming Languages," IEEE Computer, Vol. 16, pp. 57-69.
Simon, H. (1978), "Rationality as Process and Product of Thought," American Economic Review, Vol. 68, pp. 1-16.
Tversky, A., and Kahneman, D. (1974), "Judgement under Uncertainty: Heuristics and Biases," Science, Vol. 185, pp. 1124-1131.
Vicente, K., and Rasmussen, J. (1992), "Ecological Interface Design: Theoretical Foundations," IEEE Transactions on Systems, Man and Cybernetics, SMC-22, pp. 589-606.
Weick, K. E. (1983), "Managerial Thought in the Context of Action," in The Executive Mind, S. Srivastva, Ed., Jossey-Bass, San Francisco, pp. 221-242.
Wickens, C. (1992), Engineering Psychology and Human Performance, HarperCollins, New York.
Woods, D. D. (1982), "Operator Decision Behavior During the Steam Generator Tube Rupture at the Ginna Nuclear Power Station," Research Report 82-1057-CONRM-R2, Westinghouse Research and Development Center, Pittsburgh.
Woods, D., and Hollnagel, E. (1987), "Mapping Cognitive Demands in Complex and Dynamic Worlds," International Journal of Man-Machine Studies, Vol. 26, pp. 257-275.
Woods, D. D., Roth, E. M., and Bennett, K. B. (1990), "Explorations in Joint Human-Machine Cognitive Systems," in Cognition, Computing, and Cooperation, S. P. Robertson, W. Zachary, and J. B. Black, Eds., Ablex, Norwood, NJ.
1.
INTRODUCTION
According to the Board of Certification in Professional Ergonomics (BCPE 2000), ergonomics is a body of knowledge about human abilities, human limitations, and other human characteristics that are relevant to design. Ergonomic design is the application of this body of knowledge to the design of tools, machines, systems, tasks, jobs, and environments for safe, comfortable, and effective human use. The underlying philosophy of ergonomics is to design work systems where job demands are within the capacities of the workforce. Ergonomic job design focuses on fitting the job to the capabilities of workers by, for example, eliminating the occurrence of nonnatural postures at work, reducing excessive strength requirements, improving work layout, designing hand tools, or optimizing work/rest requirements (Karwowski 1992; Karwowski and Salvendy 1998; Karwowski and Marras 1999). Ergonomics is seen today as a vital component of the value-adding activities of the company, with well-documented cost-benefit aspects of ergonomics management programs (GAO 1997). A company must be prepared to accept a participative culture and utilize participative techniques in the implementation of work design principles. The job design-related problems and consequent interven-
er to or farther from the top of the body. The anterior (ventral) and posterior (dorsal) positions refer to in front of or behind another structure. The superficial and deep positions refer to nearer to and farther from the body surface, respectively. Positions nearer to or farther from the trunk are called proximal and distal. Terms of body movements are defined in Table 4.
2.2.
The Statistical Description of Anthropometric Data
The concept of the normal distribution can be used to describe random errors in the measurement of physical phenomena (Pheasant 1989). If the variable is normally distributed, the population may be completely described in terms of its mean (x̄) and its standard deviation (s), and specific percentile values (Xp) can be calculated as Xp = x̄ + s·z, where z (the standard normal deviate) is a factor for the percentile concerned. Values of z for some commonly used percentiles (Xp) are given in Table 5. Figure 2 depicts data from Humanscale calculated for different percentiles of U.S. females. A word of caution: anthropometric data are not necessarily normally distributed in any given population (Kroemer 1989).
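The percentile formula Xp = x̄ + s·z can be checked numerically. The sketch below uses Python's standard library to compute z for a given percentile and the corresponding Xp; the mean and standard deviation are invented illustrative values, not figures from the tables in this chapter:

```python
from statistics import NormalDist

def percentile_value(mean, sd, p):
    """Xp = mean + s*z, where z is the standard normal deviate
    for percentile p (0 < p < 100)."""
    z = NormalDist().inv_cdf(p / 100.0)
    return mean + sd * z

# z for the 95th percentile of the standard normal distribution
print(round(NormalDist().inv_cdf(0.95), 3))        # 1.645

# Illustrative (invented) values: mean stature 162.9 cm, SD 6.4 cm
print(round(percentile_value(162.9, 6.4, 95), 1))  # 173.4
print(round(percentile_value(162.9, 6.4, 5), 1))   # 152.4
```

Note that the 5th and 95th percentiles are symmetric about the mean, as expected for a normal distribution; per the caution above, this symmetry should not be assumed for real anthropometric samples without checking.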
2.3.
The Method of Design Limits
Recommendations for workplace design with respect to anthropometric criteria can be established by the principle of design for the extreme, also known as the method of limits (Pheasant 1989). The basic idea behind this concept is to establish specific boundary conditions (percentile value of the
TABLE 1 U.S. Civilian Body Dimensions (cm) for Ages 20-60 Years
[The table gives, for men and women, the 5th, 50th, and 95th percentile values and standard deviations for each dimension; the numeric columns are scrambled in this copy and are omitted.]
Dimensions: stature (height)f; eye heightf; shoulder (acromion) heightf; elbow heightf; knuckle height; height, sittings; eye height, sittings; shoulder height, sittings; elbow rest height, sittings; knee height, sittingf; popliteal height, sittingf; thigh clearance heights; chest depth; elbow-fingertip distance; buttock-knee distance, sitting; buttock-popliteal distance, sitting; forward reach, functional. Breadths: elbow-to-elbow breadth; hip breadth, sitting. Head dimensions: head circumference; interpupillary distance. Hand dimensions: hand length; breadth, metacarpal; circumference, metacarpal; thickness, meta III. Weight (kg).
f = above floor; s = above seat surface.
Adapted from Kroemer 1989, with permission from Taylor & Francis Ltd., London, http://www.tandf.co.uk.
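The method of limits described in Section 2.3 can be sketched in code: clearance dimensions (e.g., doorway or overhead height) are designed to accommodate a high percentile of the larger users, while reach dimensions are designed for a low percentile of the smaller users. The population statistics below are invented for illustration and are not taken from Table 1:

```python
from statistics import NormalDist

def design_limit(mean, sd, p):
    """Boundary value at percentile p for a normally distributed dimension."""
    return mean + sd * NormalDist().inv_cdf(p / 100.0)

# Invented illustrative statistics, in cm: (mean, SD)
male_stature = (175.5, 6.9)
female_forward_reach = (71.0, 3.5)

# Clearance: design for the 95th percentile of the taller group,
# so that nearly all users fit under it
clearance = design_limit(*male_stature, 95)

# Reach: design for the 5th percentile of the shorter-reach group,
# so that nearly all users can reach it
reach = design_limit(*female_forward_reach, 5)

print(round(clearance, 1), round(reach, 1))
```

Choosing which tail of which subpopulation to design for is the substance of the method: the boundary condition always protects the users whom a single "average" value would exclude.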
TABLE 2 Anthropometric Estimates for Elderly People (cm)
[The table gives, for men and women, the 5th, 50th, and 95th percentile values and standard deviations for each dimension; the numeric columns are scrambled in this copy and are omitted.]
Dimensions: stature; eye height; shoulder height; elbow height; hip height; knuckle height; fingertip height; sitting height; sitting eye height; sitting shoulder height; sitting elbow height; thigh thickness; buttock-knee length; buttock-popliteal length; knee height; popliteal height; shoulder breadth (bideltoid); shoulder breadth (biacromial); hip breadth; chest (bust) depth; abdominal depth; shoulder-elbow length; elbow-fingertip length; upper limb length; shoulder-grip length; head length; head breadth; hand length; hand breadth; foot length; foot breadth; span; elbow span; vertical grip reach (standing); vertical grip reach (sitting); forward grip reach.
Adapted from Pheasant 1986.
PERFORMANCE IMPROVEMENT MANAGEMENT

TABLE 3 Range of Joint Mobility Values Corresponding to Postures in Figure 1a

Movement | Mean | S.D. | 5th Percentile | 95th Percentile
Shoulder flexion | 188 | 12 | 168 | 208
Shoulder extension | 61 | 14 | 38 | 84
Shoulder abduction | 134 | 17 | 106 | 162
Shoulder adduction | 48 | 9 | 33 | 63
Shoulder medial rotation | 97 | 22 | 61 | 133
Shoulder lateral rotation | 34 | 13 | 13 | 55
Elbow flexion | 142 | 10 | 126 | 159
Forearm supination | 113 | 22 | 77 | 149
Forearm pronation | 77 | 24 | 37 | 117
Wrist flexion | 90 | 12 | 70 | 110
Wrist extension | 113 | 13 | 92 | 134
Hip abduction | 53 | 12 | 33 | 73
Hip adduction | 31 | 12 | 11 | 51
Hip medial rotation (prone) | 39 | 10 | 23 | 56
Hip lateral rotation (prone) | 34 | 10 | 18 | 51
Hip medial rotation (sitting) | 31 | 9 | 16 | 46
Hip lateral rotation (sitting) | 30 | 9 | 15 | 45
Knee flexion, voluntary (prone) | 125 | 10 | 109 | 142
Knee flexion, forced (prone) | 144 | 9 | 129 | 159
Knee flexion, voluntary (standing) | 113 | 13 | 92 | 134
Knee flexion, forced (kneeling) | 159 | 9 | 144 | 174
Knee medial rotation (sitting) | 35 | 12 | 15 | 55
Knee lateral rotation (sitting) | 43 | 12 | 23 | 63
Ankle flexion | 35 | 7 | 23 | 47
Ankle extension | 38 | 12 | 18 | 58
Foot inversion | 24 | 9 | 9 | 39
Foot eversion | 23 | 7 | 11 | 35
Adapted from Chaffin and Andersson, Occupational Biomechanics, 3rd Ed. Copyright 1999. Reprinted by permission of John Wiley & Sons, Inc., New York. a Measurement technique was photography. Subjects were college-age males. Data are in angular degrees.
Figure 1 Illustration of Joint Mobility. (Adapted from Chaffin et al., Occupational Biomechanics, 3rd Ed. Copyright 1999. Reprinted by permission of John Wiley & Sons, Inc., New York.)
Figure 4 Illustration of the CREW CHIEF Design Capabilities: Left, removing a recessed bolt from a jet engine; right, modification of ratchet wrench interaction with handles on a box. (Reproduced with permission from McDaniel, copyright by Taylor & Francis, 1990.)
should make the analysis and application of biomechanical principles at work less complicated and more useful. For a review of the state of the art in ergonomic models of anthropometry, human biomechanics, and operator-equipment interfaces, see Kroemer et al. (1988). Other developments in computer-aided ergonomics, specifically computer models of man and computer-assisted workplace design, are discussed by Karwowski et al. (1990). According to Schaub and Rohmert (1990), man-model development originated with SAMMIE (System for Aiding Man-Machine Interaction Evaluation) in England (see Figure 3) (Porter et al. 1995). Examples of computer models developed in the United States include BOEMAN (Ryan 1971) for aircraft design, COMBIMAN and CREW CHIEF (McDaniel 1990) (see Figure 4), Deneb/ERGO (Nayar 1995), and JACK (Badler et al. 1995). Other computer-aided man models developed in Europe include ERGOMAN (France), OSCAR (Hungary), ADAPTS (Netherlands), APOLINEX (Poland), and WERNER, FRANKY, and ANYBODY (Germany). A comprehensive 3D man model for workplace design, HEINER, was developed by Schaub and Rohmert (1990). Advances in applied artificial intelligence made it possible to develop knowledge-based expert systems for ergonomic design and analysis (Karwowski et al. 1987; Jung and Freivalds 1990). Examples of such models include SAFEWORK (Fortin et al. 1990), ERGONEXPERT (Rombach and Laurig 1990), and ERGOSPEC (Brennan et al. 1990). Other models, such as the CAD-video somatograph (Bullinger and Lorenz 1990) and AutoCAD-based anthropometric design systems (Grobelny 1990), as well as ergonomic databases (Landau et al. 1990), were also developed. The computer-aided systems discussed above serve the purpose of biomechanical analysis in workplace design. For example, COMBIMAN, developed in the Human Engineering Division of Armstrong Laboratory since 1975, is both illustrative and analytic software. It allows the analysis of physical accessibility (reach and fit capabilities), strength for operating controls, and visibility. CREW CHIEF, a derivative of COMBIMAN, provides the user with similar analyses. Another important development is Deneb's ERGO, a system capable of rapid prototyping of human motion and of analyzing human joint range of motion, reach, and visual accessibility. In a recent study by Schaub et al. (1997), the authors revised the models and methods of ERGOMAN and reported added capabilities to predict maximum forces/moments for relevant postures, evaluate stress on human body joints, and carry out a general risk assessment. Probably the most advanced and comprehensive computer-aided digital human model and design/evaluation system today is JACK, from Transform Technologies (2000).
3. DESIGN FOR HUMAN STRENGTH
Knowledge of human strength capabilities and limitations can be used for the ergonomic design of jobs, workplaces, equipment, tools, and controls. Strength measurements can also be used in worker preemployment screening procedures (Chaffin et al. 1978; Ayoub 1983). Human strengths can be assessed under static (isometric) or dynamic conditions (Kroemer 1970; Chaffin et al. 1977). Dynamic strengths can be measured isotonically, isokinetically, and isoinertially. Isometric muscle strength is the capacity of muscles to produce force or moment by a single maximal voluntary exertion; the body segment involved remains stationary and the length of the muscle does not change. In
TABLE 6 Static Strengths Demonstrated by Workers when Lifting, Pushing, and Pulling with Both Hands on a Handle Placed at Different Locations Relative to the Midpoint between the Ankles on Floor

Test Description | Handle Location (cm)a: Vertical / Horizontal | Male Strengths (N): Sample Size, Mean, SD | Female Strengths (N): Sample Size, Mean, SD
Lift, leg partial squat | 38 / 0 | 673, 903, 325 | 165, 427, 187
Lift, torso stooped over | 38 / 38 | 1141, 480, 205 | 246, 271, 125
Lift, arms flexed | 114 / 38 | 1276, 383, 125 | 234, 214, 93
Lift, shoulder high and arms out | 152 / 51 | 309, 227, 71 | 35, 129, 36
Lift, shoulder high and arms flexed | 152 / 38 | 119, 529, 222 | 20, 240, 84
Lift, shoulder high and arms close | 152 / 25 | 309, 538, 156 | 35, 285, 102
Lift, floor level, close (squat) | 15 / 25 | 309, 890, 245 | 35, 547, 182
Lift, floor level, out (stoop) | 15 / 38 | 170, 320, 125 | 20, 200, 71
Push down, waist level | 118 / 38 | 309, 432, 93 | 35, 325, 71
Pull down, above shoulders | 178 / 33 | 309, 605, 102 | 35, 449, 107
Pull in, shoulder level, arms out | 157 / 33 | 309, 311, 80 | 35, 244, 53
Pull in, shoulder level, arms in | 140 / 0 | 205, 253, 62 | 52, 209, 62
Push out, waist level, stand erect | 101 / 35 | 54, 311, 195 | 27, 226, 76
Push out, chest level, stand erect | 124 / 25 | 309, 303, 76 | 35, 214, 49
Push out, shoulder level, lean forward | 140 / 64 | 205, 418, 178 | 52, 276, 120

Adapted from Chaffin et al., Occupational Biomechanics, 3rd Ed. Copyright 1999. Reprinted by permission of John Wiley & Sons, Inc., New York. a The location of the handle is measured in the midsagittal plane, vertical from the floor and horizontal from the midpoint between the ankles.
dynamic muscular exertions, body segments move and the muscle length changes (Ay
oub and Mital 1989). The static strengths demonstrated by industrial workers on
selected manual handling tasks are shown in Table 6. Maximum voluntary joint str
engths are depicted in Table 7.
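Assuming static strengths are approximately normally distributed, the means and standard deviations of Table 6 can be converted into design percentiles. The sketch below uses the male "lift, leg partial squat" entry (mean 903 N, SD 325 N); the normal model itself is a modeling assumption, not a claim from the cited sources.

```python
# Estimate a design percentile from a Table 6 mean and standard deviation,
# assuming strengths are approximately normally distributed.

def strength_at_z(mean_n: float, sd_n: float, z: float) -> float:
    """Static strength (N) at the percentile corresponding to z-score z."""
    return mean_n + z * sd_n

# Male "lift, leg partial squat" entry of Table 6: mean 903 N, SD 325 N.
# The 5th-percentile (z = -1.645) worker can exert roughly:
print(round(strength_at_z(903.0, 325.0, -1.645)))  # 368
```

A design aimed at this 5th-percentile value accommodates roughly 95% of the male population for that exertion, which is the usual direction of a strength-based design decision.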
3.1. Occupational Strength Testing
The main goal of worker selection is to screen potential employees on the basis of their physical capabilities and to match these with job demands. In order to evaluate an employee's capability, the following criteria should be applied when selecting between alternative screening methods (NIOSH 1981):
1. Safety in administering
2. Capability of giving reliable, quantitative values
3. Relation to specific job requirements
4. Practicality
5. Ability to predict the risk of future injury or illness
Isometric strength testing has been advocated as a means of predicting the risk of future injuries resulting from jobs that require high force exertions. Chaffin et al. (1977) reported that both frequency and severity rates of musculoskeletal problems were about three times greater for workers placed in jobs requiring physical exertion above that demonstrated by them in isometric strength tests, when compared with workers placed in jobs having exertion requirements well below their demonstrated capabilities. The literature on worker selection has been reviewed by NIOSH (1981), Ayoub (1983), Chaffin et al. (1999), and Ayoub and Mital (1989). Typical values for the static strengths are shown in Figure 5.
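The screening logic behind these findings can be sketched as a simple comparison of a job's static exertion requirement with the strength a candidate demonstrates on a matching isometric test. The 400 N demand, the candidate values, and the 0.9 margin below are illustrative assumptions, not values from the cited studies.

```python
# Illustrative sketch of isometric preemployment screening: a worker passes
# when demonstrated strength (with a safety margin) covers the job demand.
# All numeric values here are hypothetical.

def screen_worker(demonstrated_n: float, job_demand_n: float,
                  margin: float = 0.9) -> bool:
    """True if the worker's demonstrated strength safely covers the demand."""
    return demonstrated_n * margin >= job_demand_n

# A job demanding a 400 N lift vs. two candidates' test results
print(screen_worker(demonstrated_n=620.0, job_demand_n=400.0))  # True
print(screen_worker(demonstrated_n=380.0, job_demand_n=400.0))  # False
```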
3.2. Static vs. Dynamic Strengths
The application of static strength exertion data has limited value in assessing workers' capability to perform dynamic tasks that require the application of force through a range of motion (Ayoub and Mital 1989). Mital et al. (1986) found that the correlation coefficients between simulated job dynamic strengths and the maximum acceptable weight of lift in the horizontal and vertical planes were substantially higher than those between isometric strengths and weights lifted. Two newer studies offer design data based on dynamic strengths (Mital and Genaidy 1989; Mital and Faard 1990).
TABLE 7 Maximum Voluntary Joint Strengths (Nm): Range of Moments (Nm) of Subjects from Several Studies

Joint Strength | Joint Angle (degrees) | Men | Women | Variation with Joint Angle
Elbow flexor | 90 | 50-120 | 15-85 | Peak at about 90
Elbow extensor | 90 | 25-100 | 15-60 | Peak between 50 and 100
Shoulder flexor | 90 | 60-100 | 25-65 | Weaker at flexed angles
Shoulder extensor | 90 | 40-150 | 10-60 | Decreases rapidly at angles less than 30
Shoulder adductor | 60 | 104 | 47 | As angle decreases, strength increases, then levels off at -30 to 30
Trunk flexor | 0 | 145-515 | 85-320 | Pattern differs among authors
Trunk extensor | 0 | 143 | 78 | Increases with trunk flexion
Trunk lateral flexor | 0 | 150-290 | 80-170 | Decreases with joint flexion
Hip extensor | 0 | 110-505 | 60-130 | Increases with joint flexion
Hip abductor | 0 | 65-230 | 40-170 | Increases as angle decreases
Knee flexor | 90 | 50-130 | 35-115 | In general decreases; some disagreement with this, depending on hip angle
Knee extensor | 90 | 100-260 | 70-150 | Minima at full flexion and extension
Ankle plantar flexor | 0 | 75-230 | 35-130 | Increases with dorsiflexion
Ankle dorsiflexor | 0 | - | 25-45 | Decreases from maximum plantar flexion to maximum dorsiflexion

Adapted from Tracy 1990.
3.3. Computer Simulation of Human Strength Capability
The worker strength exertion capability in heavy manual tasks can be simulated with the help of a microcomputer. Perhaps the best-known microcomputer system developed for work design and analysis concerning human strength is the Three-Dimensional Static Strength Program (3D SSPP), developed by the Center for Ergonomics at the University of Michigan and distributed through the Intellectual Properties Office (University of Michigan 1989). The program can aid in the evaluation of the physical demands of a prescribed job and is useful as a job design/redesign and evaluation tool. Due to its static nature, the 3D SSPP model assumes negligible effects of acceleration and momentum and is applicable only to the slow movements used in manual handling tasks. It is claimed that the 3D SSPP results correlate with average population static strengths at r = 0.8, and that the program should not be used as the sole determinant of worker strength performance (University of Michigan 1989). In their last validation study, Chaffin and Erig (1991) reported that if considerable care is taken to ensure exactness between simulated and actual postures, the prediction error standard deviation would be less than 6% of the mean predicted value. However, 3D SSPP does not allow simulation of dynamic exertions. The body posture, in 3D SSPP, is defined through five different angles about the joints describing body link locations. The input parameters, in addition to posture data, include the percentile of body height and weight for both male and female populations, the definition of force parameters (magnitude and direction of the load handled in the sagittal plane), and the number of hands used. The output from the model provides the estimated percentage of the population capable of exerting the required muscle forces at the elbow, shoulder, lumbosacral (L5/S1), hip, knee, and ankle joints, as well as the calculated back compression force on L5/S1 in relation to the NIOSH action limit and maximum permissible limit. Body balance and foot/hip potential are also considered. An illustration of the model output is given in Figure 6.
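The "percentage of the population capable" output can be illustrated with a simple normal model of joint strength: the fraction of the population able to produce a required moment is the upper tail of the strength distribution. The 120 Nm mean and 30 Nm SD below are illustrative assumptions; the program's internal strength models are more detailed.

```python
# Sketch of a "percent capable" computation under a normal model of
# population joint strength (illustrative parameters, not from 3D SSPP).

from statistics import NormalDist

def percent_capable(required_moment: float, mean: float, sd: float) -> float:
    """Percentage of the population whose strength exceeds the required moment."""
    return 100.0 * (1.0 - NormalDist(mu=mean, sigma=sd).cdf(required_moment))

# A demand equal to the population mean is, by symmetry, met by half of it:
print(round(percent_capable(required_moment=120.0, mean=120.0, sd=30.0)))  # 50
```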
3.4. Push-Pull Force Limits
Safe push-pull force exertion limits may be interpreted as the maximum force magnitudes that people can exert without injuries (for static exertions) or cumulative trauma disorders (for repeated exertions) of the upper extremities under a given set of conditions.
Figure 6 Illustration of Results from the 3D Static Strength Prediction Program
of the University of Michigan.
TABLE 9 Maximum Acceptable Forces of Pull for Females (kg)

[Initial and sustained pull forces for 2.1-m and 45.7-m pulls, at frequencies ranging from one pull every 6 s to one every 8 h, for 90, 75, 50, 25, and 10 percent of the female industrial population, at handle heights of 135 and 57 cm. The individual table values could not be recovered from the source layout.]

From Snook and Ciriello (1991) with permission from Taylor & Francis Ltd., London, http://www.tandf.co.uk. Height = vertical distance from floor to hand-object (handle) contact. Percent = percentage of industrial workers capable of exerting the stated forces in work situations.
3.4.3. One-Handed Force Magnitudes
One-handed forces vary considerably among studies with similar variables and wit
hin individual studies depending on test conditions or variables. Generalization
s about recommended forces, therefore, are not easy to make. Average static stan
ding-pull forces have ranged from 70 to 134 N and sitting forces from 350 to 540
N. Dynamic pull forces, in almost all studies, have ranged from 170 to 380 N in
females and from 335 to 673 N in males when sitting. Average pull forces in mal
es while lying down prone have ranged from 270 to 383 N and push forces from 285
to 330 N (Hunsicker and Greey 1957).
3.4.4. Pinch-Pull Force Magnitudes
Pinching and pulling with one hand while stabilizing the object with the other h
and has been observed in male adults to yield forces of 100, 68, and 50 N when u
sing the lateral, chuck, and pulp pinches, respectively (Imrhan and Sundararajan
1992; Imrhan and Alhaery 1994).
4. STATIC EFFORTS AND FATIGUE

4.1.
PHYSICAL TASKS: ANALYSIS, DESIGN, AND OPERATION

TABLE 10 Maximum Acceptable Forces of Pull for Males (kg)

[Initial and sustained pull forces for 2.1-m and 45.7-m pulls, at frequencies ranging from one pull every 6 s to one every 8 h, for 90, 75, 50, 25, and 10 percent of the male industrial population, at handle heights of 144 and 64 cm. The individual table values could not be recovered from the source layout.]

From Snook and Ciriello (1991). Used with permission by Taylor & Francis Ltd., London, http://www.tandf.co.uk. Height = vertical distance from floor to hand-object (handle) contact. Percent = percentage of industrial workers capable of exerting the stated forces in work situations.
three studies indicate that the limit time approaches infinity at a force of 8-10% maximum voluntary contraction and converges to zero at 100% of the maximum strength. As discussed by Kahn and Monod (1989), the maximum duration of static effort, or the maximum maintenance time (limit time), varies inversely with the applied force and may be sustained for a long time if the force does not exceed 15-20% of the maximum voluntary contraction (MVC) of the muscle considered. The relation between force and limit time has been defined by Monod (1965) as:

t = k (F - Fc)^(-n)

where t is the limit time (min), F the relative force used (%), Fc the force (%) for which t tends to infinity (called the critical force), and k and n are constants. Rohmert (1960) subsequently proposed a more elaborate equation:

t = -1.5 + 2.1/F - 0.6/F^2 + 0.1/F^3

with F here expressed as a fraction of the maximum voluntary contraction. In both cases the maximum maintenance time is linked to the force developed by a hyperbolic relation, which applies to all muscles.
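Rohmert's equation can be evaluated numerically. The sketch below takes F as a fraction of MVC and uses the coefficients -1.5, 2.1, -0.6, and 0.1 (minutes); the validity bounds enforced by the function are an assumption based on the text's observation that efforts below roughly 15% MVC can be sustained indefinitely.

```python
# Sketch of Rohmert's (1960) static-endurance curve.
# f_mvc is the relative force as a fraction of maximum voluntary contraction;
# the result is the limit (maintenance) time in minutes.

def rohmert_limit_time(f_mvc: float) -> float:
    """Maximum static holding time (min) at a given fraction of MVC."""
    if not 0.15 < f_mvc <= 1.0:
        raise ValueError("valid roughly for 15-100% MVC; below ~15% MVC "
                         "the effort can be sustained for a very long time")
    return -1.5 + 2.1 / f_mvc - 0.6 / f_mvc**2 + 0.1 / f_mvc**3

# Endurance shrinks steeply as the relative force rises:
for f in (0.2, 0.5, 1.0):
    print(f"{f:.0%} MVC -> {rohmert_limit_time(f):.1f} min")
```

At 20% MVC the curve already gives only a few minutes of holding time, and at 100% MVC it is essentially zero, matching the convergence described above.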
4.2. Intermittent Static Work
In the case of intermittent static efforts during which the contraction phases a
re separated by rest periods of variable absolute and relative duration, the max
imum work time, given the relative force used and the relative duration of contr
action, can be predicted as follows (Rohmert 1973):
TABLE 11 Maximum Acceptable Forces of Pull for Males (kg)

[Initial and sustained pull forces for 2.1-m and 45.7-m pulls, at frequencies ranging from one pull every 6 s to one every 8 h, for 90, 75, 50, 25, and 10 percent of the male industrial population, at handle heights of 144 and 64 cm. The individual table values could not be recovered from the source layout.]

From Snook and Ciriello (1991). Used with permission by Taylor & Francis Ltd., London, http://www.tandf.co.uk. Height = vertical distance from floor to hand-object (handle) contact. Percent = percentage of industrial workers capable of exerting the stated forces in work situations.
t = k (F - Fc)^(-np)

where p is the static contraction time expressed as a fraction of the total time (so that continuous static work, p = 1, reduces to the previous equation). Rohmert (1962) had devised another method for estimating the minimum duration of rest periods required to avoid fatigue during intermittent static work:

tr = 18 (t / tmax)^1.4 (F / Fmax - 0.15)^0.5 x 100%
where tr is the rest time as a percentage of t, which is the duration of contraction (min). Kahn and Monod (1989) concluded that the main causal factor in the onset of fatigue due to static efforts (isometrically contracting muscles) is local muscle ischemia. Furthermore, the onset of local muscle fatigue can be delayed if changes in recovery time are sufficient to allow restoration of normal blood flow through the muscle.
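Rohmert's rest-allowance relation, tr = 18 (t/tmax)^1.4 (F/Fmax - 0.15)^0.5 x 100%, can be sketched as a small function; the example inputs below are illustrative assumptions, not values from the cited studies.

```python
# Sketch of Rohmert's (1962) rest-allowance formula.
# t is the holding time (min), t_max the maximum holding time at this force
# level (min), and f_rel = F/Fmax the relative force (fraction of MVC).
# The result is the required rest time expressed as a percentage of t.

def rohmert_rest_allowance(t: float, t_max: float, f_rel: float) -> float:
    """Minimum rest time, as a percentage of the contraction time t."""
    if f_rel <= 0.15:
        return 0.0  # below ~15% MVC no fatigue-driven rest is required
    return 18.0 * (t / t_max) ** 1.4 * (f_rel - 0.15) ** 0.5 * 100.0

# Example: holding 50% MVC for half of the maximum holding time
print(round(rohmert_rest_allowance(t=0.55, t_max=1.1, f_rel=0.5), 1))
```

Note how quickly the required rest grows once the holding time approaches the maximum holding time for the exerted force, which is consistent with the ischemia mechanism described above.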
4.3. Static Efforts of the Arm
As of this writing, only limited guidelines regarding the placement of objects that must be manipulated by the arm have been proposed (Chaffin et al. 1999). Figure 8 depicts the effect of horizontal reach on shoulder muscle fatigue endurance times. This figure illustrates that the workplace must be designed to allow the upper arm to be held close to the torso. In general, any load-holding tasks should be minimized by the use of fixtures and tool supports. Strasser et al. (1989) showed experimentally that the local muscular strain of the hand-arm-shoulder system is dependent upon the direction of horizontal arm movements. Such strain, dependent on the direction of repetitive manual movements, is of great importance for workplace layout. The authors based their study on the premise
Figure 9 Illustration of Changes in Muscular Effort with the Direction of Contraction in the Horizontal Plane. (Reproduced with permission from Strasser et al., copyright 1989 by Taylor & Francis, http://www.tandf.co.uk.)
) is provided. This categorization results in the specification of four action categories. The observed posture combinations are classified according to the OWAS method into ordinal-scale action categories. The four action categories described here are based on experts' estimates of the health hazards that each work posture or posture combination in the OWAS method poses to the musculoskeletal system:
1. Work postures are considered to have no particular harmful effect on the musculoskeletal system. No actions are needed to change work postures.
2. Work postures have some harmful effect on the musculoskeletal system. The stress is light; no immediate action is necessary, but changes should be considered in future planning.
3. Work postures have a distinctly harmful effect on the musculoskeletal system. The working methods involved should be changed as soon as possible.
4. Work postures have an extremely harmful effect on the musculoskeletal system. Immediate solutions should be found to reduce these postures.
OWASCA, a computer-aided visualizing and training software for work posture analysis, was developed using OWAS. OWASCA is intended as OWAS training software (Vayrynen et al. 1990).
The system is also suitable for visualizing the work postures and for basic analysis of the postures and their loads. The posture is presented with parametric vectors using 2D graphics, OWAS codes, and text. The posture of the back, arms, and legs, the posture combination, the force or effort used, additional postures, and action categories can be studied interactively step by step. The required OWAS skills can be tested by OWASCA: the program shows a random work posture, and the user is asked to identify it. OWASCA describes the errors and gives the numbers of test postures and correct answers (Mattila et al. 1993).
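An OWAS-style classifier is, at its core, a lookup from an observed posture combination to one of the four action categories. The mapping fragment below is a hypothetical illustration only; a real implementation would use the published OWAS tables for every (back, arms, legs, load) combination.

```python
# Illustrative sketch of mapping OWAS posture codes to action categories.
# The dictionary entries are hypothetical examples, not the published tables.

ACTION_CATEGORY = {
    # (back, arms, legs, load) -> action category 1..4
    (1, 1, 1, 1): 1,  # upright trunk, both arms down, standing: no action
    (2, 1, 3, 1): 2,  # bent trunk, one-leg stand: consider changes later
    (2, 1, 4, 2): 3,  # bent trunk, knees bent, medium load: change soon
    (4, 3, 4, 3): 4,  # bent and twisted trunk, arms up, heavy load: immediate
}

def classify(back: int, arms: int, legs: int, load: int) -> int:
    """Return the action category for one observed posture combination."""
    return ACTION_CATEGORY[(back, arms, legs, load)]

print(classify(4, 3, 4, 3))  # 4: the worst postures demand immediate action
```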
5.3.2. Standard Posture Model

A standard system for analyzing and classifying the postures of the trunk, neck, shoulders, and lower extremities during work was developed at the University of Michigan (Keyserling 1990) (see Table 12), in which neutral postures and the corresponding deviations were also defined (Table 12).
TABLE 12 Classification of Working Postures That Deviate from Neutral

Trunk (neutral posture: vertical with no axial twisting): Extended (bent backward more than 20°); Mildly flexed (bent forward between 20° and 40°); Severely flexed (bent forward more than 45°); Bent sideways or twisted more than 20° from the neutral position.

Neck: Extended (bent backward more than 20°); Mildly flexed (bent forward between 20° and 45°); Severely flexed (bent forward more than 90°); Bent sideways or twisted more than 20°.

Shoulder: Neutral (included angle less than 45°); Mild elevation (included angle between 45° and 90°); Severe elevation (included angle more than 90°).

Lower extremities (standard postures): Walk (locomotion while maintaining an upright posture); Stand (free standing with no support other than the feet); Lean (weight partially supported by an external object such as a wall or railing); Squat (knees bent with the included angle between thigh and calf 90-180°); Deep squat (included angle between thigh and calf less than 90°); Kneel (one or both knees contact the floor and support part of the body weight); Sit (body weight primarily supported by the ischial tuberosities).

Adapted from Keyserling 1990.
analysis involves three steps. First, a continuous video recording of the job is obtained. Second, a sequential description of the major tasks required to perform the job is prepared in a laboratory, with the job being broken into fundamental work elements and their times measured. The third and final step involves collection of the postural data using the common time scale developed from the fundamental work elements. Postural changes are keyed into the system through preassigned keys corresponding to specific postures. The value of each posture and the time of postural change for a given joint are recorded and stored in a computer. Based on the above data, the system generates a posture profile for each joint, consisting of the total time spent in each standard posture during the work cycle, the range of times spent in each standard posture, the frequency of posture use, and so on. The system can also provide a graph showing postural changes over time for any of the body joints (segments) of interest.
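The posture-profile computation described above can be sketched as a small aggregation over keyed postural events: given (time, posture) changes for one joint, derive the total time, episode count, and time range spent in each standard posture over the work cycle. The posture names and times below are illustrative.

```python
# Sketch of a posture-profile computation from keyed postural changes.

from collections import defaultdict

def posture_profile(events, cycle_end):
    """events: list of (time, posture) sorted by time; cycle_end: cycle length (s)."""
    durations = defaultdict(list)
    next_times = [t for t, _ in events[1:]] + [cycle_end]
    for (t0, posture), t1 in zip(events, next_times):
        durations[posture].append(t1 - t0)
    return {p: {"total": sum(d), "count": len(d), "min": min(d), "max": max(d)}
            for p, d in durations.items()}

events = [(0, "stand"), (12, "squat"), (20, "stand"), (35, "lean")]
profile = posture_profile(events, cycle_end=60)
print(profile["stand"])  # two standing episodes: 12 s and 15 s
```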
5.4. Acceptability of Working Postures
Analysis of posture must take into consideration not only the spatial elements of the posture, that is, how much the person is flexed, laterally bent, or rotated (twisted), but also how long these postures are maintained. Milner et al. (1986) pointed out that where an individual is working to the limits of endurance capacity, it has been found that full recovery is not possible within a rest period 12 times the maximum holding time. Full recovery is possible as long as the holding time is a small percentage of the maximum holding time. Bonney et al. (1990) studied the tolerability of certain postures and showed that complex postures requiring movement in more than one direction are more uncomfortable than simple postures. Lateral bending produced more discomfort than either flexed or rotated postures and appears to be the least well-tolerated posture. Rotation by itself does not cause significant discomfort. This finding is consistent with the epidemiological results of Kelsey and Golden (1988), who hypothesized that twisting alone may not produce enough stress to bring about a detectable increase in risk. Corlett and Manenica (1980) derived estimates of maximum holding times for various postures when performing a no-load task. These recommendations are as follows:
1. Slightly bent forward postures (approx. 15-20°): 8 min
2. Moderately bent forward postures (approx. 20-60°): 3-4 min
3. Severely bent forward postures (greater than about 60°): approx. 2 min
Colombini et al. (1985) presented criteria on which posture assessments should be based. Postures considered tolerable include (1) those that do not involve feelings of short-term discomfort and (2) those that do not cause long-term morpho-functional complaints. Short-term discomfort is basically the presence of a feeling of fatigue and/or pain affecting any section of the osteo-arthro-muscular and ligamentous apparatus, appearing in periods lasting minutes, hours, or days. Miedema et al. (1997) derived the maximum holding times (MHT)
of 19 standing postures in terms of percent of shoulder height and percent of arm reach. They also classified such working postures into three categories, depending on the mean value of the MHT: (1) comfortable, (2) moderate, and (3) uncomfortable postures (see Table 13). Recently, Kee and Karwowski (2001) presented data for the joint angles of isocomfort (JAI) for the whole body in sitting and standing postures, based on perceived joint comfort measures. The JAI value was defined as a boundary indicating joint deviation from neutral (0°), within which the perceived comfort for different body joints is expected to be the same. The JAI values were derived for nine verbal categories of joint comfort using the regression equations representing the relationships between different levels of joint deviation and the corresponding comfort scores for each joint motion. The joint angles with marginal comfort levels for most motions around the wrist, elbow, neck, and ankle were similar to the maximum range-of-motion (ROM) values for these joints. However, the isocomfort joint angles with the marginal comfort category for the back and hip motions were much smaller than the maximum ROM values for these joints. There were no significant differences in percentage of JAI in terms of the corresponding maximum ROM values between standing and sitting postures. The relative marginal comfort index, defined as the ratio between joint angles for marginal comfort and the corresponding maximum ROM values, was smallest for the hip among all joints. It was followed, in increasing order of the index, by the lower back and the shoulder, while the index values for the elbow were the largest. This means that hip motions are less comfortable than any other joint motion, while elbow motions are the most comfortable. The relative good comfort index exhibited much smaller values of joint deviation, with most
TABLE 13 MHT and Postural Load Index for the 19 Posturesa

Posture (SH / AR) | MHT (min) | Postural Load Index
Comfortable postures (MHT > 10 min):
75 / 50 | 37.0 | 3
75 / 25 | 18.0 | 3
100 / 50 | 17.0 | 3
50 / 25 | 14.0 | 0
125 / 50 | 12.0 | 8
50 / 50 | 12.0 | 0
Moderate postures (5 min < MHT <= 10 min):
100 / 25 | 10.0 | 7
100 / 100 | 9.0 | 5
75 / 100 | 9.0 | 4
125 / 100 | 8.0 | 8
75 / 75 | 6.0 | 12
50 / 100 | 6.0 | 10
100 / 75 | 5.5 | 8
50 / 75 | 5.3 | 5
Uncomfortable postures (MHT <= 5 min):
25 / 25 | 5.0 | 10
25 / 50 | 4.0 | 10
150 / 50 | 3.5 | 13
25 / 75 | 3.3 | 10
25 / 100 | 3.0 | 13

a Posture was defined after Miedema et al. 1997 in terms of % of shoulder height (SH) / % of arm reach (AR).
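The Miedema et al. (1997) categorization can be sketched as a simple threshold classifier on the mean maximum holding time, with the comfortable/moderate/uncomfortable boundaries at 10 min and 5 min as described above.

```python
# Sketch of the MHT-based posture categorization (thresholds per the text:
# comfortable above 10 min, moderate 5-10 min, uncomfortable at or below 5 min).

def mht_category(mht_min: float) -> str:
    """Classify a standing posture from its mean maximum holding time (min)."""
    if mht_min > 10.0:
        return "comfortable"
    if mht_min > 5.0:
        return "moderate"
    return "uncomfortable"

# e.g. a 37.0-min posture vs. a 3.5-min posture
print(mht_category(37.0), mht_category(3.5))
```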
TABLE 14 Summary of ISO / CEN Draft Standards and Work Items
Ergonomic guiding principles:
ISO 6385: 1981-06-00; ENV 26385: 1990-06-00. Ergonomic principles of the design of work systems.
EN 614-1: 1995-02-00. Safety of machinery - Ergonomic design principles - Part 1: Terminology and general principles.
prEN 614-2: 1997-12-00. Safety of machinery - Ergonomic design principles - Part 2: Interactions between the design of machinery and work tasks.

Anthropometry:
EN 547-1: 1996-12-00; ISO / DIS 15534-1: 1998-04-00. Safety of machinery - Human body measurements - Part 1: Principles for determining the dimensions required for openings for whole-body access into machinery for mobile machinery.
EN 547-2: 1996-12-00; ISO / DIS 15534-2: 1998-04-00. Safety of machinery - Human body measurements - Part 2: Principles for determining the dimensions required for access openings.
EN 547-3: 1996-12-00; ISO / DIS 15534-3: 1998-04-00. Safety of machinery - Human body measurements - Part 3: Anthropometric data.
ISO 7250: 1996-07-00; EN ISO 7250: 1997-07-00. Basic human body measurements for technological design (ISO 7250:1996).
ISO / DIS 14738: 1997-12-00; prEN ISO 14738: 1997-12-00. Safety of machinery - Anthropometric requirements for the design of workstations at machinery.
Under preparation: Ergonomics - Computer manikins, body templates.
Under preparation: Selection of persons for testing of anthropometric aspects of industrial products and designs.
Under preparation: Safeguarding crushing points by means of a limitation of the active forces.
Under preparation: Ergonomics - Reach envelopes.
Under preparation: Anthropometric database. Document scope: the European Standard establishes an anthropometric database for all age groups to be used as the basis for the design of work equipment, workplaces, and workstations at machinery.
Under preparation: Notation system of anthropometric measurements used in the European Standards EN 547 Part 1 to Part 3.

Biomechanics:
prEN 1005-1: 1998-12-00. Safety of machinery - Human physical performance - Part 1: Terms and definitions.
prEN 1005-2: 1998-12-00. Safety of machinery - Human physical performance - Part 2: Manual handling of machinery and component parts of machinery.
prEN 1005-3: 1998-12-00. Safety of machinery - Human physical performance - Part 3: Recommended force limits for machinery operation.
prEN 1005-4: 1998-11-00. Safety of machinery - Human physical performance - Part 4: Evaluation of working postures in relation to machinery.
Under preparation: Safety of machinery - Human physical performance - Part 5: Risk assessment for repetitive handling at high frequency.
ISO / DIS 11226: 1999-02-00. Ergonomics - Evaluation of working postures.
ISO / DIS 11228-1: 1998-08-00. Ergonomics - Manual handling - Part 1: Lifting and carrying.
conditions with respect to exposure of the upper extremities. The document includes two important international standards: Evaluation of Working Postures (ISO / DIS 11226 1998), presented in Figure 11, and Evaluation of Working Postures in Relation to Machinery (CEN prEN 1005-4, 1997): upper arm elevation, shown in Figure 12.
5.5.2.
European Standards for Working Postures During Machinery Operation
The draft proposal of the European (CEN / TC122, WG4 / SG5) and the international (ISO TC159 / SC3 / WG2) standardization document (1993), Safety of machinery - Human physical performance, Part 4: Working postures during machinery operation, specifies criteria for the acceptability of working postures vs. exposure times. These criteria
are discussed below.

5.5.2.1. General Design Principles
Work tasks and operations should be designed with sufficient physical and mental variation so that the physical load is distributed over various postures and patterns of movement. Designs should accommodate the full range of possible users. To evaluate whether working postures during machinery operation are acceptable, designers should perform a risk assessment, that is, an evaluation of the actual low-varying (static) postures of body segments. The lowest maximum acceptable holding time for the various body-segment postures should be determined.

5.5.2.2. Assessment of Trunk Posture
An asymmetric trunk posture should be avoided (no trunk axial rotation or trunk lateral flexion). Absence of normal lumbar spine lordosis should be avoided. If the trunk is inclined backward, full support of the lower and upper back should be provided. The forward trunk inclination should be less than 60°, on the condition that the holding time is less than the maximum acceptable holding time for the actual forward trunk inclination and that adequate rest is provided after action (muscle fitness should not fall below 80%).

5.5.2.3. Assessment of Head Posture
An asymmetric head posture should be avoided (no axial rotation or lateral flexion of the head with respect to the trunk). The head inclination should not be less than the trunk inclination (no neck extension). The head inclination should not exceed the trunk inclination by more than 25° (no extreme neck flexion). If the head is inclined backward, full head support should be provided. The forward head inclination should be less than 25° if full trunk support is provided; otherwise, the forward inclination should be less than 85°, on the condition that the holding time is less than the maximum acceptable holding time for the actual forward head inclination and that adequate rest is provided.

5.5.2.4. Assessment of Upper Extremity Posture
Shoulder and upper arm posture. Upper-arm retroflexion and upper-arm adduction should be avoided. Raising the shoulder should be avoided. The upper-arm elevation should be less than 60°, on the condition that the holding time is less than the maximum acceptable holding time for the actual upper-arm elevation and that adequate rest is provided after action (muscle fitness should not fall below 80%).
Forearm and hand posture. Extreme elbow flexion or extension, extreme forearm pronation or supination, and extreme wrist flexion or extension should be avoided. The hand should be in line with the forearm (no ulnar / radial deviation of the wrist).
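The posture-acceptance rules above lend themselves to a simple screening check. The following is an illustrative sketch only: the function name, argument names, and the pass/fail logic are assumptions for illustration, whereas the draft standard itself ties acceptability to holding-time curves for each inclination.

```python
# Illustrative sketch only: encoding the trunk-posture screening rules above as
# a boolean check. The 60-degree forward-inclination limit, the asymmetry and
# backward-support rules, and the holding-time condition follow the text.

def trunk_posture_acceptable(inclination_deg, holding_time_s, max_holding_time_s,
                             asymmetric=False, backward_with_support=False):
    """Screen a static trunk posture against the draft-standard rules."""
    if asymmetric:                    # axial rotation or lateral flexion: avoid
        return False
    if backward_with_support:         # backward lean acceptable with full support
        return True
    if inclination_deg >= 60:         # forward inclination must stay below 60 deg
        return False
    # Holding time must stay below the maximum acceptable holding time
    # for this inclination (with adequate rest after action).
    return holding_time_s < max_holding_time_s

print(trunk_posture_acceptable(30, 20, 60))                    # True
print(trunk_posture_acceptable(70, 5, 60))                     # False
print(trunk_posture_acceptable(30, 20, 60, asymmetric=True))   # False
```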
6.
6.1.
OCCUPATIONAL BIOMECHANICS
Definitions
As reviewed by Karwowski (1992), occupational biomechanics is a study of the physical interaction of workers with their tools, machines, and materials so as to enhance the worker's performance while minimizing the risk of future musculoskeletal disorders (Chaffin et al. 1999). There are six methodological areas, or contributing disciplines, important to the development of current knowledge in biomechanics:
1. Kinesiology, or the study of human movement, which includes kinematics and kinetics
2. Biomechanical modeling methods, which refer to the forces acting on the human body while a worker is performing well-defined and rather common manual tasks
3. Mechanical work-capacity evaluation methods in relation to the physical capacity of the worker and job demands
4. Bioinstrumentation methods (performance data acquisition and analysis)
5. Classification and time-prediction methods that allow for detailed time analysis of human work activities and implementation of biomechanics principles to fit the workplace to the worker
and the resistance arm are defined as the distance from the fulcrum to the effort force or resistance force, respectively. The mechanical advantage (MA) of a lever system is defined as the ratio between the force arm distance (d) and the resistance arm distance (dr), where MA = d / dr. In the first class of lever systems, the fulcrum is located between the effort and the resistance. Examples include the triceps muscle action on the ulna when the arm is abducted and held over the head, and the splenius muscles, acting to extend the head across the atlanto-occipital joints (Williams and Lissner 1977). The second class of lever systems is one where the resistance is located between the fulcrum and the effort, providing a mechanical advantage greater than one. An example of such a lever system is the distribution of forces in the lower leg when raising one's heel off the ground (see Figure 13). In the third class of lever systems, the effort is located between the fulcrum and the resistance, and consequently the mechanical advantage is always less than one; that is, to balance the resistance, the magnitude of the effort must be greater than the magnitude of the resistance. Many bone-lever systems in the human body, for example the system involved in forearm flexion as illustrated in Figure 13(c), are third-class systems.
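The lever relations above can be worked through numerically. In this sketch the arm lengths are hypothetical round numbers, not anatomical measurements:

```python
# A worked version of the lever relations above: MA = d / dr, with d the force
# (effort) arm and dr the resistance arm. Example arm lengths are hypothetical.

def mechanical_advantage(effort_arm, resistance_arm):
    """Mechanical advantage of a lever; arms in any common length unit."""
    return effort_arm / resistance_arm

# Second-class lever (resistance between fulcrum and effort), e.g. raising
# the heel off the ground: MA > 1.
print(mechanical_advantage(12.0, 6.0))    # 2.0

# Third-class lever (effort between fulcrum and resistance), e.g. forearm
# flexion: MA < 1, so the effort must exceed the resistance to balance it.
print(mechanical_advantage(5.0, 35.0))    # about 0.143
```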
7.
7.1.
DESIGN OF MANUAL MATERIALS HANDLING TASKS
Epidemiology of Low-Back Disorders
As reviewed by Ayoub et al. (1997), manual materials-handling (MMH) tasks, which include unaided lifting, lowering, carrying, pushing, pulling, and holding activities, are the principal source of compensable work injuries, affecting primarily the low back in the United States (NIOSH 1981; National Academy of Sciences 1985; Federal Register 1986; Bigos et al. 1986; Battie et al. 1990). These include a large number of low-back disorders (LBDs) that are due either to cumulative exposure to manual handling of loads over a long period of time or to isolated incidents of overexertion when handling heavy objects (BNA 1988; National Safety Council 1989; Videman et al. 1990). Overexertion injuries in 1985 in the United States accounted for 32.7% of all accidents: lifting objects (15.1%); carrying, holding, etc. (7.8%); and pulling or pushing objects (3.9%). For the period 1985-1987, back injuries accounted for 22% of all cases and 32% of the compensation costs. In the United States, about 28.2% of all work injuries involving disability are caused by overexertion, lifting, throwing, folding, carrying, pushing, or pulling loads that weigh less than 50 lb (National Safety Council 1989). The analysis by industry division showed that the highest percentage of such injuries occurred in service industries (31.9%), followed by manufacturing (29.4%), transportation and public utilities (28.8%), and trade (28.4%). The total time lost due to disabling work injuries was 75 million workdays, while 35 million days were lost due to other accidents. The total work-accident cost was $47.1 billion; the average cost per disabling injury was $16,800. Spengler et al. (1986) reported that while low-back injuries comprised only 19% of all injuries incurred by the workers in one of the largest U.S. aircraft companies, they were responsible for 41% of the total injury costs. It is estimated that the economic impact of back injuries in the United States may be as high as $20 billion annually, with compensation costs exceeding $6 billion per year (BNA 1988).
Major components of the MMH system and related risk factors for low-back pain (LBP) and LBDs include worker characteristics, material / container characteristics, task / workplace characteristics, and work practice characteristics (Karwowski et al. 1997). A wide spectrum of work- and individual-related risk factors has been associated with LBP and LBDs (Riihimäki 1991). However, precise knowledge about the extent to which these factors are etiologic and the extent to which they are symptom precipitating or symptom aggravating is still limited. Kelsey and Golden (1988) reported that the risk of lumbar disk prolapse for workers who lift more than 11.3 kg (25 lb) more than 25 times a day is over three times greater than for workers who lift lower weights. The OSHA (1982) study also revealed very important information regarding workers' perception of the weights lifted at the time of injury. Among the items perceived by the workers as factors contributing to their injuries were lifting too-heavy objects (reported by 36% of the workers) and underestimation of the weight of objects before lifting (reported by 14% of the workers). An important review of epidemiological studies on risk factors of LBP, using five comprehensive publications on LBP, was made by Hildebrant (1987). A total of 24 work-related factors were found that were regarded by at least one of the reviewed sources as risk indicators of LBP. These risk factors include the following categories:
1. General: heavy physical work, work postures in general
2. Static workload: static work postures in general, prolonged sitting, standing or stooping, reaching, no variation in work posture
3. Dynamic workload: heavy manual handling, lifting (heavy or frequent, unexpected heavy, infrequent, torque), carrying, forward flexion of the trunk, rotation of the trunk, pushing / pulling
4. Work environment: vibration, jolt, slipping / falling
5. Work content: monotony, repetitive work, work dissatisfaction

Psychophysical limits usually refer to weights or forces (MAW / F), although maximum acceptable frequencies have also been established (e.g., Nicholson and Legg 1986; Snook and Ciriello 1991). Despite the relative simplicity of the psychophysical method for determining acceptable limits for manual lifting, which makes this approach quite popular (Karwowski et al. 1999), caution should be exercised with respect to interpretation and usability of the currently available design limits and databases.
7.4.
MMH Design Databases
One can use already-available databases such as the one reported by Snook and Ciriello (1991). Another database was reported by Mital (1992) for symmetrical and asymmetrical lifting, and other databases include work by Ayoub et al. (1978) and Mital (1984). Using such available data replaces conducting a study for every work task and group of workers. Tables provided by the various investigators can be used to estimate the MAW / F for a range of job conditions and work populations. The databases, provided in tabular format, often make allowances for certain task, workplace, and / or worker characteristics. The use of a database begins with the determination of the various characteristics by which the database is stratified.
7.5.
Psychophysical Models
Another method to estimate MAW / F is regression models based on psychophysical data. Most of these models predict the MAWL. The design data presented here are based upon the database of Snook and Ciriello (1991). Table 15 provides Snook and Ciriello's (1991) two-handed lifting data for males and females, as modified by Mital et al. (1993). Those values that were modified have been identified. The data in these tables were modified so that a job severity index value of 1.5 is not exceeded, which corresponds to 27.24 kg. Likewise, a spinal compression value that, on average, provides a margin of safety for the back of 30% was used for the biomechanical criterion, yielding a maximum load of 27.24 kg for males and 20 kg for females. Finally, the physiological criterion of energy expenditure was used. The limits selected were 4 kcal / min for males and 3 kcal / min for females for an 8-hr working day (Mital et al. 1993). The design data for maximal acceptable weights for two-handed pushing / pulling tasks, maximal acceptable weights for carrying tasks, and maximal acceptable holding times can be found in Snook and Ciriello (1991) and Mital et al. (1993). The maximal acceptable weights for manual handling in unusual postures are presented by Smith et al. (1992).
7.6.
The Physiological Approach
The physiological approach is concerned with the physiological response of the body to MMH tasks. During the performance of work, physiological changes take place within the body. Changes in work methods, performance level, or certain environmental factors are usually reflected in the stress levels of the worker and may be evaluated by physiological methods. The basis of the physiological approach to risk assessment is the comparison of the physiological responses of the body to the stress of performing a task with levels of permissible physiological limits. Many physiological studies of MMH have tended to concentrate on whole-body indicators of fatigue, such as heart rate, energy expenditure, blood lactate, or oxygen consumption, as a result of the workload. Mital et al. (1993) arrived at the same conclusion as Petrofsky and Lind, that is, that the physiological criteria for lifting activities should be approximately 4 kcal / min for males and 3 kcal / min for females. The energy cost of manual handling activities can be estimated based on the physiological response of the body to the load, that is, by modeling the physiological cost using work and worker characteristics. The estimates obtained from such models are then compared to the literature recommendations of permissible limits. Garg et al. (1978) report metabolic cost models. Although currently in need of an update, they still provide a more comprehensive and flexible set of physiological cost models as a function of the task variables. The basic form of the Garg et al. model is:

    E_job = [ sum(i = 1 to Np) (E_post-i × T_i) + sum(i = 1 to Nt) E_task-i ] / T

where:
    E_job = average energy expenditure rate of the job (kcal / min)
    E_post-i = metabolic energy expenditure rate due to maintenance of the ith posture (kcal / min)
    T_i = time duration of the ith posture (min)
    Np = total number of body postures employed in the job
    E_task-i = net metabolic energy expenditure of the ith task in steady state (kcal)
    Nt = total number of tasks in the given job
    T = time duration of the job (min)
Different models require different input data, but typically most of these model
s involve input information regarding task type, load weight / force, load size,
height, frequency, and worker characteristics, which include body weight and ge
nder.
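The general structure of such a model can be sketched as a short computation. This is a minimal illustration of a Garg et al. (1978)-style calculation following the equation above; all numbers in the example are hypothetical, not values from Garg et al.:

```python
# A minimal sketch of a Garg et al. (1978)-style job energy computation:
# posture-maintenance rates (kcal/min) weighted by their durations, plus net
# task energies (kcal), averaged over the job time. Example values hypothetical.

def job_energy_rate(postures, task_energies, job_minutes):
    """postures: (rate kcal/min, duration min) pairs; task_energies: net kcal."""
    posture_kcal = sum(rate * minutes for rate, minutes in postures)
    task_kcal = sum(task_energies)
    return (posture_kcal + task_kcal) / job_minutes

# 30 min standing at 1.5 kcal/min, 30 min stooping at 2.0 kcal/min, plus
# 40 lifts at a net 0.8 kcal each, over a 60-min job:
print(round(job_energy_rate([(1.5, 30), (2.0, 30)], [0.8] * 40, 60), 2))  # 2.28
```

The resulting rate would then be compared against the permissible limits discussed above (e.g., 4 kcal / min for males over an 8-hr day).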
7.7.
The Biomechanical Approach
The biomechanical approach focuses on the establishment of tissue tolerance limits of the body, especially the spine (e.g., compressive and shear force limits tolerated by the lumbar spine). The levels of stresses imposed on the body are compared to permissible levels of biomechanical stresses, measured by, for example, peak joint moments, peak compressive force on the lumbar spine, and peak shear forces on the lumbar spine. Other measures include mechanical energy, average and integrated moments or forces over the lifting, and MMH activity times (Andersson 1985; Gagnon and Smyth 1990; Kumar 1990). Methods used to estimate the permissible level of stress in biomechanics for MMH include strength testing, lumbar tissue failure, and the epidemiological relationship between biomechanical stress and injury. Tissue failure studies are based on cadaver tissue strength. Generally, the research has focused on the ultimate compressive strength of the lumbar spine. Studies and literature reviews by Brinckmann et al. (1989) and Jäger and Luttmann (1991) indicate that the ultimate compressive strength of
TABLE 15 Recommended Weight of Lift (kg) for Male (Female) Industrial Workers for Two-Handed Symmetrical Lifting for Eight Hours

Each cell lists male (female) values acceptable to 90, 75, 50, 25, and 10% of the industrial population, in that order; * marks values modified by Mital et al. (1993).

Floor to 80 cm height

Container size 75 cm:
  1 / 8 hr:   17 (12), 24 (14), 27* (17), 27* (20*), 27* (20*)
  1 / 30 min: 14 (9), 21 (11), 27* (13), 27* (15), 27* (17)
  1 / 5 min:  14 (8), 20 (10), 27 (12), 27* (14), 27* (16)
  1 / min:    11 (7), 16 (9), 22 (11), 27* (13), 27* (14)
  4 / min:    9 (7), 13 (9), 17 (10), 21 (12), 25 (14)
  8 / min:    7 (6), 10.5 (8), 14 (9), 17.5 (11), 20.5 (13)
  12 / min:   6 (5), 9 (7), 12 (8), 15 (9), 18 (11)
  16 / min:   4.5 (4), 7 (6), 9.5 (7), 12 (7), 14.5 (9)

Container size 49 cm:
  1 / 8 hr:   20 (13), 27* (16), 27* (19), 27* (20*), 27* (20*)
  1 / 30 min: 17 (9), 24 (12), 27* (14), 27* (17), 27* (19)
  1 / 5 min:  16 (8), 24 (10), 27* (13), 27* (15), 27* (17)
  1 / min:    13 (8), 19 (10), 26 (12), 27* (14), 27* (15)
  4 / min:    10 (8), 14 (9), 19 (11), 24 (13), 28 (15)
  8 / min:    7 (7), 10 (8), 15 (10), 18.5 (11), 22 (13)
  12 / min:   7 (6), 10 (7), 12.5 (9), 15 (10), 17.5 (11)
  16 / min:   6.5 (5), 9 (6), 10 (8), 12 (8), 15 (9)

Container size 34 cm [remaining frequency columns not recoverable from the source]:
  1 / 8 hr:   23 (15), 27* (19), 27* (20*), 27* (20*), 27* (20*)
  1 / 30 min: 19 (11), 27* (14), 27* (17), 27* (20*), 27* (20*)

[Second height range; the height-range heading was not recoverable from the source]

Container size 75 cm:
  1 / 8 hr:   16 (11), 22 (13), 27* (15), 27* (17.5), 27* (19)
  1 / 30 min: 15 (9.5), 20 (11), 25 (13), 27* (15), 27* (17)
  1 / 5 min:  13 (9), 18 (10.5), 23 (12), 27 (14), 27* (15)
  1 / min:    12 (8), 17 (9.5), 21 (11), 26 (12), 27* (14)
  4 / min:    11 (7), 15 (8), 19 (10), 23 (10.5), 27 (12)
  8 / min:    7 (5), 8 (6), 12 (8), 17 (10), 22 (11)
  12 / min:   6 (5), 8 (6), 11 (8), 13 (9), 18 (10)
  16 / min:   5 (4.5), 6 (5), 8 (7), 11 (8), 13 (8)

Container size 49 cm:
  1 / 8 hr:   16 (11), 22 (13), 27* (15), 27* (17.5), 27* (19)
  1 / 30 min: 15 (9.5), 20 (11), 25 (13), 27* (15), 27* (17)
  1 / 5 min:  13 (9), 18 (10.5), 23 (12), 27 (14), 27* (15)
  1 / min:    12 (8), 17 (9.5), 21 (11), 26 (12), 27* (14)
  4 / min:    11 (7), 15 (8), 19 (10), 23 (10.5), 27 (12)
  8 / min:    7 (6), 8 (6), 12 (8), 17 (10), 22 (11)
  12 / min:   6 (5), 8 (6), 11 (8), 13 (9), 18 (10)
  16 / min:   5 (4.5), 6 (5), 8 (7), 11 (8), 13 (8)

Container size 34 cm [16 / min column not recoverable from the source]:
  1 / 8 hr:   18 (12), 24 (15), 27* (17), 27* (19), 27* (20*)
  1 / 30 min: 17 (10.5), 22 (12), 27* (15), 27* (17), 27* (19)
  1 / 5 min:  15 (10), 20 (11), 25 (13), 27* (15), 27* (17)
  1 / min:    14 (9), 19 (10.5), 24 (12), 27* (14), 27* (16)
  4 / min:    12 (8), 16 (10), 20 (11), 24 (12), 27* (14)
  8 / min:    7 (6), 8 (7.5), 12 (10), 20 (11), 22 (13)
  12 / min:   6 (6), 8 (7.5), 11 (9), 16 (10), 18 (11)
cadaver lumbar segments varies from approximately 800 N to approximately 13,000 N. Jäger and Luttmann (1991) report a mean compression failure of 5,700 N for males, with a standard deviation of 2,600 N. For females, this failure limit was found to be 3,900 N, with a standard deviation of approximately 1,500 N. In addition, several factors influence the compressive strength of the spinal column, including age, gender, specimen cross-section, lumbar level, and structure of the disc or vertebral body. The ultimate compressive strength of various components of the spine can be estimated with the following regression model (Jäger and Luttmann 1991):

    Compressive strength (kN) = (7.65 + 1.18 G) - (0.502 + 0.382 G) A + (0.035 + 0.127 G) C - 0.167 L - 0.89 S

where:
    G = gender (0 for female; 1 for male)
    A = age in decades (e.g., 30 years = 3, 60 years = 6)
    L = lumbar level (0 for L5 / S1; incremental values for each lumbar disc or vertebra)
    C = cross-section (cm2)
    S = structure (0 for disc; 1 for vertebra)
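The regression can be evaluated directly. The sketch below follows the sign pattern as reconstructed above; treat its outputs as rough illustrative estimates, and note that the 18 cm2 cross-section in the example is an assumed value, not one from Jäger and Luttmann:

```python
# Sketch of the Jäger and Luttmann (1991) regression as reconstructed above.
# Outputs are illustrative estimates only; the example cross-section is assumed.

def compressive_strength_kn(gender, age_decades, cross_section_cm2,
                            lumbar_level, structure):
    """gender: 0 female / 1 male; structure: 0 disc / 1 vertebra."""
    g = gender
    return ((7.65 + 1.18 * g)
            - (0.502 + 0.382 * g) * age_decades
            + (0.035 + 0.127 * g) * cross_section_cm2
            - 0.167 * lumbar_level
            - 0.89 * structure)

# 30-year-old male, L5/S1 disc, assumed 18 cm2 cross-section:
print(round(compressive_strength_kn(1, 3, 18.0, 0, 0), 2))
# Strength declines with age: a 60-year-old gives a lower estimate.
print(round(compressive_strength_kn(1, 6, 18.0, 0, 0), 2))
```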
It should be noted that statically determined tolerances may overestimate compressive tolerances (Jäger and Luttmann 1992b). Modeling studies by Potvin et al. (1991) suggest that erector spinae oblique elements could contribute about 500 N of sagittal shear, leaving only 200 N of sagittal shear for the discs and facets to counter. According to Farfan (1983), the facet joints are capable of absorbing 3,100 N to 3,600 N, while the discs support less than 900 N. Due to the complexity of dynamic biomechanical models, assessment of the effects of lifting on the musculoskeletal system has most frequently been done with the aid of static models. Many lifting motions, however, are dynamic in nature and have substantial inertial components. McGill and Norman (1985) compared the low-back moments during lifting when determined dynamically and statically. They found that the dynamic model resulted in peak L4 / L5 moments 19% higher on average, with a maximum difference of 52%, than those determined from the static model. Given the complexity of the human body and the simplicity of the biomechanical models, values from these models can only be estimates and are best used for comparison purposes rather than as absolute values (Delleman et al. 1992).
7.8.
Revised NIOSH (1991) Lifting Equation
The 1991 revised lifting equation has been expanded beyond the previous guideline and can be applied to a larger percentage of lifting tasks (Waters et al. 1993). The recommended weight limit (RWL) was designed to protect 90% of the mixed (male / female) industrial working population against LBP. The 1991 equation is based on three main components: the standard lifting location, the load constant, and the risk-factor multipliers. The standard lifting location (SLL) serves as the 3D reference point for evaluating the parameters defining the worker's lifting posture. The SLL for the 1981 Guide was defined as a vertical height of 75 cm and a horizontal distance of 15 cm with respect to the midpoint between the ankles. The horizontal factor for the SLL was increased from a 15-cm to a 25-cm displacement for the 1991 equation. This was done in view of recent findings that showed 25 cm to be the minimum horizontal distance in lifting that did not interfere with the front of the body. This distance was also found to be used most often by workers (Garg and Badger 1986; Garg 1989). The load constant (LC) refers to the maximum weight value for the SLL. For the revised equation, the load constant was reduced from 40 kg to 23 kg. The reduction in the load constant was driven in part by the need to increase the 1981 horizontal displacement value from 15 cm to 25 cm for the 1991 equation (noted above). Table 16 shows definitions of the relevant terms utilized by the 1991 equation. The RWL is the product of the load constant and the six multipliers.
PHYSICAL TASKS: ANALYSIS, DESIGN, AND OPERATION

TABLE 16 Terms of the 1991 NIOSH Equation

Multiplier        Formula (cm)
Load constant     LC = 23 kg
Horizontal        HM = 25 / H
Vertical          VM = 1 - 0.003 |V - 75|
Distance          DM = 0.82 + 4.5 / D
Asymmetry         AM = 1 - 0.0032 A
Frequency         FM (see Table 17)
Coupling          CM (see Table 18)

H = the horizontal distance of the hands from the midpoint of the ankles, measured at the origin and destination of the lift (cm). V = the vertical distance of the hands from the floor, measured at the origin and destination of the lift (cm). D = the vertical travel distance between the origin and destination of the lift (cm). A = the angle of asymmetry (angular displacement of the load from the sagittal plane), measured at the origin and destination of the lift (degrees). F = average frequency of lift (lifts / minute). C = load coupling, the degree to which appropriate handles, devices, or lifting surfaces are present to assist lifting and reduce the possibility of dropping the load.

From Waters et al. 1993. Reprinted with permission by Taylor & Francis Ltd., London, http: / / www.tandf.co.uk.
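The multiplier formulas of the 1991 equation can be combined into a small calculator. This is a sketch: the function name is mine, FM and CM must be supplied from the frequency and coupling tables (Tables 17 and 18), and H and D are floored at 25 cm per the conventions described in the text:

```python
# Sketch of the revised (1991) NIOSH lifting equation using the Table 16
# formulas. FM and CM are looked up from Tables 17 and 18 and passed in.

def rwl_kg(h_cm, v_cm, d_cm, a_deg, fm, cm):
    lc = 23.0                                        # load constant (kg)
    hm = 25.0 / max(h_cm, 25.0)                      # horizontal multiplier
    vm = 1 - 0.003 * abs(v_cm - 75.0)                # vertical multiplier
    dm = 0.82 + 4.5 / max(d_cm, 25.0)                # distance multiplier
    am = 0.0 if a_deg > 135 else 1 - 0.0032 * a_deg  # asymmetry multiplier
    return lc * hm * vm * dm * am * fm * cm

# At the standard lifting location (H = 25, V = 75, D = 25, A = 0) with
# FM = CM = 1.0, every multiplier is 1 and the RWL equals the load constant:
print(round(rwl_kg(25, 75, 25, 0, 1.0, 1.0), 1))   # 23.0
# Asymmetry beyond 135 degrees zeroes the limit:
print(rwl_kg(25, 75, 25, 140, 1.0, 1.0))           # 0.0
```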
destination, or (3) position or guide the load at the destination. If the distance is less than 10 in (25 cm), then H should be set to 10 in (25 cm). The vertical location (V) is defined as the vertical height of the hands above the floor and is measured vertically from the floor to the midpoint between the hand grasps, as defined by the large middle knuckle. The vertical location is limited by the floor surface and the upper limit of vertical reach for lifting (i.e., 70 in or 175 cm). The vertical travel distance variable (D) is defined as the vertical travel distance of the hands between the origin and destination of the lift. For lifting tasks, D can be computed by subtracting the vertical location (V) at the origin of the lift from the corresponding V at the destination of the
TABLE 17 Frequency Multipliers for the 1991 Lifting Equation

Columns give the continuous work duration (up to 8 hours, up to 2 hours, up to 1 hour) and, within each, the vertical location of the lift (V < 75 cm or V >= 75 cm). Blank cells correspond to frequency-duration combinations for which FM = 0.

Frequency         <= 8 hours          <= 2 hours          <= 1 hour
(lifts / min)   V < 75   V >= 75   V < 75   V >= 75   V < 75   V >= 75
0.2              0.85     0.85      0.95     0.95      1.00     1.00
0.5              0.81     0.81      0.92     0.92      0.97     0.97
1                0.75     0.75      0.88     0.88      0.94     0.94
2                0.65     0.65      0.84     0.84      0.91     0.91
3                0.55     0.55      0.79     0.79      0.88     0.88
4                0.45     0.45      0.72     0.72      0.84     0.84
5                0.35     0.35      0.60     0.60      0.80     0.80
6                0.27     0.27      0.50     0.50      0.75     0.75
7                0.22     0.22      0.42     0.42      0.70     0.70
8                0.18     0.18      0.35     0.35      0.60     0.60
9                         0.15      0.30     0.30      0.52     0.52
10                        0.13      0.26     0.26      0.45     0.45
11                                           0.23      0.41     0.41
12                                           0.21      0.37     0.37
13                                                              0.34
14                                                              0.31
15                                                              0.28

From Waters et al. 1993. Reprinted with permission by Taylor & Francis Ltd., London, http: / / www.tandf.co.uk.
TABLE 18 The Coupling Multipliers for the 1991 Lifting Equation

Couplings    V < 75 cm    V >= 75 cm
Good            1.00         1.00
Fair            0.95         1.00
Poor            0.90         0.90

From Waters et al. 1993. Reprinted with permission by Taylor & Francis Ltd., London, http: / / www.tandf.co.uk.
lift. For lowering tasks, D is equal to V at the origin minus V at the destination. The variable D is assumed to be at least 10 in (25 cm) and no greater than 70 in (175 cm). If the vertical travel distance is less than 10 in (25 cm), then D should be set to 10 in (25 cm). The asymmetry angle A is limited to the range of 0° to 135°. If A > 135°, then AM is set equal to zero, which results in an RWL of 0. The asymmetry multiplier (AM) is 1 - (0.0032 A). The AM has a maximum value of 1.0 when the load is lifted directly in front of the body and a minimum value of 0.57 at 135° of asymmetry. The frequency multiplier (FM) is defined by (1) the number of lifts per minute (frequency), (2) the amount of time engaged in the lifting activity (duration), and (3) the vertical height of the lift from the floor. Lifting frequency (F) refers to the average number of lifts made per minute, as measured over a 15-min period. Lifting duration is classified into three categories: short duration,
TABLE 19 Coupling Classification

Good coupling
1. For containers of optimal design, such as some boxes, crates, etc., a good hand-to-object coupling would be defined as handles or hand-hold cutouts of optimal design.
2. For loose parts or irregular objects that are not usually containerized, such as castings, stock, and supply materials, a good hand-to-object coupling would be defined as a comfortable grip in which the hand can be easily wrapped around the object.

Fair coupling
1. For containers of optimal design, a fair hand-to-object coupling would be defined as handles or hand-hold cutouts of less than optimal design.
2. For containers of optimal design with no handles or hand-hold cutouts, or for loose parts or irregular objects, a fair hand-to-object coupling is defined as a grip in which the hand can be flexed about 90°.

Poor coupling
1. Containers of less than optimal design, or loose parts or irregular objects that are bulky, hard to handle, or have sharp edges.
2. Lifting nonrigid bags (i.e., bags that sag in the middle).

Notes:
1. An optimal handle design has a 0.75-1.5 in (1.9-3.8 cm) diameter, 4.5 in (11.5 cm) length, 2 in (5 cm) clearance, cylindrical shape, and a smooth, nonslip surface.
2. An optimal hand-hold cutout has the following approximate characteristics: 1.5 in (3.8 cm) height, 4.5 in (11.5 cm) length, semioval shape, 2 in (5 cm) clearance, smooth nonslip surface, and 0.25 in (0.60 cm) container thickness (e.g., double-thickness cardboard).
3. An optimal container design has a 16 in (40 cm) frontal length, 12 in (30 cm) height, and a smooth nonslip surface.
4. A worker should be capable of clamping the fingers at nearly 90° under the container, such as is required when lifting a cardboard box from the floor.
5. A container is considered less than optimal if it has a frontal length greater than 16 in (40 cm), a height greater than 12 in (30 cm), rough or slippery surfaces, sharp edges, an asymmetric center of mass, unstable contents, or requires the use of gloves. A loose object is considered bulky if the load cannot easily be balanced between the hand grasps.
6. A worker should be able to wrap the hand comfortably around the object without causing excessive wrist deviations or awkward postures, and the grip should not require excessive force.

After Waters et al. 1994. Courtesy of the U.S. Department of Health and Human Services.
[Table: job-redesign suggestions keyed to each condition of the 1991 equation - HM, VM, DM, AM, FM, or CM less than 1.0, and the RWL at the destination less than at the origin (which indicates a need for significant control of the object at the destination) - addressed by redesigning the job or modifying the container / object characteristics; the individual suggestion entries were not recoverable from the source. As recommended by Waters et al. 1994. Courtesy of the U.S. Department of Health and Human Services.]
for the risk factors (Karwowski 1992). This can be done using modern computer simulation techniques in order to examine the behavior of the 1991 NIOSH equation under a broad range of conditions. Karwowski and Gaddie (1995) simulated the 1991 equation using SLAM II (Pritsker 1986), a simulation language for alternative modeling, with the RWL computed as the product of the six independent factor multipliers represented as attributes of an entity flowing through the network. For this purpose, probability distributions for all the relevant risk factors were defined and a digital simulation of the revised equation was performed. As much as possible, the probability distributions for these factors were chosen to be representative of the real industrial workplace (Ciriello et al. 1990; Brokaw 1992; Karwowski and Brokaw 1992; Marras et al. 1993). Except for the vertical travel distance, coupling, and asymmetry multipliers, all factors were defined using either normal or lognormal distributions. For all the factors defined as having lognormal distributions, a procedure was developed to adjust for the required range of real values whenever necessary. The SLAM II computer simulation was run for a total of 100,000 trials, that is, randomly selected scenarios that realistically define industrial tasks in terms of the 1991 equation. Descriptive statistical data were collected for all the input (lifting) factors, the respective multipliers, and the resulting recommended weight limits. The input factor distributions were examined in order to verify the intended distributions. The results showed that for all lifting conditions examined, the distribution of recommended weight limit values had a mean of 7.22 kg and a standard deviation of 2.09 kg. In 95% of all cases, the RWL was at or below the value of 10.5 kg (about 23.1 lb). In 99.5% of all cases, the RWL value was at or below 12.5 kg (27.5 lb). That implies that when the LI is set to 1.0 for task design or evaluation purposes, only 0.5% of the (simulated) industrial lifting tasks would have RWLs greater than 12.5 kg. Taking into account the lifting task duration, in 99.5% of the simulated cases the RWL values were equal to or lower than 13.0 kg (28.6 lb) for up to one hour of lifting-task exposure, 12.5 kg (27.5 lb) for less than two hours of exposure, and 10.5 kg (23.1 lb) for lifting over an eight-hour shift. From a practical point of view, these values define simple and straightforward lifting limits, that is, threshold RWL values (TRWL) that can be used by practitioners for the purpose of immediate and easy-to-perform risk assessment of manual lifting tasks performed in industry. Because the 1991 equation is designed to ensure that the RWL will not exceed the acceptable lifting capability of 99% of male workers and 75% of female workers, this amounts to protecting about 90% of industrial workers if there is a 50 / 50 split between males and females. The TRWL value of 27.5 lb can then be used for immediate risk assessment of manual lifting tasks performed in industry. If this value is exceeded, then a more thorough examination of the identified tasks, as well as an evaluation of the physical capacity of the exposed workers, should be performed.
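The simulation approach described here can be illustrated with a toy Monte Carlo re-creation. This sketch is not the Karwowski and Gaddie (1995) study: the input distributions below are assumptions chosen only for illustration, FM is crudely fixed rather than sampled per the study's procedure, and CM is held at 1.0.

```python
# Toy Monte Carlo in the style described above: sample lifting-task parameters
# from ASSUMED distributions (not those of the study), evaluate the 1991 RWL
# for each scenario, and summarize the resulting distribution.

import random

def rwl_kg(h, v, d, a, fm=0.85, cm=1.0):
    hm = 25.0 / max(h, 25.0)
    vm = max(0.0, 1 - 0.003 * abs(v - 75.0))
    dm = 0.82 + 4.5 / max(d, 25.0)
    am = 0.0 if a > 135 else 1 - 0.0032 * a
    return 23.0 * hm * vm * dm * am * fm * cm

random.seed(1)
samples = []
for _ in range(100_000):
    h = random.lognormvariate(3.6, 0.25)     # horizontal distance, cm (assumed)
    v = random.gauss(80, 25)                 # vertical location, cm (assumed)
    d = abs(random.gauss(50, 20)) + 25       # travel distance, cm (assumed)
    a = random.uniform(0, 90)                # asymmetry angle, deg (assumed)
    samples.append(rwl_kg(h, v, d, a))

samples.sort()
print(round(sum(samples) / len(samples), 1))        # mean simulated RWL
print(round(samples[int(0.95 * len(samples))], 1))  # approximate 95th percentile
```

With distributions fitted to real workplace data, as in the study, the high percentiles of this simulated RWL distribution are what motivate the threshold RWL (TRWL) values quoted above.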
7.10.
Prevention of LBDs in Industry
The application of ergonomic principles to the design of MMH tasks is one of the most effective approaches to controlling the incidence and severity of LBDs (Ayoub et al. 1997). The goal of ergonomic job design is to reduce the ratio of task demands to worker capability to an acceptable level (see Figure 14). The application of ergonomic principles to task and workplace design permanently reduces stresses. Such changes are preferable to altering other aspects of the MMH system, such as work practices. For example, worker training may be ineffective if the practices trained are not reinforced and refreshed (Kroemer 1992), whereas altering the workplace is a lasting physical intervention.
7.10.1.
Job Severity Index
The job severity index (JSI) is a time- and frequency-weighted ratio of job demands to worker capacity. Worker capacity is predicted with the models developed by Ayoub et al. (1978), which use isometric strength and anthropometric data to predict psychophysical lifting capacity. JSI and each of its components are defined below:

JSI = Σ(i=1..n) [ (hours_i / hours_t) × (days_i / days_t) × Σ(j=1..m_i) ( (F_j / Σ(j=1..m_i) F_j) × (WT_j / CAP_j) ) ]

where:
n = number of task groups
hours_i = exposure hours/day for group i
days_i = exposure days/week for group i
hours_t = total hours/day for the job
days_t = total days/week for the job
m_i = number of tasks in group i
WT_j = maximum required weight of lift for task j
F_j = lifting frequency for task j
CAP_j = predicted lifting capacity for task j
Liles et al. (1984) performed a field study to determine the relationship between JSI and the incidence and severity of LBDs. A total of 453 subjects was included in the study. The results of the field study indicated that both the incidence and severity of recordable back injuries rose rapidly at JSI values greater than 1.5. The denominator for the incidence and severity rates is 100 full-time employees, that is, 200,000 exposure hours. JSI can be reduced to a desirable level by increasing worker capacity (e.g., selecting a worker with higher capacity) or by altering task and job parameters.
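The JSI computation follows directly from the definition above. The data layout used here (a list of task groups, each holding (F, WT, CAP) tuples) is our own choice for illustration:

```python
def jsi(task_groups, hours_t, days_t):
    """Job severity index: time- and frequency-weighted ratio of job
    demands (WT) to predicted worker capacity (CAP).

    task_groups: list of dicts with keys
      'hours' -- exposure hours/day for the group
      'days'  -- exposure days/week for the group
      'tasks' -- list of (F, WT, CAP) tuples: lifting frequency,
                 maximum required weight of lift, predicted capacity.
    """
    total = 0.0
    for group in task_groups:
        time_weight = (group['hours'] / hours_t) * (group['days'] / days_t)
        f_sum = sum(f for f, _, _ in group['tasks'])
        # Frequency-weighted demand-to-capacity ratio within the group.
        demand = sum((f / f_sum) * (wt / cap) for f, wt, cap in group['tasks'])
        total += time_weight * demand
    return total
```

A full-time job with a single task whose required weight exactly equals the predicted capacity yields JSI = 1.0; values above roughly 1.5 correspond to the rapidly rising injury rates reported by Liles et al. (1984).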
7.10.2.
Dynamic Model for Prediction of LBDs in Industry
Marras et al. (1993) performed a retrospective study to determine the relationships between workplace and trunk motion factors and LBD occurrence. A logistic regression analysis was performed to provide a model for estimating the probability of high-risk LBD membership. High-risk jobs were defined as jobs having incidence rates of 12 or more injuries per 200,000 hours of exposure. The regressors included in the model were lift rate (lifts/hr), average twisting velocity (deg/sec), maximum moment (Nm), maximum sagittal flexion (degrees), and maximum lateral velocity (deg/sec). The model can be used to guide workplace design changes because the probability of high-risk LBD membership can be computed before and after a design change. For example, maximum moment could be reduced by decreasing the load weight or the maximum horizontal distance between the load and the lumbar spine, and the associated decrease in high-risk membership probability can be estimated. The model differs considerably from the models discussed above in that LBD risk is not assumed to be related to individual capacity.
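Because the published model is a standard logistic regression, the before/after comparison works as sketched below. The coefficient values used here are placeholders for illustration only; the fitted coefficients must be taken from Marras et al. (1993):

```python
import math

def high_risk_probability(lift_rate, twist_vel, max_moment,
                          sag_flexion, lat_vel, coeffs):
    """p = 1 / (1 + exp(-(b0 + b1*x1 + ... + b5*x5))) over the five
    workplace/trunk-motion regressors named in the text."""
    b0, b1, b2, b3, b4, b5 = coeffs
    z = (b0 + b1 * lift_rate + b2 * twist_vel + b3 * max_moment
         + b4 * sag_flexion + b5 * lat_vel)
    return 1.0 / (1.0 + math.exp(-z))

# Placeholder coefficients (NOT the fitted values) to show the workflow:
DEMO = (-3.0, 0.01, 0.02, 0.02, 0.01, 0.02)
p_before = high_risk_probability(120, 20, 100, 40, 25, DEMO)
p_after = high_risk_probability(120, 20, 60, 40, 25, DEMO)  # reduced moment
```

With any positive coefficient on maximum moment, reducing the moment (e.g., by lowering load weight) lowers the estimated probability of high-risk membership, which is exactly the before/after comparison the text describes.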
8. WORK-RELATED MUSCULOSKELETAL DISORDERS OF THE UPPER EXTREMITY
8.1. Characteristics of Musculoskeletal Disorders
The National Institute for Occupational Safety and Health (NIOSH 1997) states that musculoskeletal disorders, which include disorders of the back, trunk, upper extremity, neck, and lower extremity, are among the 10 leading work-related illnesses and injuries in the United States. Praemer et al. (1992) report that work-related upper-extremity disorders (WUEDs), which are formally defined by the Bureau of Labor Statistics (BLS) as cumulative trauma illnesses, account for 11.0% of all work-related musculoskeletal disorders (illnesses). For comparison, occupational low-back disorders account for more than 51.0% of all WRMDs. According to BLS (1995), cumulative trauma illnesses of the upper extremity accounted for more than 60% of the occupational illnesses reported in 1993. These work-related illnesses, which include hearing impairments due to occupational noise exposure, represent 6.0% of all reportable work-related injuries and illnesses (Marras 1996). As reviewed by Karwowski and Marras (1997), work-related musculoskeletal disorders currently account for one-third of all occupational injuries and illnesses reported to the BLS by employers every year. These disorders thus constitute the largest job-related injury and illness problem in the United States today. According to OSHA (1999), in 1997 employers reported a total of 626,000 lost-workday disorders to the BLS, and these disorders accounted for $1 of every $3 spent on workers' compensation in that year. Employers pay more than $15-20 billion in workers' compensation costs for these disorders every year, and other expenses associated with MSDs may increase this total to $45-54 billion a year. Such statistics can be linked to several occupational risk factors, including increased production rates leading to thousands of repetitive movements every day, widespread use of computer keyboards, a higher percentage of women and older workers in the workforce, better record keeping of reportable illnesses and injuries on the job by employers, greater employee awareness of WUEDs and their relation to working conditions, and a marked shift in social policy regarding recognition and compensation of occupational injuries and illnesses.
8.2.
Definitions
Work-related musculoskeletal disorders (WRMDs) are those disorders and diseases of the musculoskeletal system that have a proven or hypothetical work-related causal component (Kuorinka and Forcier 1995). Musculoskeletal disorders are pathological entities in which the functions of the musculoskeletal system are disturbed or abnormal, while diseases are pathological entities with observable impairments in body configuration and function. Although WUEDs are a heterogeneous group of disorders, and the current state of knowledge does not allow for a general description of their course, it is nevertheless possible to identify a group of so-called generic risk factors, including biomechanical factors such as static and dynamic loading on the body and posture, cognitive demands, and organizational and psychosocial factors, for which there is ample evidence of work-relatedness and a higher risk of developing WUEDs. Generic risk factors, which typically interact and accumulate to form cascading cycles, are assumed to be directly responsible for pathophysiological phenomena that depend on the location, intensity, temporal variation, duration, and repetitiveness of the generic risk factors (Kuorinka and Forcier 1995). It is also proposed that both insufficient and excessive loading on the musculoskeletal system have deleterious effects, and that the pathophysiological process depends on individual characteristics with respect to body responses, coping mechanisms, and adaptation to risk factors. Cumulative trauma disorders can be defined by combining the separate meanings of each word (Putz-Anderson 1993). Cumulative indicates that these disorders develop gradually over periods of time as a result of repeated stresses. The cumulative concept is based on the assumption that each repetition of an activity produces some trauma or wear and tear on the tissues and joints of the particular body part. The term trauma indicates bodily injury from mechanical stresses, while disorders refers to physical ailments. The above definition also stipulates a simple cause-and-effect model for CTD development. According to such a model, the human body needs sufficient intervals of rest time between episodes of repeated strain to repair itself; if the recovery time is insufficient and is combined with high repetition of forceful exertions and awkward postures, the worker is at higher risk of developing a CTD. In the context of the generic model for prevention shown in Figure 15, the above
asbestosis). Work-related diseases are defined as multifactorial when the work environment and the performance of work contribute significantly to the causation of disease (WHO 1985). Work-related diseases can be partially caused by adverse work conditions; however, personal characteristics and environmental and sociocultural factors are also recognized as risk factors for these diseases. The scientific evidence of the work-relatedness of musculoskeletal disorders has been firmly established by numerous epidemiologic studies conducted over the last 25 years of research in the field (NIOSH 1997). It has also been noted that the incidence and prevalence of musculoskeletal disorders in the reference populations were low, but not zero, most likely indicating nonwork-related causes of these disorders. It was also documented that variables such as cultural differences and psychosocial and economic factors, which may influence one's perception and tolerance of pain and consequently affect the willingness to report musculoskeletal problems, may have a significant impact on the progression from disorder to work disability (WHO 1985; Leino 1989). Armstrong et al. (1993) developed a conceptual model for the pathogenesis of work-related musculoskeletal disorders. The model is based on a set of four cascading and interacting state variables (exposure, dose, capacity, and response), which are measures of the system state at any given time. The response at one level can act as a dose at the next level (see Figure 15). Furthermore, it is assumed that a response to one or more doses can diminish or increase the capacity for responding to successive doses. This conceptual model for the development of WRMDs reflects the multifactorial nature of work-related upper-extremity disorders and the complex interactions among the exposure, dose, capacity, and response variables. The proposed model also reflects the complexity of interactions among physiological, mechanical, individual, and psychosocial risk factors. In the model, exposure refers to the external factors (i.e., work requirements) that produce the internal dose (i.e., tissue loads and metabolic demands and factors). Workplace organization and hand tool design characteristics are examples of such external factors, which can determine work postures and define loads on the affected tissues or the velocity of muscular contractions. Dose is defined by a set of mechanical, physiological, or psychological factors that in some way disturb an internal state of the affected worker. Mechanical disturbance factors may include tissue forces and deformations produced as a result of exertion or movement of the body. Physiological disturbances are factors such as consumption of metabolic substrates or tissue damage, while psychological disturbance factors are those related to, for example, anxiety about work or inadequate social support. Changes in the state variables of the worker are defined by the model as responses. A response is an effect of the dose caused by exposure. For example, hand exertion can cause elastic deformation of tendons and changes in tissue composition and/or shape, which in turn may result in hand discomfort. The dose-response time relationship implies that the effect of a dose can be immediate or the response may be delayed for a long period of time. The proposed model stipulates that system changes (responses) can also result in either increased dose tolerance (adaptation) or reduced dose tolerance, lowering the system capacity. Capacity is defined as the worker's ability (physical or psychological) to resist system destabilization due to various doses. While capacity can be reduced or enhanced by previous doses and responses, it is assumed that most individuals are able to adapt to certain types and levels of physical activity.
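A toy state-update loop can make the cascade concrete. Everything below (the linear update rule and the recovery and damage constants) is an illustrative invention of ours; Armstrong et al. (1993) specify only the qualitative relationships, not equations:

```python
def cascade(doses, capacity, recovery=0.1, damage=0.5):
    """Toy sketch of the exposure -> dose -> response -> capacity cascade.

    Each dose above the current capacity produces a response; responses
    erode capacity (reduced dose tolerance), while sub-capacity doses let
    capacity recover slightly (adaptation).  Parameters are illustrative.
    """
    responses = []
    for dose in doses:
        response = max(0.0, dose - capacity)
        if response > 0.0:
            capacity -= damage * response  # reduced dose tolerance
        else:
            capacity += recovery           # adaptation
        responses.append(response)
    return responses, capacity
```

Running a sequence of small doses followed by a large one shows the qualitative behavior the model describes: tolerated doses produce no response and slowly raise capacity, while an excessive dose produces a response and lowers the capacity available for the next one.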
Figure 16 Conceptual CTS Model. (Adapted from Tanaka and McGlothlin 1999, reprin
ted with permission from Elsevier Science.)
of a conflict between motor control of the postural activity and the control needed for rhythmic movement or skilled manipulation. In other words, the primary cause of work-related muscular pain and injury may be altered motor control, resulting in an imbalance between harmonious motor unit recruitment and the relaxation of muscles not directly involved in the activity. As discussed by Armstrong et al. (1993), poor ergonomic design of tools with respect to weight, shape, and size can impose extreme wrist positions and high forces on the worker's musculoskeletal system. Holding heavier objects requires an increased power grip and high tension in the finger flexor tendons, causing increased pressure in the carpal tunnel. Furthermore, tasks that induce hand and arm vibration cause an involuntary increase in power grip through a reflex of the stretch receptors. Vibration can also cause protein leakage from the blood vessels in the nerve trunks and thereby result in edema and increased pressure in the nerve (Lundborg et al. 1987).

Figure 17 Three-Dimensional Illustration of the Conceptual CTS Model with Time-Exposure Factor. (Adapted from Tanaka and McGlothlin 1999, reprinted with permission from Elsevier Science.)

TABLE 22 Relationship between Physical Stresses and WRMD Risk Factors (ANSI Z-365)

Physical Stress   Magnitude                        Repetition Rate                Duration
Force             Forceful exertions and motions   Repetitive exertions           Sustained exertions
Joint angle       Extreme postures and motions     Repetitive motions             Sustained postures
Recovery          Insufficient resting level       Insufficient pauses or breaks  Insufficient rest time
Vibration         High vibration level             Repeated vibration exposure    Long vibration exposure
Temperature       Cold temperature                 Repeated cold exposure         Long cold exposure
8.5.
Musculoskeletal Disorders: Occupational Risk Factors
A risk factor is defined as an attribute or exposure that increases the probability of a disease or disorder (Putz-Anderson 1988). Biomechanical risk factors for musculoskeletal disorders include repetitive and sustained exertions, awkward postures, and the application of high mechanical forces. Vibration and cold environments may also accelerate the development of musculoskeletal disorders. Typical tools for identifying the potential for development of musculoskeletal disorders include work-methods analyses and checklists designed to itemize undesirable worksite conditions or worker activities that contribute to injury. Since most manual work requires the active use of the arms and hands, the structures of the upper extremities are particularly vulnerable to soft tissue injury. WUEDs are typically associated with repetitive manual tasks with forceful exertions, such as those performed on assembly lines or when using hand tools, computer keyboards, and other devices, or when operating machinery. These tasks impose repeated stresses on the upper body, that is, the muscles, tendons, ligaments, nerve tissues, and neurovascular structures. There are three basic types of work-related disorders of the upper extremity: tendon disorders (such as tendinitis), nerve disorders (such as carpal tunnel syndrome), and neurovascular disorders (such as thoracic outlet syndrome or vibration-related Raynaud's syndrome). The main biomechanical risk factors for musculoskeletal disorders are presented in Table 22.
An ordinal rating is assigned to each of the variables according to the exposure data. The proposed strain index is the product of the six multipliers assigned to these variables.
The strain index methodology aims to discriminate between jobs that expose workers to risk factors (task variables) that cause WUEDs and jobs that do not. However, the strain index is not designed to identify jobs associated with an increased risk of any single specific disorder. It is anticipated that jobs identified as high risk by the strain index will exhibit higher levels of WUEDs among workers who currently perform, or historically performed, those jobs believed to be hazardous. Large-scale studies are needed to validate and update the proposed methodology. The strain index has the following limitations in terms of its application:

1. There are some disorders of the distal upper extremity that should not be predicted by the strain index, such as hand-arm vibration syndrome (HAVS) and hypothenar hammer syndrome.
2. The strain index has not been developed to predict increased risk for distal upper-extremity disorders of uncertain etiology or relationship to work. Examples include ganglion cysts, osteoarthritis, avascular necrosis of carpal bones, and ulnar nerve entrapment at the elbow.
3. The strain index has not been developed to predict disorders outside the distal upper extremity, such as disorders of the shoulder, shoulder girdle, neck, or back.

The following major principles have been derived from the physiological model of localized muscle fatigue:

1. The primary task variables are intensity of exertion, duration of exertion, and duration of recovery.
2. Intensity of exertion refers to the force required to perform a task one time. It is characterized as a percentage of maximal strength.
3. Duration of exertion describes how long an exertion is applied. The sum of the duration of exertion and the duration of recovery is the cycle time of one exertional cycle.
4. Wrist posture, type of grasp, and speed of work are considered via their effects on maximal strength.
5. The relationship between strain on the body (endurance time) and intensity of exertion is nonlinear.

The following are the major principles derived from the epidemiological literature:

1. The primary task variables associated with an increased prevalence or incidence of distal upper-extremity disorders are intensity of exertion (force), repetition rate, and percentage of recovery time per cycle.
2. Intensity of exertion was the most important task variable in two of the three studies explicitly mentioned. The majority (or all) of the morbidity was related to disorders of the muscle-tendon unit. The third study, which considered only CTS, found that repetition was more important than forcefulness (Silverstein et al. 1987).
3. Wrist posture may not be an independent risk factor. It may contribute to an increased incidence of distal upper-extremity disorders when combined with intensity of exertion.
4. The roles of other task variables have not been clearly established epidemiologically; therefore, one has to rely on biomechanical and physiological principles to explain their relationship to upper-extremity disorders, if any.

Moore and Garg (1994) compared exposure factors for jobs associated with WUEDs to jobs without a prevalence of such disorders. They found that the intensity of exertion, estimated as a percentage of maximal strength and adjusted for wrist posture and speed of work, was the major discriminating factor. The relationship between the incidence rate for distal upper-extremity disorders and the job risk factors was defined as follows:

IR = 30 F^2 / RT^0.6

where:
IR = incidence rate (per 100 workers per year)
F = intensity of exertion (%MS)
RT = recovery time (percentage of cycle time)

The proposed strain index is a semiquantitative job analysis methodology that results in a numerical score that is believed to correlate with the risk of developing distal upper-extremity disorders. The SI score represents the product of six multipliers that correspond to six task variables. These variables are:
1. Intensity of exertion
2. Duration of exertion
3. Exertions per minute
4. Hand/wrist posture
5. Speed of work
6. Duration of task per day

TABLE 23 Rating Criteria for Strain Index

Rating   Intensity of Exertion   Duration of Exertion (% of Cycle)   Efforts/Minute   Hand/Wrist Posture   Speed of Work   Duration per Day (h)
1        Light                   <10                                 <4               Very good            Very slow       <1
2        Somewhat hard           10-29                               4-8              Good                 Slow            1-2
3        Hard                    30-49                               9-14             Fair                 Fair            2-4
4        Very hard               50-79                               15-19            Bad                  Fast            4-8
5        Near maximal            >=80                                >=20             Very bad             Very fast       >=8

Adapted from Moore and Garg 1995.
These ratings, applied to the model variables, are presented in Table 23. The multipliers for each task variable related to these ratings are shown in Table 24. The strain index score, as the product of all six multipliers, is defined as follows:

Strain index (SI) = (intensity of exertion multiplier) x (duration of exertion multiplier) x (exertions per minute multiplier) x (posture multiplier) x (speed of work multiplier) x (duration per day multiplier)

Intensity of exertion, the most critical variable of the SI, is an estimate of the force requirements of a task and is defined as the percentage of maximum strength required to perform the task once. As such, the intensity of exertion is related to physiological stress (percentage of maximal strength) and biomechanical stresses (tensile load) on the muscle-tendon units of the distal upper extremity. The intensity of exertion is estimated by an observer using verbal descriptors and assigned a corresponding rating value (1, 2, 3, 4, or 5). The multiplier values are defined based on the rating score raised to a power of 1.6 in order to reflect the nonlinear nature of the relationship between intensity of exertion and manifestations of strain according to psychophysical theory. The multipliers for the other task variables are modifiers to the intensity of exertion multiplier. Duration of exertion is defined as the percentage of time an exertion is applied per cycle. The terms cycle and cycle time refer to the exertional cycle and the average exertional cycle time, respectively. Duration of recovery per cycle is equal to the exertional cycle time minus the duration of exertion per cycle. The duration of exertion is the average duration of exertion per exertional cycle (calculated by dividing the sum of the durations of a series of exertions by the number of observed exertions).
TABLE 24 Multiplier Table for Strain Index

Rating   Intensity of Exertion   Duration of Exertion (% of Cycle)   Efforts/Minute   Hand/Wrist Posture   Speed of Work   Duration per Day (h)
1        1                       0.5                                 0.5              1.0                  1.0             0.25
2        3                       1.0                                 1.0              1.0                  1.0             0.50
3        6                       1.5                                 1.5              1.5                  1.0             0.75
4        9                       2.0                                 2.0              2.0                  1.5             1.00
5        13                      3.0(a)                              3.0              3.0                  2.0             1.50

Adapted from Moore and Garg 1995. (a) If the duration of exertion is 100%, then the efforts/minute multiplier should be set to 3.0.
TABLE 25 An Example Demonstrating the Procedure for Calculating the SI Score

                Intensity of    Duration of Exertion
                Exertion        (% of Cycle)           Efforts/Minute   Posture   Speed of Work   Duration per Day (h)
Exposure data   Somewhat hard   60%                    12               Fair      Fair            4-8
Rating          2               4                      3                3         3               4
Multiplier      3.0             2.0                    1.5              1.5       1.0             1.0

SI score = 3.0 x 2.0 x 1.5 x 1.5 x 1.0 x 1.0 = 13.5

Adapted from Moore and Garg 1995.
Efforts per minute is the number of exertions per minute (i.e., repetitiveness) and is synonymous with frequency. Efforts per minute are measured by counting the number of exertions that occur during a representative observation period (as described for determining the average exertional cycle time). The measured results are compared to the ranges shown in Table 23 and given the corresponding ratings. The multipliers are defined in Table 24. Posture refers to the anatomical position of the wrist or hand relative to the neutral position and is rated qualitatively using verbal anchors. As shown in Table 23, posture has four relevant ratings. Postures that are very good or good are essentially neutral and have multipliers of 1.0. Hand or wrist postures progressively deviate beyond the neutral range to extremes, graded as fair, bad, and very bad. Speed of work estimates the perceived pace of the job and is subjectively estimated by a job analyst or ergonomics team. Once a verbal anchor is selected, a rating is assigned. Duration of task per day is defined as the total time that a task is performed per day. As such, this variable reflects the beneficial effects of task diversity, such as job rotation, and the adverse effects of prolonged activity, such as overtime. Duration of task per day is measured in hours and assigned a rating according to Table 23. Application of the strain index involves five steps:

1. Collecting data
2. Assigning rating values
3. Determining multipliers
4. Calculating the SI score
5. Interpreting the results
TABLE 26 Maximum Acceptable Forces for Female Wrist Flexion (Power Grip) (N)

Percentage of                      Repetition Rate
Population      2/min    5/min    10/min   15/min   20/min
90              14.9     14.9     13.5     12.0     10.2
75              23.2     23.2     20.9     18.6     15.8
50              32.3     32.3     29.0     26.0     22.1
25              41.5     41.5     37.2     33.5     28.4
10              49.8     49.8     44.6     40.1     34.0

Adapted with permission from Snook et al. 1995. Copyright Taylor & Francis Ltd., London, http://www.tandf.co.uk.
The values of intensity of exertion, hand/wrist posture, and speed of work are estimated using the verbal descriptors in Table 23. The values of percentage duration of exertion per cycle, efforts per minute, and duration per day are based on measurements and counts. These values are then compared to the appropriate column in Table 23 and assigned a rating. The calculations of the SI are shown in Table 25.
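The five-step procedure above reduces to a table lookup and a product. A minimal sketch using the multiplier values from Table 24 (the variable names are ours):

```python
# Multipliers from Table 24 (Moore and Garg 1995), indexed by rating 1-5.
MULTIPLIERS = {
    "intensity": [1.0, 3.0, 6.0, 9.0, 13.0],
    "duration":  [0.5, 1.0, 1.5, 2.0, 3.0],
    "efforts":   [0.5, 1.0, 1.5, 2.0, 3.0],
    "posture":   [1.0, 1.0, 1.5, 2.0, 3.0],
    "speed":     [1.0, 1.0, 1.0, 1.5, 2.0],
    "daily":     [0.25, 0.50, 0.75, 1.00, 1.50],
}

def strain_index(ratings):
    """SI score = product of the six multipliers for the given ratings
    (a dict mapping each variable name above to a rating of 1-5)."""
    score = 1.0
    for variable, rating in ratings.items():
        score *= MULTIPLIERS[variable][rating - 1]
    return score

# The Table 25 example: ratings 2, 4, 3, 3, 3, 4 give SI = 13.5.
example = {"intensity": 2, "duration": 4, "efforts": 3,
           "posture": 3, "speed": 3, "daily": 4}
```

Note that the special case in the Table 24 footnote (duration of exertion of 100%) would require overriding the efforts/minute multiplier with 3.0; this sketch does not handle that case.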
9.3.
Psychophysical Models: The Maximum Acceptable Wrist Torque
Snook et al. (1995) used the psychophysical approach to determine the maximum acceptable forces for various types, grips, and repetition rates of repetitive wrist motion that would not result in significant changes in wrist strength, tactile sensitivity, or the number of symptoms reported by the female subjects. Three types of wrist motion were used:

1. Flexion motion with a power grip
2. Flexion motion with a pinch grip
3. Extension motion with a power grip

The dependent variables were maximum acceptable wrist torque, maximum isometric wrist strength, tactile sensitivity, and symptoms. The maximum acceptable wrist torque (MAWT) was defined as the number of newton meters of resistance set in the brake by the participants (averaged and recorded every minute). The data for maximum acceptable wrist torques for the two-days-per-week exposure were used to estimate the maximum acceptable torques for different repetition rates of wrist flexion (power grip) and different percentages of the population. This was done by using the adjusted means and coefficients of variation from the two-days-per-week exposure. The original torque values were converted into forces by dividing each torque by the average length of the handle lever (0.081 m). The estimated values of the maximum acceptable forces for female wrist flexion (power grip) are shown in Table 26. Similarly, estimated maximum acceptable forces were developed for wrist flexion (pinch grip, see Table 27) and wrist extension (power grip, see Table 28). The torques were converted into forces by dividing by 0.081 m for the power grip and 0.123 m for the pinch grip. Snook et al. (1995) note that the estimated values of maximum acceptable wrist torque do not apply to tasks and wrist positions other than those used in the study.
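The torque-to-force conversion used for these tables is a simple division by the lever length, F = T / L. A sketch (the constant names are ours; the lever lengths are those reported above):

```python
POWER_GRIP_LEVER_M = 0.081  # average handle lever length, Snook et al. (1995)
PINCH_GRIP_LEVER_M = 0.123

def torque_to_force(torque_nm: float, lever_m: float) -> float:
    """Convert a maximum acceptable wrist torque (N*m) to a handle
    force (N): F = T / L."""
    return torque_nm / lever_m
```

For example, a power-grip torque of 1.62 N·m corresponds to 1.62 / 0.081 = 20 N of handle force.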
TABLE 28 Maximum Acceptable Forces for Female Wrist Extension (Power Grip) (N)

Percentage of                      Repetition Rate
Population      2/min    5/min    10/min   15/min   20/min
90              8.8      8.8      7.8      6.9      5.4
75              13.6     13.6     12.1     10.9     8.5
50              18.9     18.9     16.8     15.1     11.9
25              24.2     24.2     21.5     19.3     15.2
10              29.0     29.0     25.8     23.2     18.3

Adapted with permission from Snook et al. 1995. Copyright Taylor & Francis Ltd., London, http://www.tandf.co.uk.
10. MANAGEMENT OF MUSCULOSKELETAL DISORDERS

10.1. Ergonomic Guidelines
Most of the current guidelines for the control of musculoskeletal disorders at work aim to reduce the extent of movements at the joints, reduce excessive force levels, and reduce exposure to highly repetitive and stereotyped movements. For example, some of the common methods to control wrist posture, which is believed to be one of the risk factors for carpal tunnel syndrome, are altering the geometry of the tool or controls (e.g., bending the tool or handle), changing the location/positioning of the part, and changing the position of the worker in relation to the work object. To control the extent of force required to perform a task, one can reduce the force required through tool and fixture redesign, distribute the application of force, or increase the mechanical advantage of the (muscle) lever system. It has been shown that in dynamic tasks involving the upper extremities, the posture of the hand itself has very little predictive power for the risk of musculoskeletal disorders. Rather, it is the velocity and acceleration of the joint that significantly differentiate musculoskeletal disorder risk levels (Schoenmarklin and Marras 1990). This is because the tendon force, which is a risk factor for musculoskeletal disorders, is affected by wrist acceleration. The acceleration of the wrist in a dynamic task requires transmission of the forearm forces to the tendons. Some of this force is lost to friction against the ligaments and bones in the carpal tunnel. This frictional force can irritate the tendons' synovial membranes and cause tenosynovitis or carpal tunnel syndrome (CTS). These research results clearly demonstrate the importance of dynamic components in assessing the CTD risk of highly repetitive jobs. With respect to task repetitiveness, it is believed today that jobs with a cycle time of less than 30 seconds and a fundamental cycle that exceeds 50% of the total cycle (exposure) time lead to an increased risk of musculoskeletal disorders. Because of the neurophysiological needs of the working muscles, adequate rest pauses (determined based on scientific knowledge of the physiology of muscular fatigue and recovery) should be scheduled to provide relief for the most active muscles used on the job. Furthermore, reduction in task repetition can be achieved by, for example, task enlargement (increasing the variety of tasks to perform), increasing the job cycle time, and work mechanization and automation. The expected benefits of reducing musculoskeletal disorder problems in industry are improved productivity and quality of work products, enhanced safety and health of employees, higher employee morale, and accommodation of people with alternative physical abilities. Strategies for managing musculoskeletal disorders at work should focus on prevention efforts and should include, at the plant level, employee education, ergonomic job redesign, and other early intervention efforts, including engineering design technologies such as workplace reengineering and active and passive surveillance. At the macrolevel, management of musculoskeletal disorders should aim to provide adequate occupational health care provisions, legislation, and industry-wide standardization.
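The repetitiveness criterion cited above is straightforward to operationalize. This check simply restates the text's two conditions (the function name and argument forms are ours):

```python
def highly_repetitive(cycle_time_s: float, fundamental_fraction: float) -> bool:
    """True when the job's cycle time is under 30 seconds and its
    fundamental cycle exceeds 50% of the total cycle (exposure) time,
    the combination the text associates with increased MSD risk."""
    return cycle_time_s < 30.0 and fundamental_fraction > 0.5
```

A 20-second cycle whose fundamental cycle occupies 60% of the total time would be flagged; a 45-second cycle would not.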
10.2.
Administrative and Engineering Controls
The recommendations for prevention of musculoskeletal disorders can be classified as either primarily administrative, that is, focusing on personnel solutions, or engineering, that is, focusing on redesigning tools, workstations, and jobs (Putz-Anderson 1988). In general, administrative controls are actions taken by management that limit the potentially harmful effects of a physically stressful job on individual workers. Administrative controls, which are focused on workers, refer to the modification of existing personnel functions such as worker training, job rotation, and matching employees to job assignments. Workplace design to prevent repetitive strain injury should be directed toward fulfilling the following recommendations:

1. Permit several different working postures.
2. Place controls, tools, and materials between waist and shoulder height for ease of reach and operation.
3. Use jigs and fixtures for holding purposes.
4. Resequence jobs to reduce repetition.
5. Automate highly repetitive operations.
6. Allow self-pacing of work whenever feasible.
7. Allow frequent (voluntary and mandatory) rest breaks.

The following guidelines should also be followed (for details, see Putz-Anderson 1988):
ctors may pose minimal risk of injury if sufficient exposure is not present or if sufficient recovery time is provided. It is known that changes in the levels of risk factors will result in changes in the risk of WRMDs. Therefore, a reduction in WRMD risk factors should reduce the risk for WRMDs. Figure 18 shows the flow chart for the ergonomics rule for control of MSDs at the workplace proposed by OSHA (2000).
11.2. Work Organization Risk Factors
The mechanisms by which poor work organization could increase the risk for WUEDs
include modifying the extent of exposure to other risk factors (physical and en
vironmental) and modifying the
1094
PERFORMANCE IMPROVEMENT MANAGEMENT
[Figure 18: Flow chart of the ergonomics rule for control of MSDs in the workplace proposed by OSHA (2000). The chart traces the employer's decisions: whether the standard applies to the business (general industry is covered; agriculture, construction, maritime operations, and railroads are not); providing employees with basic information about MSDs and the standard; determining whether a reported case is an MSD incident; determining whether the job involves exposure to relevant risk factors at the levels in the basic screening tool; implementing the MSD management process (including work restriction protection) together with the program elements (management leadership, employee participation, job hazard analysis, types of controls, training), or a quick fix where eligible; and maintaining controls and related training until MSD hazards are reduced to the extent feasible or risk factors fall below the levels in the basic screening tool.]
12.4. Analysis of Existing Records and Survey (Past Case(s)-Initiated Entry into
the Process)
Analysis of existing records and surveys consists of reviewing existing database
s, principally collected for other purposes, to identify incidents and patterns
of work-related cumulative trauma disorders. It can help determine and prioritiz
e the jobs to be further analyzed using job analysis. There are three types of e
xisting records and survey analyses:
1. Initial analysis of upper-limb WRMDs reported over the last 24-36 months
2. Ongoing trend analysis of past cases
3. Health surveys
12.5. Job Surveys (Proactive Entry into the Process)
The aim of proactive job surveys is to identify specific jobs and processes that may put employees at risk of developing WRMDs. Job surveys are typically performed after the jobs identified by the previous two surveillance components have been rectified. Job surveys of all jobs, or of a representative sample of jobs, should be performed. Analysis of existing records will be used to estimate the potential magnitude of the problem in the workplace. The number of employees in each job, department, or similar population will be determined first. Then the incidence rates will be calculated on the basis of hours worked, as follows:

Incidence (new case) rate (IR) = (number of new cases during the time period x 200,000) / (total hours worked during the time period)

This is equivalent to the number of new cases per 100 worker-years. Workplace-wide incidence rates (IRs) will be calculated for all cumulative trauma disorders and by body location for each department, process, or type of job. (If specific work hours are not readily available, the number of full-time equivalent employees in each area multiplied by 2,000 hours will be used to obtain the denominator.) Severity rates (SRs) traditionally use the number of lost workdays rather than the number of cases in the numerator. Prevalence rates (PRs) are the number of existing cases per 200,000 hours or the percentage of workers with the condition (new cases plus old cases that are still active).
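The rate definitions above can be collected into a small helper, sketched here in Python (the function and variable names are ours, not from the standard; the 200,000-hour base corresponds to 100 full-time workers at 2,000 hours per year):

```python
BASE_HOURS = 200_000  # 100 full-time workers x 2,000 hours/year


def denominator_hours(total_hours=None, full_time_equivalents=None):
    """Use exact hours worked if available, else the FTE x 2,000-hour fallback."""
    if total_hours is not None:
        return total_hours
    return full_time_equivalents * 2_000


def incidence_rate(new_cases, hours):
    """IR: new cases per 200,000 hours worked (about 100 worker-years)."""
    return new_cases * BASE_HOURS / hours


def severity_rate(lost_workdays, hours):
    """SR: lost workdays (rather than cases) per 200,000 hours worked."""
    return lost_workdays * BASE_HOURS / hours


def prevalence_rate(existing_cases, hours):
    """PR: existing cases (new plus still-active old) per 200,000 hours worked."""
    return existing_cases * BASE_HOURS / hours


# Example: a department of 50 FTEs reporting 4 new WRMD cases in a year
hours = denominator_hours(full_time_equivalents=50)  # 100,000 hours
print(incidence_rate(4, hours))                      # 8.0 cases per 200,000 hours
```

The same denominator serves all three rates; only the numerator (new cases, lost workdays, or existing cases) changes.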
12.6. ANSI Z-365 Evaluation Tools for Control of WRMDs
Some of the research evaluation tools defined by the ANSI Z-365 Draft Standard for the purpose of surveillance and job analysis include the following:
1. Proactive job survey (checklist #1)
2. Quick check risk factor checklist (checklist #2)
3. Symptom survey (questionnaire)
4. Posture discomfort survey
5. History of present illness recording form
12.7. Analysis and Interpretation of Surveillance Data
Surveillance data can be analyzed and interpreted to study possible associations between the WRMD surveillance data and the risk-factor surveillance data. The two principal goals of the analysis are to help identify patterns in the data that reflect large and stable differences between jobs or departments and to target and evaluate intervention strategies. This analysis can be done on the number of existing WRMD cases (cross-sectional analysis) or on the number of new WRMD cases in a retrospective and prospective fashion (retrospective and prospective analysis). The simplest way to assess the association between risk factors and WRMDs is to calculate odds ratios (Table 29). To do this, the prevalence data obtained in health surveillance are linked with the data obtained in risk-factor surveillance. The data used can be those obtained with symptom questionnaires.

TABLE 29  Examples of Odds Ratio Calculations for a Firm of 140 Employees

                        WRMDs Present   No WRMDs      Total
Risk factor present     15 (A)          25 (B)        40 (A + B)
Risk factor absent      15 (C)          85 (D)        100 (C + D)
Total                   30 (A + C)      110 (B + D)   140 (N)

The number in each cell indicates the count of employees with or without WRMDs and with or without the risk factor.
Odds ratio (OR) = (A x D) / (B x C) = (15 x 85) / (25 x 15) = 3.4
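The Table 29 calculation is mechanical and easy to script; the sketch below (function name is ours) reproduces the 2 x 2 computation:

```python
def odds_ratio(a, b, c, d):
    """2x2 odds ratio: a and b are workers with the risk factor
    (with and without WRMDs), c and d are workers without the
    risk factor (with and without WRMDs)."""
    return (a * d) / (b * c)


# Table 29 cells for the firm of 140 employees: A=15, B=25, C=15, D=85
print(odds_ratio(15, 25, 15, 85))  # 3.4
```

An odds ratio above 1 suggests that workers exposed to the risk factor have higher odds of a WRMD than unexposed workers; here the exposed group's odds are 3.4 times higher.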
14. PROPOSED OSHA ERGONOMICS REGULATIONS
The National Research Council / National Academy of Sciences of the United States recently concluded that there is a clear relationship between musculoskeletal disorders and work and between ergonomic interventions and a decrease in such disorders. According to the Academy, research demonstrates that specific interventions can reduce the reported rate of musculoskeletal disorders for workers who perform high-risk tasks (National Research Council 1998). An effective and universal standard for dealing with work-related hazards should significantly reduce the risk of WRMDs to employees.
The high prevalence of work-related musculoskeletal disorders has motivated the Occupational Safety and Health Administration (OSHA) to focus on standardization efforts. Recently, OSHA announced the initiation of rulemaking under Section 6(b) of the Occupational Safety and Health Act of 1970, 29 U.S.C. 655, to amend Part 1910 of Title 29 of the Code of Federal Regulations and requested information relevant to preventing, eliminating, and reducing occupational exposure to ergonomic hazards. According to OSHA (2000), the proposed standard is needed to bring this protection to the remaining employees in general industry workplaces that are at significant risk of incurring a work-related musculoskeletal disorder but are currently without ergonomics programs. A substantial body of scientific evidence supports OSHA's effort to provide workers with ergonomic protection. This evidence strongly supports two basic conclusions: (1) there is a positive relationship between work-related musculoskeletal disorders and workplace risk factors, and (2) ergonomics programs and specific ergonomic interventions can reduce these injuries.
14.1. Main Provisions of the Draft Ergonomics Standard
The standard applies to employers in general industry whose employees work in manufacturing jobs or manual handling jobs or report musculoskeletal disorders (MSDs) that meet the criteria of the standard (see Figure 18). The standard applies to the following jobs:
1. Manufacturing jobs. Manufacturing jobs are production jobs in which employees perform the physical work activities of producing a product and in which these activities make up a significant amount of their work time.
2. Manual handling jobs. Manual handling jobs are jobs in which employees perform forceful lifting / lowering, pushing / pulling, or carrying. Manual handling jobs include only those jobs in which forceful manual handling is a core element of the employee's job.
3. Jobs with a musculoskeletal disorder. Jobs with an MSD are those jobs in which an employee reports an MSD that meets all of these criteria: (a) the MSD is reported after the effective date; (b) the MSD is an OSHA-recordable MSD, or one that would be recordable if the employer were required to keep OSHA injury and illness records; and (c) the MSD also meets the screening criteria.
The proposed standard covers only those OSHA-recordable MSDs that also meet these screening criteria:
1. The physical work activities and conditions in the job are reasonably likely to cause or contribute to the type of MSD reported; and
2. These activities and conditions are a core element of the job and / or make up a significant amount of the employee's work time.
The standard applies only to the jobs specified in Section 1910.901, not to the entire workplace or to other workplaces in the company. The standard does not apply to agriculture, construction, or maritime operations. In the proposed standard, a full ergonomics program consists of these six program elements:
1. Management leadership and employee participation
2. Hazard information and reporting
3. Job hazard analysis and control
4. Training
5. MSD management
6. Program evaluation
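The job-coverage rules above can be summarized as a simple screening function. This is an illustrative simplification under our own naming (the actual determination depends on the regulatory text, not this sketch):

```python
# Sectors the draft standard explicitly does not cover
EXCLUDED_SECTORS = {"agriculture", "construction", "maritime", "railroad"}


def job_is_covered(sector,
                   is_manufacturing_job=False,
                   forceful_manual_handling_is_core=False,
                   reported_msd_meets_criteria=False):
    """Rough sketch of whether the draft standard's coverage rules reach a job."""
    if sector in EXCLUDED_SECTORS:
        return False
    # A job is covered if it is a manufacturing job, a manual handling job
    # (forceful handling as a core element), or has a qualifying MSD report.
    return (is_manufacturing_job
            or forceful_manual_handling_is_core
            or reported_msd_meets_criteria)
```

For example, a general-industry assembly job is covered as a manufacturing job, while the same activities in a construction setting fall outside the standard's scope.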
According to the standard, the employer must:
1. Implement the first two elements of the ergonomics program (management leadership and employee participation, and hazard information and reporting) even if no MSD has occurred in those jobs.
2. Implement the other program elements when either of the following occurs in those jobs (unless one eliminates MSD hazards using the quick fix option
nalysis and control can be limited to that individual employee's job. In such a case, the employer should:
1. Include in the job-hazard analysis all of the employees in the problem job or those who represent the range of physical capabilities of employees in the job.
2. Ask the employees whether performing the job poses physical difficulties and, if so, which physical work activities or conditions of the job they associate with the difficulties.
3. Observe the employees performing the job to identify which of the physical work activities, workplace conditions, and ergonomic risk factors are present.
4. Evaluate the ergonomic risk factors in the job to determine the MSD hazards associated with the covered MSD. As necessary, evaluate the duration, frequency, and magnitude of employee exposure to the risk factors.
The proposed engineering controls include physical changes to a job that eliminate or materially reduce the presence of MSD hazards. Examples of engineering controls for MSD hazards include changing, modifying, or redesigning workstations, tools, facilities, equipment, materials, and processes. Administrative controls are changes in the way that work in a job is assigned or scheduled that reduce the magnitude, frequency, or duration of exposure to ergonomic risk factors. Examples of administrative controls for MSD hazards include employee rotation, job task enlargement, alternative tasks, and employer-authorized changes in work pace. Finally, it should be noted that OSHA's Final Ergonomic Program Standard took effect on January 16, 2001.
REFERENCES
Alexander, D. C., and Orr, G. B. (1992), "The Evaluation of Occupational Ergonomics Programs," in Proceedings of the Human Factors Society 36th Annual Meeting, pp. 697-701.
Andersson, G. B. J. (1985), "Permissible Loads: Biomechanical Considerations," Ergonomics, Vol. 28, No. 1, pp. 323-326.
ANSI Z-365 Draft (1995), Control of Work-Related Cumulative Trauma Disorders, Part I: Upper Extremities, April 17.
Armstrong, T. (1986), "Ergonomics and Cumulative Trauma Disorders," Hand Clinics, Vol. 2, pp. 553-565.
Armstrong, T. J., Buckle, P., Fine, L. J., Hagberg, M., Jonsson, B., Kilbom, A., Kuorinka, I., Silverstein, B. A., Sjøgaard, G., and Viikari-Juntura, E. (1993), "A Conceptual Model for Work-Related Neck and Upper-Limb Musculoskeletal Disorders," Scandinavian Journal of Work and Environmental Health, Vol. 19, pp. 73-84.
Armstrong, T. J., and Lifshitz, Y. (1987), "Evaluation and Design of Jobs for Control of Cumulative Trauma Disorders," in Ergonomic Interventions to Prevent Musculoskeletal Injuries in Industry, American Conference of Governmental Industrial Hygienists, Lewis, Chelsea, MI.
Asfour, S. S., Genaidy, A. M., Khalil, T. M., and Greco, E. C. (1984), "Physiological and Psychophysical Determination of Lifting Capacity for Low Frequency Lifting Tasks," in Trends in Ergonomics / Human Factors I, North-Holland, Cincinnati, pp. 149-153.
Ayoub, M. A. (1982), "Control of Manual Lifting Hazards: II. Job Redesign," Journal of Occupational Medicine, Vol. 24, No. 9, pp. 668-676.
Ayoub, M. M., Bethea, N. J., Deivanayagam, S., Asfour, S. S., Bakken, G. M., Liles, D., Mital, A., and Sherif, M. (1978), "Determination and Modelling of Lifting Capacity," Final Report, HEW [NIOSH] Grant No. 5R010H-000545-02, National Institute of Occupational Safety and Health, Washington, DC.
Ayoub, M. M. (1983), "Design of a Pre-Employment Screening Program," in Ergonomics of Workstation Design, T. O. Kvalseth, Ed., Butterworths, London.
Ayoub, M. M., and Mital, A. (1989), Manual Material Handling, Taylor & Francis, London.
Ayoub, M. M., Dempsey, P. G., and Karwowski, W. (1997), "Manual Materials Handling," in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 1085-1123.
Badler, N. L., Becket, W. M., and Webber, B. L. (1995), "Simulation and Analysis of Complex Human Tasks for Manufacturing," in Proceedings of SPIE: The International Society for Optical Engineering, Vol. 2596, pp. 225-233.
Battié, M. C., Bigos, S. J., Fisher, L. D., Spengler, D. M., Hansson, T. H., Nachemson, A. L., and Wortley, M. D. (1990), "The Role of Spinal Flexibility in Back Pain Complaints within Industry: A Prospective Study," Spine, Vol. 15, pp. 768-773.
Bigos, S. J., Spengler, D. M., Martin, N. A., et al. (1986), "Back Injuries in Industry: A Retrospective Study: II. Injury Factors," Spine, Vol. 11, pp. 246-251.
Damkot, D. K., Pope, M. H., Lord, J., and Frymoyer, J. W. (1984), "The Relationship Between Work History, Work Environment and Low-Back Pain in Men," Spine, Vol. 9, pp. 395-399.
Delleman, N. J., Drost, M. R., and Huson, A. (1992), "Value of Biomechanical Macromodels as Suitable Tools for the Prevention of Work-Related Low Back Problems," Clinical Biomechanics, Vol. 7, pp. 138-148.
Edwards, R. H. T. (1988), "Hypothesis of Peripheral and Central Mechanisms Underlying Occupational Muscle Pain and Injury," European Journal of Applied Physiology, Vol. 57, pp. 275-282.
Eisler, H. (1962), "Subjective Scale of Force for a Large Muscle Group," Journal of Experimental Psychology, Vol. 64, No. 3, pp. 253-257.
Farfan, H. F. (1983), "Biomechanics of the Lumbar Spine," in Managing Low Back Pain, W. H. Kirkaldy-Willis, Ed., Churchill Livingstone, New York, pp. 9-21.
Federal Register (1986), Vol. 51, No. 191, October 2, pp. 3541.
Ferguson, S. A., Marras, W. S., and Waters, T. R. (1992), "Quantification of Back Motion During Asymmetric Lifting," Ergonomics, Vol. 40, No. 7 / 8, pp. 845-859.
Fortin, C., Gilbert, R., Beuter, A., Laurent, F., Schiettekatte, J., Carrier, R., and Dechamplain, B. (1990), "SAFEWORK: A Microcomputer-Aided Workstation Design and Analysis, New Advances and Future Developments," in Computer-Aided Ergonomics, W. Karwowski, A. Genaidy, and S. S. Asfour, Eds., Taylor & Francis, London, pp. 157-180.
Frymoyer, J. W., Pope, M. H., Constanza, M. C., Rosen, J. C., Goggin, J. E., and Wilder, D. G. (1980), "Epidemiologic Studies of Low Back Pain," Spine, Vol. 5, pp. 419-423.
Frymoyer, J. W., Pope, M. H., Clements, J. H., Wilder, D. G., MacPherson, B., and Ashikaga, T. (1983), "Risk Factors in Low Back Pain: An Epidemiological Survey," Journal of Bone and Joint Surgery, Vol. 65-A, pp. 213-218.
Gagnon, M., and Smyth, G. (1990), "The Effect of Height in Lowering and Lifting Tasks: A Mechanical Work Evaluation," in Advances in Industrial Ergonomics and Safety II, B. Das, Ed., Taylor & Francis, London, pp. 669-672.
Gamberale, F., and Kilbom, A. (1988), "An Experimental Evaluation of Psychophysically Determined Maximum Acceptable Workload for Repetitive Lifting Work," in Proceedings of the 10th Congress of the International Ergonomics Association, A. S. Adams, R. R. Hall, B. J. McPhee, and M. S. Oxenburgh, Eds., Taylor & Francis, London, pp. 233-235.
GAO (1997), Worker Protection: Private Sector Ergonomics Programs Yield Positive Results, GAO/HEHS-97-163, U.S. General Accounting Office, Washington, DC.
Garg, A. (1989), "An Evaluation of the NIOSH Guidelines for Manual Lifting with Special Reference to Horizontal Distance," American Industrial Hygiene Association Journal, Vol. 50, No. 3, pp. 157-164.
Garg, A., and Badger, D. (1986), "Maximum Acceptable Weights and Maximum Voluntary Strength for Asymmetric Lifting," Ergonomics, Vol. 29, No. 7, pp. 879-892.
Garg, A., and Banaag, J. (1988), "Psychophysical and Physiological Responses to Asymmetric Lifting," in Trends in Ergonomics / Human Factors V, F. Aghazadeh, Ed., North-Holland, Amsterdam, pp. 871-877.
Garg, A., Chaffin, D. B., and Herrin, G. D. (1978), "Prediction of Metabolic Rates for Manual Materials Handling Jobs," American Industrial Hygiene Association Journal, Vol. 39, No. 8, pp. 661-675.
Garg, A., Chaffin, D. B., and Freivalds, A. (1982), "Biomechanical Stresses from Manual Load Lifting: Static vs. Dynamic Evaluation," Institute of Industrial Engineers Transactions, Vol. 14, pp. 272-281.
Genaidy, A., Karwowski, W., and Christensen, D. (1999), "Principles of Work System Performance Optimization: A Business Ergonomics Approach," Human Factors and Ergonomics in Manufacturing, Vol. 9, No. 1, pp. 105-128.
Grandjean, E. (1980), Fitting the Task to the Man, Taylor & Francis, London, pp. 41-62.
Grieve, D., and Pheasant, S. (1982), "Biomechanics," in The Body at Work, W. T. Singleton, Ed., Taylor & Francis, London, pp. 71-200.
Habes, D. J., and Putz-Anderson, V. (1985), "The NIOSH Program for Evaluating Biomechanical Hazards in the Workplace," Journal of Safety Research, Vol. 16, pp. 49-60.
Hagberg, M. (1984), "Occupational Musculoskeletal Stress and Disorders of the Neck and Shoulder: A Review of Possible Pathophysiology," International Archives of Occupational and Environmental Health, Vol. 53, pp. 269-278.
Hägg, G. (1991), "Static Work Loads and Occupational Myalgia: A New Explanation Model," in Electromyographical Kinesiology, P. Anderson, D. Hobart, and J. Danoff, Eds., Elsevier Science, New York, pp. 141-144.
Karwowski, W., Gaddie, P., Jang, R., and GeeLee, W. (1999), "A Population-Based Load Threshold Limit (LTL) for Manual Lifting Tasks Performed by Males and Females," in The Occupational Ergonomics Handbook, W. Karwowski and W. S. Marras, Eds., CRC Press, Boca Raton, FL, pp. 1063-1074.
Karwowski, W., Genaidy, A. M., and Asfour, S. S., Eds. (1990), Computer-Aided Ergonomics, Taylor & Francis, London.
Kee, D., and Karwowski, W. (2001), "Ranking Systems for Evaluation of Joint Motion Stressfulness Based on Perceived Discomforts," Ergonomics (forthcoming).
Kelsey, J. L., and Golden, A. L. (1988), "Occupational and Workplace Factors Associated with Low Back Pain," Occupational Medicine: State of the Art Reviews, Vol. 3, No. 1, pp. 7-16.
Kelsey, J. L., and Golden, A. L. (1988), "Occupational and Workplace Factors Associated with Low Back Pain," in Back Pain in Workers, R. A. Deyo, Ed., Hanley & Belfus, Philadelphia.
Kelsey, J., and Hochberg, M. (1988), "Epidemiology of Chronic Musculoskeletal Disorders," Annual Review of Public Health, Vol. 9, pp. 379-401.
Kelsey, J. L., Githens, P. B., White, A. A., Holford, R. R., Walter, S. D., O'Connor, T., Astfeld, A. M., Weil, U., Southwick, W. O., and Calogero, J. A. (1984), "An Epidemiologic Study of Lifting and Twisting on the Job and Risk for Acute Prolapsed Lumbar Intervertebral Disc," Journal of Orthopaedic Research, Vol. 2, pp. 61-66.
Keyserling, W. M. (1986), "Postural Analysis of the Trunk and Shoulder in Simulated Real Time," Ergonomics, Vol. 29, No. 4, pp. 569-583.
Keyserling, W. M. (1990), "Computer-Aided Posture Analysis of the Trunk, Neck, Shoulders and Lower Extremities," in Computer-Aided Ergonomics, W. Karwowski, A. Genaidy, and S. S. Asfour, Eds., Taylor & Francis, London, pp. 261-272.
Keyserling, W. M., Punnett, L., and Fine, L. J. (1988), "Trunk Posture and Low Back Pain: Identification and Control of Occupational Risk Factors," Applied Industrial Hygiene, Vol. 3, pp. 87-92.
Keyserling, W. M., Stetson, D. S., Silverstein, B. A., and Brouver, M. L. (1993), "A Checklist for Evaluating Risk Factors Associated with Upper Extremity Cumulative Trauma Disorders," Ergonomics, Vol. 36, No. 7, pp. 807-831.
Klaucke, D. N., Buehler, J. W., Thacker, S. B., Parrish, R. G., Trowbridge, R. L., and Berkelman, R. L. (1988), "Guidelines for Evaluating Surveillance Systems," Morbidity and Mortality Weekly Report, Vol. 37, Suppl. 5, pp. 1-18.
Kroemer, K. H. E. (1970), "Human Strength: Terminology, Measurement, and Interpretation of Data," Human Factors, Vol. 12, pp. 297-313.
Kroemer, K. H. E. (1989), "Engineering Anthropometry," Ergonomics, Vol. 32, No. 7, pp. 767-784.
Kroemer, K. H. E. (1992), "Personnel Training for Safer Material Handling," Ergonomics, Vol. 35, No. 9, pp. 1119-1134.
Kroemer, K. H. E., Kroemer, H. J., and Kroemer-Elbert, K. E. (1986), Engineering Physiology, Elsevier, Amsterdam.
Kumar, S. (1990), "Cumulative Load as a Risk Factor for Back Pain," Spine, Vol. 15, No. 12, pp. 1311-1316.
Kuorinka, I., and Forcier, L., Eds. (1995), Work Related Musculoskeletal Disorders (WMSDs): A Reference Book for Prevention, Taylor & Francis, London.
Landau, K., Brauchler, R., Brauchler, W., Landau, A., and Bobkranz, R. (1990), "Job Analysis and Work Design Using Ergonomic Databases," in Computer-Aided Ergonomics, W. Karwowski, A. Genaidy, and S. S. Asfour, Eds., Taylor & Francis, London, pp. 197-212.
Legg, S. J., and Myles, W. S. (1985), "Metabolic and Cardiovascular Cost, and Perceived Effort over an 8 Hour Day When Lifting Loads Selected by the Psychophysical Method," Ergonomics, Vol. 28, pp. 337-343.
Leino, P. (1989), "Symptoms of Stress Production and Musculoskeletal Disorders," Journal of Epidemiology and Community Health, Vol. 43, pp. 293-300.
Liles, D. H., Deivanayagam, S., Ayoub, M. M., and Mahajan, P. (1984), "A Job Severity Index for the Evaluation and Control of Lifting Injury," Human Factors, Vol. 26, pp. 683-693.
Lloyd, M. H., Gauld, S., and Soutar, C. A. (1986), "Epidemiologic Study of Back Pain in Miners and Office Workers," Spine, Vol. 11, pp. 136-140.
Lundborg, G., Dahlin, L. B., Danielsen, N., and Kanje, M. (1990), "Vibration Exposure and Nerve Fibre Damage," Journal of Hand Surgery, Vol. 15A, pp. 346-351.
Maeda, K., Hunting, W., and Grandjean, E. (1980), "Localized Fatigue in Accounting Machine Operators," Journal of Occupational Medicine, Vol. 22, pp. 810-816.
National Research Council (1998), Work-Related Musculoskeletal Disorders: A Review of the Evidence, National Academy Press, Washington, DC, website www.nap.edu/books/0309063272/html/index.html.
National Safety Council (1989), Accident Facts 1989, National Safety Council, Chicago.
Nayar, N. (1995), "Deneb/ERGO: A Simulation-Based Human Factors Tool," in Proceedings of the Winter Simulation Conference.
Nicholson, L. M., and Legg, S. J. (1986), "A Psychophysical Study of the Effects of Load and Frequency upon Selection of Workload in Repetitive Lifting," Ergonomics, Vol. 29, No. 7, pp. 903-911.
Occupational Safety and Health Administration (OSHA) (1982), Back Injuries Associated with Lifting, Bulletin 2144, U.S. Government Printing Office, Washington, DC.
Occupational Safety and Health Administration (OSHA) (2000), Final Ergonomic Program Standard: Regulatory Text, website www.osha-slc.gov/ergonomics-standard/regulatory/index.html.
Pearcy, M. J., Gill, J. M., Hindle, J., and Johnson, G. R. (1987), "Measurement of Human Back Movements in Three Dimensions by Opto-Electronic Devices," Clinical Biomechanics, Vol. 2, pp. 199-204.
Pheasant, S. (1986), Bodyspace: Anthropometry, Ergonomics and Design, Taylor & Francis, London.
Pheasant, S. (1989), "Anthropometry and the Design of Workspaces," in Evaluation of Human Work, J. R. Wilson and E. N. Corlett, Eds., Taylor & Francis, London, pp. 455-471.
Porter, J. M., Freer, M., Case, K., and Bonney, M. C. (1995), "Computer Aided Ergonomics and Workspace Design," in Evaluation of Human Work: A Practical Ergonomics Methodology, J. R. Wilson and E. N. Corlett, Eds., Taylor & Francis, London.
Pottier, M., Lille, F., Phuon, M., and Monod, H. (1969), "Etude de la contraction statique intermittente" [Study of intermittent static contraction], Le Travail Humain, Vol. 32, pp. 271-284 (in French).
Potvin, J., Norman, R. W., and McGill, S. M. (1991), "Reduction in Anterior Shear Forces on the L4/L5 Disc by the Lumbar Musculature," Clinical Biomechanics, Vol. 6, pp. 88-96.
Praemer, A., Furner, S., and Rice, D. P. (1992), Musculoskeletal Conditions in the United States, American Academy of Orthopaedic Surgeons, Park Ridge, IL.
Pritsker, A. A. B. (1986), Introduction to Simulation and SLAM II, 3rd Ed., John Wiley & Sons, New York.
Putz-Anderson, V., Ed. (1988), Cumulative Trauma Disorders: A Manual for Musculoskeletal Diseases of the Upper Limbs, Taylor & Francis, London.
Putz-Anderson, V., Ed. (1993), Cumulative Trauma Disorders: A Manual for Musculoskeletal Diseases of the Upper Limbs, Taylor & Francis, London.
Riihimäki, H. (1991), "Low-Back Pain, Its Origin and Risk Indicators," Scandinavian Journal of Work, Environment and Health, Vol. 17, pp. 81-90.
Riihimäki, H., Tola, S., Videman, T., and Hänninen, K. (1989), "Low-Back Pain and Occupation: A Cross-Sectional Questionnaire Study of Men in Machine Operating, Dynamic Physical Work and Sedentary Work," Scandinavian Journal of Work, Environment and Health, Vol. 14, pp. 204-209.
Robinette, K. M., and McConville, J. T. (1981), "An Alternative to Percentile Models," SAE Technical Paper 810217, Society of Automotive Engineers, Warrendale, PA.
Roebuck, J. A., Kroemer, K. H. E., and Thomson, W. G. (1975), Engineering Anthropometry Methods, John Wiley & Sons, New York.
Rohmert, W. (1960), "Ermittlung von Erholungspausen für statische Arbeit des Menschen" [Determination of rest allowances for static human work], Internationale Zeitschrift für Angewandte Physiologie Einschliesslich Arbeitsphysiologie, Vol. 18, pp. 123-164 (in German).
Rohmert, W. (1973), "Problems of Determination of Rest Allowances, Part 2," Applied Ergonomics, Vol. 4, pp. 158-162.
Rombach, V., and Laurig, W. (1990), "ERGON-EXPERT: A Modular Knowledge-Based Approach to Reduce Health and Safety Hazards in Manual Materials Handling Tasks," in Computer-Aided Ergonomics, W. Karwowski, A. Genaidy, and S. S. Asfour, Eds., Taylor & Francis, London, pp. 299-309.
Ryan, P. W. (1971), Cockpit Geometry Evaluation, Phase II, Vol. II, Joint Army-Navy Aircraft Instrument Research Report 7012313, Boeing, Seattle, WA.
Schaub, K., and Rohmert, W. (1990), "HEINER Helps to Improve and Evaluate Ergonomic Design," in Proceedings of the 21st International Symposium on Automotive Technology and Automation, Vol. 2, pp. 999-1016.
Waters, T. R., Putz-Anderson, V., Garg, A., and Fine, L. J. (1993), "Revised NIOSH Equation for the Design and Evaluation of Manual Lifting Tasks," Ergonomics, Vol. 36, No. 7, pp. 749-776.
Waters, T. R., Putz-Anderson, V., and Garg, A. (1994), Application Manual for the Revised NIOSH Lifting Equation, U.S. Department of Health and Human Services, Cincinnati.
Westgaard, R. H., and Björklund, R. (1987), "Generation of Muscle Tension Additional to Postural Muscle Load," Ergonomics, Vol. 30, No. 6, pp. 196-203.
Williams, M., and Lissner, H. R. (1977), Biomechanics of Human Motion, 2nd Ed., W. B. Saunders, Philadelphia.
Winter, D. A. (1979), Biomechanics of Human Movement, John Wiley & Sons, New York.
World Health Organization (WHO) (1985), Identification and Control of Work-Related Diseases, Technical Report No. 174, WHO, Geneva, pp. 7-11.
ADDITIONAL READING
Aaras, A., Westgaard, R. H., and Stranden, E. (1988), "Postural Angles as an Indicator of Postural Load and Muscular Injury in Occupational Work Situations," Ergonomics, Vol. 31, pp. 915-933.
Ayoub, M. M., Mital, A., Bakken, G. M., Asfour, S. S., and Bethea, N. J. (1980), "Development of Strength and Capacity Norms for Manual Materials Handling Activities: The State of the Art," Human Factors, Vol. 22, No. 3, pp. 271-283.
Bonney, M. C., and Case, K. (1976), "The Development of SAMMIE for Computer-Aided Workplace and Work Task Design," in Proceedings of the 6th Congress of the International Ergonomics Association, Human Factors Society, Santa Monica, CA.
Brinckmann, P., Biggemann, M., and Hilweg, D. (1988), "Fatigue Fractures of Human Lumbar Vertebrae," Clinical Biomechanics, Vol. 3, Suppl. 1.
Burdorf, A., Govaert, G., and Elders, L. (1991), "Postural Load and Back Pain of Workers in the Manufacturing of Prefabricated Elements," Ergonomics, Vol. 34, pp. 909-918.
Chaffin, D. B. (1969), "A Computerized Biomechanical Model: Development of and Use in Studying Gross Body Actions," Journal of Biomechanics, Vol. 2, pp. 429-441.
Chaffin, D. B. (1974), "Human Strength Capability and Low-Back Pain," Journal of Occupational Medicine, Vol. 16, No. 4, pp. 248-254.
Chaffin, D. B., and Park, K. S. (1973), "A Longitudinal Study of Low Back Pain as Associated with Occupational Weight Lifting Factors," American Industrial Hygiene Association Journal, Vol. 34, No. 12, pp. 513-525.
Fernandez, J. E., and Ayoub, M. M. (1988), "The Psychophysical Approach: The Valid Measure of Lifting Capacity," in Trends in Ergonomics / Human Factors V, F. Aghazadeh, Ed., North-Holland, Amsterdam, pp. 837-845.
Garg, A., and Chaffin, D. B. (1975), "A Biomechanical Computerized Simulation of Human Strength," IIE Transactions, Vol. 14, No. 4, pp. 272-281.
Genaidy, A., and Karwowski, W. (1993), "The Effects of Body Movements on Perceived Joint Discomfort Ratings in Sitting and Standing Postures," Ergonomics, Vol. 36, No. 7, pp. 785-792.
Genaidy, A. M., and Karwowski, W. (1993), "The Effects of Neutral Posture Deviations on Perceived Joint Discomfort Ratings in Sitting and Standing Postures," Ergonomics, Vol. 36, No. 7, pp. 785-792.
Genaidy, A., Al-Shedi, A., and Karwowski, W. (1994), "Postural Stress Analysis in Industry," Applied Ergonomics, Vol. 25, No. 2, pp. 77-87.
Genaidy, A., Barkawi, H., and Christensen, D., "Ranking of Static Non-neutral Postures around the Joints of the Upper Extremity and the Spine," Ergonomics, Vol. 38, No. 9, pp. 1851-1858.
Genaidy, A., Karwowski, W., Christensen, D., Vogiatzis, C., and Prins, A. (1998), "What Is Heavy?," Ergonomics, Vol. 41, No. 4, pp. 320-432.
Gescheider, G. A. (1985), Psychophysics: Method, Theory, and Application, 2nd Ed., Erlbaum, London.
Grobelny, J., "Anthropometric Data for a Driver's Workplace Design in the AutoCAD System," in Computer-Aided Ergonomics, W. Karwowski, A. Genaidy, and S. S. Asfour, Eds., Taylor & Francis, London, pp. 80-89.
Herrin, G. D., Chaffin, D. B., and Mach, R. S. (1974), "Criteria for Research on the Hazards of Manual Materials Handling," in Workshop Proceedings, Contract CDC-99-74-118, U.S. Department of Health and Human Services (NIOSH), Cincinnati.
1110
PERFORMANCE IMPROVEMENT MANAGEMENT
Sanders, M., and McCormick, E., Human Factors In Engineering Design, McGraw-Hill
, New York, 1987. Shoaf, C., Genaidy, A., Karwowski, W., Waters, T., and Christe
nsen, D., Comprehensive Manual Handling Limits for Lowering, Pushing, Pulling and
Carrying Activities, Ergonomics, Vol. 40, No. 11, 1997, pp. 11831200. Snook, S. H.,
The Costs of Back Pain in Industry, Occupational Medicine: State of the Art Reviews
, Vol. 3, JanuaryMarch, 1988, pp. 15. Stevens, S. S., Mack, J. D., and Stevens, S.
S., Growth of Sensation on Seven Continua as Measured by Force of Hand-Grip, Journa
l of Experimental Psychology, Vol. 59, 1960, pp. 6067. Taboun, S. M., and Dutta,
S. P., Energy Cost Models for Combined Lifting and Carrying Tasks, International Jou
rnal of Industrial Ergonomics, Vol. 4, No. 1, 1989, pp. 117. Webb Associates, Ant
hropometric Source Book, Vol. 1, Ch. 6, NASA Ref. 1024, 1978.
1112
PERFORMANCE IMPROVEMENT MANAGEMENT
1.
INTRODUCTION
The design of workplaces and products continues to migrate from paper to the computer, where analysis accuracy, visualization, and collaboration utilities allow designs to be realized much faster and better than ever before. As the pace of this development accelerates with the increased capabilities of the software design tools, less time is spent on physical prototyping, allowing for shortened time-to-market for new products. Ergonomists, who in the past used the physical prototypes to perform human factors analyses, are now challenged to move the analysis into the virtual domain using new tools and methods. Usability, maintainability, physical ergonomic assessments, psychological perception, and procedural training are some of the human factors issues that might benefit from analysis prior to the first physical incarnation of the design. While this represents a challenge for the ergonomists, it provides an opportunity to effect change in the designs much earlier than was typically possible in the past and to take advantage of the dramatically reduced cost of design alterations in the early design phases. Commercial pressures that leverage the cost benefits offered by complete in-tube design are driving a rapid development of the available computer technologies. Human simulation technology is no exception. Contemporary human modeling software is assimilating a variety of human modeling knowledge, including population anthropometry descriptions and physical capability models. Companies are deploying these human modeling products to allow their ergonomists and designers to populate digital representations of products and workplaces efficiently with virtual human figures and ask meaningful questions regarding the likely performance of actual people in those environments. Identification of ergonomic design problems early in the design phase allows time-consuming and expensive reworking of the manufacturing process or design to be avoided.
Computerized human modeling itself has been evolving for some time. Perhaps the first attempt to develop a computer-integrated tool for performing reach tasks was made by Vetter and Ryan for the Boeing Aircraft Company in the late 1960s. This effort was referred to as the First Man program, which later became Boeman. This software was later expanded by the USAF Aerospace Medical Research Laboratory (AMRL) Crew Systems Interface Division, which added the ability to simulate a variety of male and female anthropometric dimensions while seated in different types of aircraft, culminating in the software COMBIMAN. In the 1980s, this software was further developed at AMRL to address maintenance tasks, adding performance models of lifting, pulling, and pushing on various tools and objects placed in the hands, and became CrewChief. During this same time in Europe, a wide variety of models were developed, perhaps the most widely known being SAMMIE (System for Aiding Man-Machine Interaction Evaluation), developed by Case, Porter, and Bonney at Nottingham and Loughborough Universities in the United Kingdom. SAMMIE was conceived as a very general model for assessing reach, interference, and sight-line issues within a CAD environment. The details of these developments are described in greater depth elsewhere (e.g., Chaffin 2000; Bubb 1999; Badler 1993). Perhaps as a testament to the rapid development in this field, new human models that are integrated in modern CAD, 3D visualization, and automation simulation products are now the most popular and are seeing the most rapid development and deployment. These include Deneb Ergo, EAI Jack, Genicom Safeworks, TecMath Ramsis, and Tecnomatix RobCAD Man.
This chapter reviews the foundation of contemporary human modeling technology for physical ergonomics and presents examples of how digital humans are currently used in industry. The chapter concludes with a discussion of the current development efforts in the area of human modeling.
2.
2.1.
ve and representative database for the U.S. population. These publicly available data contain weighting information based on the most recent U.S. census (1988-1994). The census weighting data allow U.S.-representative population statistics to be computed for any population selection based on gender, race, ethnicity, and age. While both the ANSUR and NHANES data describe single-dimension measures taken between anthropometric landmarks, a new anthropometric survey has been initiated to provide a database of population 3D body shapes. The CAESAR project (Civilian American and European Surface Anthropometric Resource) will scan approximately 6000 individuals in the United States and Europe. These data are in the form of both traditional anthropometric measures and new 3D data from whole-body laser scanners that provide a highly detailed data cloud describing the shape of the subject's surface contour (Figure 1). Children- and nationality-specific anthropometric databases are also available, although these databases have not been adopted to the same degree as those previously mentioned due to their limited international availability and data restrictions (Table 1).
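The census weighting works by letting each measured subject stand in for a known number of people in the target population. A minimal sketch of a weighted percentile computed this way (the stature samples and weights below are invented for illustration, not values from any of the surveys above) is:

```python
def weighted_percentile(values, weights, pct):
    # Each subject counts in proportion to the number of people he or
    # she represents in the target population; walk the sorted values
    # until the cumulative weight share reaches the requested percentile.
    pairs = sorted(zip(values, weights))
    total = sum(w for _, w in pairs)
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative / total >= pct / 100.0:
            return value
    return pairs[-1][0]

# Hypothetical stature samples (mm) with hypothetical sampling weights:
statures = [1620, 1655, 1700, 1745, 1790, 1835, 1880]
weights = [0.5, 1.0, 2.0, 3.0, 2.0, 1.0, 0.5]
p50 = weighted_percentile(statures, weights, 50)
```

With unequal weights, the weighted median can differ noticeably from the unweighted one, which is exactly why the census weights matter for representative statistics.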
2.2.2.
Accommodation Methods
One of the advantages digital ergonomics can bring to the development process is the ability to investigate accommodation issues early in the design process. In the past, physical mockups were created and evaluated using a large subject population to arrive at accommodation metrics. This approach is both expensive and time consuming and does not lend itself to rapid evaluation of design alternatives. In the digital space, a population of figures can be used to investigate many of the same issues of clearance, visibility, reach, and comfort.

Figure 1 The CAESAR Data Are Collected with Laser Body Scanners That Produce a 3D Point Cloud of Surface Topography. These data will allow accurate measures in such applications as accommodation analysis and clothing fit.

Defining the sizes of the manikins to be used in the process is one of the first steps of these analyses. To perform an accommodation study, the user defines a percentage of the population that he or she wishes to accommodate in the design or workplace and then scales representative figures using data from applicable anthropometric databases to represent the extreme dimensions of this accommodation range. As few as one measure, often stature, may be judged to be important to the design and used in the creation of the representative figures. For other applications, such as cockpit design, multiple measures, such as head clearance, eye height, shoulder breadth, leg length, and reach length, may all affect the design simultaneously. Several methods used to investigate the accommodation status of a design are described below.

2.2.2.1. Monte Carlo Simulation  The Monte Carlo approach randomly samples subjects from an anthropometric database to create a representative sample of the population and processes these figures through the design. Recording the success or failure of the design to accommodate each
TABLE 1 Sample of Recent Anthropometric Databases Used by Contemporary Human Models

Name | N (population and ages) | Dimensions | Survey Year | Availability
ANSUR 88 | 9,000 (U.S. Army) | 132 | 1987 | Summary statistics and individual subject data
NHANES III | 33,994 (U.S. civilian, ages 2 months to 99 years) | 21 | 1988-1994 | Individual weighted subject data
CPSC Children | 4,127 (U.S. children, ages 2 weeks to 18 years) | 87 | 1975-1977 | Summary statistics and individual subject data
CAESAR 3D | 6,000 (U.S. and European) | 44 traditional and 3D surface | Present | Planned: individual 3D landmark locations, summary statistics, 3D point cloud data
HQL (Japan) | 40,000 (Japanese, ages 7-99 years) | 178 | 1992-1994 | Summary statistics and portions of individual subject data
KRISS (Korea) | 8,886 (Korean, ages 6-50 years) | 84 | 1992 (1997) | Dimensional statistics with display software
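The Monte Carlo approach described in Section 2.2.2.1 can be sketched in a few lines. The stature distribution and clearance limit below are hypothetical illustration values, not figures drawn from any of the databases in Table 1:

```python
import random

def accommodated(stature_mm, clearance_mm):
    # A sampled figure is accommodated if it fits under the clearance limit.
    return stature_mm <= clearance_mm

def monte_carlo_accommodation(n, mean_mm, sd_mm, clearance_mm, seed=1):
    # Sample n figures from a (here, normal) stature distribution,
    # process each through the design, and record the pass rate.
    rng = random.Random(seed)
    passed = sum(
        accommodated(rng.gauss(mean_mm, sd_mm), clearance_mm)
        for _ in range(n)
    )
    return passed / n

# Hypothetical stature statistics (mean 1755 mm, SD 71 mm) and a
# clearance placed at roughly the mean + 2 SD:
rate = monte_carlo_accommodation(100_000, 1755.0, 71.0, 1897.0)
```

With the clearance at about two standard deviations above the mean, the simulated accommodation rate converges toward the roughly 97.7% expected from the normal distribution.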
2.3.2.
Inverse Kinematics
Even with the substantial reduction in degrees of freedom that coupled joints bring, there are still far too many degrees of freedom remaining in a contemporary figure for rapid posturing in production use. To address this, human modelers have drawn from the robotics field the concept of inverse kinematics (IK): specifying joint kinematics based on a desired end-effector position. Inverse kinematics operates on a linked chain of segments, for example the torso, shoulder, arm, forearm, and wrist, and, given the location of the distal segment (i.e., the hand), solves for all of the joint postures along this chain based on some optimization criteria. For human models, these criteria include that the joints do not separate and that the joint angles remain within their physiological range of motion. Using inverse kinematics, the practitioner is able to grab the virtual figure's hand in the 3D visualization environment and manipulate its position in real time while the rest of the figure (i.e., torso, shoulder, arm) modifies its posture to satisfy the requested hand position. While the IK methods can be made to respect the physiologic range-of-motion limitations inherent to the joints, they tend not to have the sophistication always to select the most likely or physiologically reasonable postures. This is especially problematic when the number of joints in the joint linkage is large. If the number of degrees of freedom is too great, there is unlikely to be just one unique posture that satisfies the specified end-effector position. For specific cases, this is being addressed with empirically based posturing models, which are discussed in greater detail below. However, even with the caveat that IK sometimes selects inappropriate postural solutions, it is currently the most popular and rapid method of general postural manipulation in 3D environments.
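One simple way to realize IK on such a chain is cyclic coordinate descent (CCD), which repeatedly rotates each joint, from distal to proximal, to swing the end effector toward the target while clamping every angle to its range of motion. The planar three-segment chain, segment lengths, and joint limits below are illustrative assumptions, not values from any commercial manikin:

```python
import math

def fk(angles, lengths):
    # Forward kinematics: joint positions of a planar chain of relative angles.
    x = y = theta = 0.0
    pts = [(0.0, 0.0)]
    for a, length in zip(angles, lengths):
        theta += a
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        pts.append((x, y))
    return pts

def ccd_ik(angles, lengths, target, limits, iters=200):
    # Cyclic coordinate descent with joint-limit clamping.
    angles = list(angles)
    for _ in range(iters):
        for i in reversed(range(len(angles))):
            pts = fk(angles, lengths)
            (jx, jy), (ex, ey) = pts[i], pts[-1]
            # Rotation of joint i that swings the end effector toward the target
            delta = (math.atan2(target[1] - jy, target[0] - jx)
                     - math.atan2(ey - jy, ex - jx))
            delta = (delta + math.pi) % (2 * math.pi) - math.pi
            lo, hi = limits[i]
            angles[i] = max(lo, min(hi, angles[i] + delta))  # range of motion
    return angles

# Illustrative arm chain: upper arm, forearm, hand (metres), with
# shoulder, elbow, and wrist limits in radians (elbow cannot hyperextend).
lengths = [0.30, 0.25, 0.10]
limits = [(-2.0, 2.0), (0.0, 2.5), (-1.0, 1.0)]
solution = ccd_ik([0.3, 0.6, 0.0], lengths, (0.40, 0.30), limits)
```

Note that this solver finds *a* posture satisfying the hand position, not necessarily the most physiologically likely one, which is exactly the limitation described above.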
2.4.
Motion / Animation
While static posturing is often sufficient to analyze many ergonomic issues, such as reach, vision, clearance, and joint loading, figure motion in the form of an animation is often important. Examples include simulated training material, managerial presentations, and analyses that depend on observations of a person performing an entire task cycle, for example when assessing the efficiency of a workplace layout. Realistically controlling figure motion is without question one of the most challenging aspects of human modeling. Humans are capable of an almost infinite number of different movements to accomplish the same task. Indeed, people may use several postural approaches during a single task, for example to get better leverage on a tool or gain a different vantage point for a complex assembly. This incredible postural flexibility makes it very difficult for human modeling software to predict which motions a worker will use to perform a task. Most current animation systems circumvent this dilemma by requiring the user to specify the key postures of the figure during the task. The software then transitions between these postures, driving the joint angles to change over time so that the motion conforms to the correct timing. A variety of mechanisms are used to perform the posture transitions, from predefined motion rules to inverse kinematics. Depending on the system, the level of control given to the user to define and edit the postures also varies, with some products making more assumptions than others. While built-in rules offer utility to the novice user, the inflexibility imposed by the system automatically selecting task postures can be restrictive and a source of frustration to the advanced user. In addition, the level of fidelity required in the motion varies greatly depending on the application. For applications such as the validation of a factory layout or animation of a procedure for training or communication purposes, a human motion simulation that simply looks reasonable may be sufficient. However, if sophisticated biomechanical analyses are to be run on the simulated motion, it may be necessary to generate motions that are not only visually reasonable but also obey physiologic rules. These include accurate joint velocities and accelerations, realistic positioning of the center of mass relative to the feet, and accurate specification of externally applied forces.
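The key-posture transition scheme can be illustrated with a minimal interpolator. The joint names, key postures, and cosine easing below are assumptions for illustration; commercial systems apply considerably richer motion rules:

```python
import math

def ease(t):
    # Cosine ease-in/ease-out: joint velocity is zero at each key posture,
    # which avoids the abrupt starts and stops of plain linear blending.
    return 0.5 - 0.5 * math.cos(math.pi * t)

def interpolate_posture(key_a, key_b, t):
    # Blend two key postures (dicts of joint name -> angle in degrees)
    # at normalized time t in [0, 1].
    s = ease(t)
    return {joint: (1 - s) * key_a[joint] + s * key_b[joint] for joint in key_a}

reach_start = {"shoulder_flex": 10.0, "elbow_flex": 80.0, "torso_flex": 0.0}
reach_end = {"shoulder_flex": 70.0, "elbow_flex": 20.0, "torso_flex": 15.0}

# Eleven frames spanning the transition between the two key postures
frames = [interpolate_posture(reach_start, reach_end, i / 10) for i in range(11)]
```

A per-joint angle trajectory like this is what the animation layer hands to downstream analyses; note that joint velocities obtained by differentiating it are only as realistic as the easing rule assumed.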
3.
HUMAN PERFORMANCE MODELS
Human capability analysis is one of the primary motivations for simulation. Commercial modelers have implemented performance models from the academic literature in their software, taking advantage of the human figure sophistication and real-time visualization technologies. A review of the commonly available performance models reveals that they were largely developed by independent research groups. This development diversity is reflected in the variety of inputs that these tools require in order to provide an ergonomic assessment. This represents a challenge to the modelers as they work to seamlessly integrate these assessment tools into their animation and simulation offerings. Some tools lend themselves well to integration, such as those that can capture all required information from posture, figure, and load mass characteristics. Typically these are the tools founded on biomechanical models from which the inputs are derived. Others, which were originally intended to be used with a checklist approach, are more challenging in that they often require complex questions to be answered that are straightforward for a trained ergonomist but quite complex for a computer simulation system (Table 2). Most often, simulation engineers expect to ask human performance questions of their simulation without having to redescribe the simulation in the language of the tool. Thus, ideally, the tools are implemented such that they can derive all the necessary information from the simulation directly. Less ideally, the engineer performing the assessment must explicitly identify details of the simulation using tool-specific descriptors.
3.1.
Strength
Strength assessments are a typical human performance analysis, regardless of whether the application involves manual handling tasks, serviceability investigations, or product operation. Questions of strength can be posed in a variety of ways. Designers may want to know the maximum operating force for a lever, dial, or wheel such that their target demographic will have the strength to operate it. Or the engineer may create a job design and ask what percentage of the population would be expected to have the strength to perform the required tasks of the job. Strength data might also be used to posture virtual human figures by using an algorithm that optimally adjusts the individual joints of the manikin to produce the required forces for a task most effectively. A large amount of strength data has been collected over the past quarter century. Most of these data have been in the form of maximal voluntary exertions (MVEs) of large muscle groups. Subjects are placed in special strength-testing devices (e.g., strength chairs) to isolate individual muscle groups, and standard methods controlling for repeatability and duration of effort are then used to capture the
TABLE 2 Partial List of Performance Tools Available in High-End Human Simulation Tools^a

Performance Model | Data Source | Input Parameters | Integration Issues
NIOSH lifting equation | Waters et al. 1993 | Posture and lift begin and end, object weight, hand coupling | Must determine start and end of lift; must identify hand coupling
Low-back injury risk assessment | See Chaffin et al. 1999 | Joint torques, postures | Suitable
Strength assessment | University of Michigan static strength equations | Body posture, hand loads | Suitable
 | Burandt 1978 and Schultetus 1987 | Body posture, hand loads | Table lookup
 | Ciriello and Snook 1991 | Task description, hand coupling, gender | Table lookup
 | CrewChief | Gender, body size, posture, task condition | Suitable
Fatigue analysis | Rohmert 1973a, b; Laurig 1973 | Joint torques, strength equations | Suitable
Metabolic energy expenditure | Garg et al. 1978 | Task descriptions, gender, load description | Must identify the type of task motion (i.e., lift, carry, arm work, etc.)
Rapid upper limb assessment | McAtamney and Corlett 1993 | Posture assessment, muscle use, force description | Must identify muscle use and force descriptions
Ovako working posture | Karhu et al. 1977 | Posture assessment | Suitable
Comfort | Variety of sources, including Dreyfuss 1993; Rebiffe 1966; Krist 1994 | Posture assessment | Suitable

^a The tools require different types of input that often cannot be accurately deduced from an animation sequence, requiring tool-specific user input.
strength levels accurately. Strength can also be assessed while taking into account a subject's perception of the effort. These data, in which the subjects' impression of the load strain is included, are called psychophysical strength data. They differ from the maximal voluntary exertions in that they are more task specific. Subjects are asked to identify the load amount they would be comfortable working with over the duration of a work shift. Typically, these are laboratory studies in which mockups of tasks very close to the actual work conditions are created and subjects are given an object to manipulate to which weight can be added. The worker has no knowledge of the weight amount (bags of lead shot in false bottoms are often used), and experimental techniques are employed to converge on the weight that the subject feels comfortable manipulating over the course of a workday. The data of Ciriello and Snook (1991) are of this nature. Finally, a few dynamic strength data have been collected. These data are complex to collect and for this reason are also the most scarce. Specific dynamic strength-testing equipment is required to control for the many variables that affect dynamic strength, including the movement velocity and posture. As a consequence of the data-collection limitations, these data are also quite restricted in terms of the range of conditions in which they can be applied, such as the prediction of dynamic strength capability for ratchet wrench push and pull operations (Pandya et al. 1992).
Lately, the rise of cumulative trauma injuries of the lower arm, wrist, and hand has created a need for strength data specifically pertaining to the hand and fingers and for estimates of hand fatigue. An extensive amount of data is available on grip strengths in various grip postures, but these data, because they do not adequately describe the distribution of exertion loads on the individual digits, do not lend themselves well to the estimation of hand fatigue issues. This problem is compounded by the challenge of accurately posturing the hands in digital models. There are many bones and joints that allow the complex movement of the fingers, most of which are included in contemporary human models. For example, the Jack human model has a total of 69 segments and 135 DOF, of which 32 segments and 40 DOF are in the hands alone. While a solution to the manipulation of these many degrees of freedom is presented in the section describing motion-tracking technologies, models that are able to analyze the postures and gripping conditions are still needed before hand fatigue issues can be addressed.
Whole-body strength data in contemporary human models are available in a range of forms, from simple data lookup tables to statistical equations that are used in conjunction with biomechanical models to derive a measure of population strength capability. In the United States, perhaps the most widely used strength data are from the University of Michigan Center for Ergonomics (3DSSPP) and Liberty Mutual Insurance Co. (Ciriello and Snook 1991). Within the defense industry, the CrewChief strength data are also popular because the modeled strengths were obtained from military-related maintenance tasks. In Europe, particularly Germany, the data of Burandt and Schultetus are often used. As mentioned previously, a few of these data were obtained without the intent to incorporate them into human models. Instead, the data are presented in tables indexed by such factors as loading condition and gender. Those data that were collected and described with a focus toward eventual human model inclusion tend to be formulated such that all relevant information needed for a strength assessment can be deduced from the human model mass, loading, and posture information. These strength models are now very attractive to the human modeling community because they afford the real-time assessment of strength issues during a simulation without the user having to identify data-specific parameters or conditions (Table 2).
As discussed above, the availability of dynamic strength data is limited. The narrow scope of applications to which these data can be applied restricts their attractiveness to human modelers and the user community. An interesting associated note regarding these dynamic data and human models is that even if these data were more complete, the difficulty of accurately determining movement velocities from simulations would affect their use. Unless the virtual human motions are defined via motion-capture technology, the designer's guess of the movement speeds is fraught with error. Even under conditions in which actual motion capture data are used to animate the virtual figures, the velocities are derived by differentiation of the position information, which can result in noisy and unreliable input to the dynamic strength predictors. However, because people undeniably move during work and dynamic strength capability can differ greatly from static, this is clearly an area that will likely see research and technological attention in the near future.
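As a sketch of how a posture-driven strength model can be queried in real time, the fragment below computes a static elbow moment for a hand-held load and the share of a population able to produce it, assuming a normal strength distribution. The load, forearm length, and strength statistics are hypothetical illustration values, not data from the sources named above:

```python
import math
from statistics import NormalDist

def elbow_torque_nm(load_kg, forearm_m, elbow_deg):
    # Static elbow moment from a hand-held load, with the forearm
    # elbow_deg away from horizontal (0 degrees is the worst case).
    return load_kg * 9.81 * forearm_m * math.cos(math.radians(elbow_deg))

def percent_capable(required_nm, strength_mean_nm, strength_sd_nm):
    # Fraction of the population whose strength exceeds the requirement,
    # assuming population strength is normally distributed.
    return 1.0 - NormalDist(strength_mean_nm, strength_sd_nm).cdf(required_nm)

# An 8 kg load held at a horizontal forearm of 0.33 m ...
required = elbow_torque_nm(load_kg=8.0, forearm_m=0.33, elbow_deg=0.0)
# ... against a hypothetical elbow-flexion strength of mean 45 N·m, SD 15 N·m:
share = percent_capable(required, 45.0, 15.0)
```

Because all inputs (posture, link length, load) are available directly from the manikin and the simulation, a formulation like this needs no tool-specific user input, which is exactly the integration property described above.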
3.2.
Fatigue / Metabolic Energy Requirements
Once a simulation of a virtual human performing a task has been created, questions regarding the fatigue of the worker are commonplace. Can the worker be expected to perform at this cycle rate, or do we have to decrease the line rate or otherwise reengineer the task to avoid worker fatigue? Research suggests that whole-body fatigue increases the risk of musculoskeletal injury through a premature decrease in strength. Unfortunately, the available empirical data are largely inadequate to predict a worker's fatigue level accurately. The lack of data can be explained by the large number of variables that affect fatigue, including exertion level, dynamism of the exertion, muscle temperature, previous exertion levels, the muscle groups involved, and individual conditioning. Nevertheless,
3.3.
Low-Back Injury Risk
Low-back injury is estimated to cost U.S. industry tens of billions of dollars annually through compensation claims, lost workdays, reduced productivity, and retraining needs (NIOSH 1997; Cats-Baril and Frymoyer 1991; Frymoyer et al. 1983). Approximately 33% of all workers' compensation costs are for musculoskeletal disorders. Experience has shown that these injuries can be avoided with the proper ergonomic intervention. Available biomechanical models can be used for job analysis either proactively, during the design phase, or reactively, in response to injury incidence, to help identify the injurious situations. The most common types of injury-assessment analyses performed using human models include low-back compression force analysis and strength analysis. Low-back pain has been well researched over the past 20 years, including epidemiological studies that have identified spinal compression force as one of the significant predictors of low-back injury. In response, sophisticated biomechanical models have been developed to estimate this compression force accurately, taking into account not only the weight of the object and the body segments but also the internal forces generated by the musculature and connective tissues as they balance the external loads (e.g., Nussbaum et al. 1997; Raschke et al. 1996; van Dieën 1997). These internal contributions to the spinal forces can be an order of magnitude larger than the applied loads. NIOSH has recommended guidelines against which the predicted compression forces can be compared and job-design decisions can be made.
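A deliberately simplified single-equivalent-muscle version of such a model illustrates why the internal contributions dominate: the back extensor musculature acts on a very short moment arm (roughly 5 cm is a common modeling convention), so modest external moments generate large compressive forces. The load and segment values below are hypothetical; the 3400 N figure is the widely cited NIOSH back-compression action limit:

```python
G = 9.81  # gravitational acceleration, m/s^2

def l5s1_compression_n(load_kg, load_arm_m, torso_kg, torso_arm_m,
                       muscle_arm_m=0.05):
    # Single-equivalent-muscle sketch: the erector spinae, acting on a
    # short moment arm (~5 cm assumed), must balance the moments of the
    # hand load and upper-body weight about L5/S1. Its tension compresses
    # the disc along with the supported weight (upright posture assumed).
    moment = load_kg * G * load_arm_m + torso_kg * G * torso_arm_m
    muscle_force = moment / muscle_arm_m
    return muscle_force + (load_kg + torso_kg) * G

NIOSH_ACTION_LIMIT_N = 3400.0  # back-compression design limit

# Hypothetical lift: 20 kg load held 0.40 m anterior to L5/S1, with
# 35 kg of upper-body mass centered 0.20 m anterior to the joint.
compression = l5s1_compression_n(load_kg=20.0, load_arm_m=0.40,
                                 torso_kg=35.0, torso_arm_m=0.20)
```

Here a 20 kg load produces a predicted compression of roughly 3.5 kN, already above the 3400 N benchmark, even though the external load itself weighs only about 200 N; the full models cited above refine this with multiple muscles and connective tissue, not replace the basic mechanism.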
3.4.
Comfort
Assessment of worker comfort using digital models can be based on both posture and performance model analysis. However, since comfort is influenced by a wide variety of interacting factors, these tools are largely insufficient to quantify the perception of comfort with accuracy. Most comfort studies performed to date have been centered around a specific task, such as VDT operation or vehicle driving (Rebiffe 1966; Grandjean 1980; Porter and Gyi 1998; Krist 1994; Dreyfuss 1993). Within the boundaries of these tasks, subjects are observed in different postures and asked to report on their comfort via a questionnaire. The joint angles are measured and correlated with the comfort rating to arrive at a postural comfort metric. Because these data are mostly collected under specific, often seated, task conditions, some caution is required in applying them to the analysis of comfort in standing postures such as materials-handling operations. In addition to the posture-based comfort assessment, a variety of the performance tools can be used to help with the assessment of comfort, including strength capability, fatigue, and posture duration information. Certainly the multifactorial nature of the comfort assessment makes it challenging, and perhaps for this reason it is seldom used in the analysis of physical tasks.
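A posture-based comfort metric of this kind reduces to scoring joint-angle deviations from preferred ranges. The ranges below are hypothetical placeholders, not values from the studies cited above:

```python
# Hypothetical "comfortable" joint-angle ranges in degrees, loosely in the
# spirit of published seated-comfort tables; real tools use task-specific data.
COMFORT_RANGES = {
    "neck_flex": (0.0, 15.0),
    "shoulder_abd": (0.0, 20.0),
    "elbow_flex": (80.0, 120.0),
    "trunk_flex": (0.0, 10.0),
}

def discomfort_score(posture):
    # Sum each joint's deviation (degrees) outside its preferred range;
    # a score of 0 means every joint lies inside its comfort band.
    total = 0.0
    for joint, angle in posture.items():
        lo, hi = COMFORT_RANGES[joint]
        if angle < lo:
            total += lo - angle
        elif angle > hi:
            total += angle - hi
    return total

relaxed = {"neck_flex": 5.0, "shoulder_abd": 10.0,
           "elbow_flex": 100.0, "trunk_flex": 5.0}
awkward = {"neck_flex": 35.0, "shoulder_abd": 60.0,
           "elbow_flex": 150.0, "trunk_flex": 30.0}
```

A single deviation score like this captures only the postural component; as the text notes, a credible comfort assessment would also need to weigh in strength, fatigue, and posture-duration factors.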
3.5.
Motion Timing
A typical simulation question regards the estimated time it will take a person to perform a task. Digital human models can draw on the wealth of predetermined time data available. Motion-timing data are collected in studies where elemental motions (walk, turn, reach, grasp, etc.) are observed being performed by skilled operators in the workplace and timed using a stopwatch. The motion time data are then published in an easily indexed form with associated movement codes. The best known of these is the methods-time measurement (MTM-1) system published by the MTM Association. This system has the advantage of a large number of defined elemental motions, allowing for a precise partitioning of the work motions within a job task and a subsequently accurate assessment of the movement time. One drawback of this high resolution is that considerable overhead is required to break the motion into the elemental movements. To address this, the MTM Association as well as other groups have published grosser movement times, which combine several elemental movements into one. Several of the human modeling solutions now provide simulation solutions that can define movement duration with input from these movement time systems.
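The lookup-and-sum structure of a predetermined time system is easy to sketch. The element codes and TMU values below are invented placeholders rather than actual MTM-1 table entries; the TMU-to-seconds conversion (1 TMU = 0.00001 hour = 0.036 sec) is the standard one:

```python
TMU_SECONDS = 0.036  # 1 TMU = 0.00001 hour = 0.036 seconds

# Hypothetical elemental times in TMU, in the spirit of a predetermined
# time system; real MTM-1 values are indexed by distance, case, and weight.
ELEMENT_TMU = {
    "REACH_40CM": 12.9,
    "GRASP_EASY": 2.0,
    "MOVE_40CM": 15.2,
    "POSITION": 16.2,
    "RELEASE": 2.0,
}

def cycle_time_seconds(elements):
    # Look up each coded elemental motion and convert the TMU total to seconds.
    return sum(ELEMENT_TMU[code] for code in elements) * TMU_SECONDS

# One pick-and-place cycle partitioned into coded elemental motions:
task = ["REACH_40CM", "GRASP_EASY", "MOVE_40CM", "POSITION", "RELEASE"]
duration = cycle_time_seconds(task)
```

A human modeling system wired to such a table can both report the cycle time and use the per-element durations to pace the animation itself.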
4.
ERGONOMIC ANALYSIS IN DIGITAL ENVIRONMENTS
The large cost of worker injury, in both social and economic terms, has motivated considerable research in the development of models that predict potentially injurious situations in the workplace. According to the Bureau of Labor Statistics (1999), 4 out of 10 injuries and illnesses resulting in time away from work in 1997 were sprains or strains. In the following sections, the key steps in a huma

osed to the human modeling publishers is to determine whether the part can actually be held, and extracted, with sufficient clearance for the part, fingers, and arm. Depending on the complexity of the environment, this is an incredibly difficult problem to solve without user input, and to date no solution is available that finds a solution in a reasonable amount of time. To address this, human models can be used in conjunction with immersive technologies in which the design engineer moves the virtual part in the environment with an avatar (virtual human) representing his or her arm and hand in the scene (see Section 5). Collision-detection capabilities of the human modeling software are used to identify whether a free path can be found. This technology is now being evaluated to identify serviceability issues prior to the first physical build, and also to provide task timing estimates (cost) of performing a particular service operation (Figure 2).
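Production systems test the full geometry of the manikin and parts, but the underlying clearance check can be illustrated with a toy axis-aligned bounding-box test, where a positive clearance value inflates the test so that "too close" also counts as a failure. The box dimensions below are arbitrary illustration values:

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned bounding box: min and max corners in metres.
    lo: tuple
    hi: tuple

def overlaps(a, b, clearance=0.0):
    # Boxes collide if their extents intersect on every axis; a positive
    # clearance inflates the test so near-misses are also flagged.
    return all(
        a.lo[i] - clearance < b.hi[i] and b.lo[i] - clearance < a.hi[i]
        for i in range(3)
    )

# A hand-plus-part volume passing beside an equipment housing:
hand_part = Box(lo=(0.00, 0.00, 0.00), hi=(0.10, 0.08, 0.25))
housing = Box(lo=(0.12, 0.00, 0.00), hi=(0.40, 0.30, 0.30))
```

Here the volumes do not touch, but they fail a 5 cm clearance requirement; a free-path search is essentially this test repeated along every candidate extraction trajectory, which is why user guidance via the immersive avatar is so valuable.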
4.2.
Product Design
The availability of human modeling technology during the product-design phase expands the range of analyses that can be performed prior to construction of a physical prototype. In the past, SAE recommended practices, or J-standards, were among the limited tools available for benchmarking and design. These tools, derived from empirical studies of people in vehicles, provide population response models that describe such functional information as reach, eye location, and head clearance. However, these data are presented as statistical summaries of the population response, which do not maintain information on the response of any particular individual. The SAE eye-ellipse zone, for example, provides an ellipsoid that defines a region where the eye locations of a specific portion of the population can be expected. The specific location where a small or tall person's eyes might fall within this distribution is not defined (Figure 3). For this reason, when the behavior of a specifically sized individual or group is required, such as when a design is targeted to a specific demographic, human modeling tools can be used to answer these questions.

Figure 2 Serviceability Analysis of a Design Can Be Performed Prior to a Prototype Build. Here the Jack human figure is used to analyze the maintenance access to an electrical box inside the aircraft nose cone. Eye views and collision detection can help the designer during the evaluation process. (Courtesy EMBRAER)
4.2.1.
Accommodation
Once a design proposal is in place, accommodation questions can be posed. The process generally mirrors that for workplace analysis, with a few modifications.

4.2.2.

Definition of the Test Population Anthropometry

Most often, the accommodation needs for product design are more involved than during manufacturing ergonomic analysis because more anthropometric dimensions typically need to be taken into account. For example, the product design may have to accommodate individuals with a variety of sitting eye heights, shoulder breadths, and arm lengths. As mentioned in the sections describing anthropometric methods, statistical methods such as factor analysis (principal components) can be used to select a family of figures, or boundary manikins, that will adequately test the range of these multiple dimensions.

4.2.2.1. Figure Posturing  Posturing a figure within the digital environment can impact the design analysis dramatically. As evidence of the importance of this issue, various posture-prediction methodologies have been developed in different industries. Pioneering work at Boeing led to a posture-prediction method for the aerospace industry (Ryan and Springer 1969). In the late 1980s, a consortium of German automotive manufacturers and seat suppliers sponsored the development of driver posture-prediction methodologies for the RAMSIS CAD manikin (Seidl 1993). Most recently, a global automotive industrial consortium sponsored new and more comprehensive methodologies to predict the postures of drivers and passengers through the ASPECT program (Reed 1998). These latest methods have been made available to modelers for inclusion in their software, allowing for sophisticated accommodation studies in automotive environments. Data for posture prediction in heavy truck and earth-moving equipment environments are still needed.

The boundary manikins are postured in the environment and tested for clearance, reach, vision, and comfort issues. Measurements from these boundary manikins can be used to establish design zones with which product design decisions can be made. A common technique for considering reachability, for example, is to generate zones representing the space where the boundary manikins
Figure 4 The Anthropometric Landmark Data from Multiple Subjects Is Displayed in
the Form of Data Clouds. The simultaneous view of the data allows for rapid ass
essment of control location and adjustment questions. (Courtesy Dr. Jerry Duncan
, John Deere Corporation)
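The boundary-manikin selection described in Section 4.2.2 can be sketched numerically. The following is a minimal illustration, not the method used by any particular commercial tool: the population means, covariances, and the 1.645-SD step along each principal axis (roughly a 90% accommodation target per axis) are invented values for the sketch, not survey data.

```python
# Illustrative sketch: principal components on correlated anthropometric
# dimensions, used to pick "boundary manikin" cases. All numbers are
# hypothetical, not drawn from any anthropometric survey.
import numpy as np

rng = np.random.default_rng(0)
# Simulated population: sitting eye height, shoulder breadth, arm length (mm)
mean = np.array([790.0, 460.0, 740.0])
cov = np.array([[900.0, 300.0, 400.0],
                [300.0, 400.0, 250.0],
                [400.0, 250.0, 800.0]])
pop = rng.multivariate_normal(mean, cov, size=2000)

# Principal components of the standardized dimensions
z = (pop - pop.mean(axis=0)) / pop.std(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(z.T))

# Boundary manikins: step out 1.645 SD along each principal axis in both
# directions, then map back from standardized space to millimetres.
manikins = []
for val, vec in zip(eigval, eigvec.T):
    for sign in (+1, -1):
        z_pt = sign * 1.645 * np.sqrt(val) * vec
        manikins.append(pop.mean(axis=0) + z_pt * pop.std(axis=0))

print(len(manikins))  # 6 boundary manikins for 3 dimensions
```

Each manikin is then postured and tested for clearance, reach, and vision, as the text describes; real applications use measured survey data and often add the extreme corners of the principal-component space as well.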
5.
IMMERSIVE VIRTUAL REALITY
Many of the predictive technologies available for human modeling currently do not adequately answer the questions designers pose to their human modeling solutions. This limitation is especially pronounced in the areas of natural complex motion simulation and cognitive perception. As mentioned in previous sections, a designer might ask how a person will move to perform an operation, or ask if there is sufficient clearance for a person to grasp a part within the confines of surrounding parts. Situations that require nontypical, complex motions currently cannot be answered adequately with the movement prediction algorithms available. Only through the use of immersive virtual reality technology that allows the mapping of a designer's movements to an avatar in the virtual scene can these complex movement situations be adequately and efficiently analyzed. Similarly, cognitive models providing subjective perceptions of an environment, such as feelings of spaciousness, control, and safety, are not currently available, yet a designer looking through the eyes of a digital human can assess these emotions of the virtual environment under design. For these reasons, immersive virtual reality (VR) is increasingly being used in both design and manufacturing applications. Early in a design cycle, when only CAD models are available, VR can allow the design to be experienced by designers, managers, or even potential users. Such application allows problems to be identified earlier in the design cycle and can reduce the need for physical prototypes. Immersive VR usually includes a combination of motion tracking and stereo display to give the user the impression of being immersed in a 3D computer environment. Auditory and, increasingly, haptic technology are also available to add realism to the user's perspective. Virtual reality does not necessarily require a digital human model. Simply tracking a subject's head motion is sufficient to allow the stereo view to reflect the subject's view accurately and thus provide the user with a sense of being present in the computerized world. However, the addition of a full human model, tracking the subject's body and limb movements in real time, allows for additional realism because the user can see a representation of themselves in the scene. Additional analysis is also possible with full-body motion tracking. For example, collisions between limbs and the objects in the scene can be detected so that reach and fit can be better assessed. This capability is especially useful in design-for-maintainability or design-for-assembly applications. Another area where full-body tracking can be useful is for a designer to gain experience in interacting with the design from the perspective of a very short or very tall person. By scaling the environment in proportion to the increase or decrease in anthropometric size he or she wishes to experience, the designer can evaluate such issues as clearance, visibility, and reachability of extreme-sized individuals without actually having to recruit a subject pool of these people. Real-time application of the tracked motions to the virtual human also gives observers a realistic third-person view of the human motion in relation to the design geometry.
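The environment-scaling idea above reduces to a single ratio. This is a minimal sketch with invented stature and shelf-height values; real evaluations would scale the full CAD scene, not one dimension:

```python
# Sketch (all numbers hypothetical): scale the virtual environment so a
# designer of a given stature experiences it as a smaller target user would.
designer_stature_mm = 1755
target_stature_mm = 1510      # e.g., a small-stature user to be evaluated

scale = designer_stature_mm / target_stature_mm
# Enlarging the environment by `scale` makes reach and clearance appear to
# the designer roughly as they would to the smaller user.
shelf_height_mm = 1400
scaled_shelf_mm = shelf_height_mm * scale
print(round(scale, 3), round(scaled_shelf_mm, 1))  # → 1.162 1627.2
```

A shelf the designer could reach comfortably at 1400 mm thus presents itself at about 1627 mm, exposing reachability problems the smaller user would face.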
segments. Although multiple markers are required to obtain the same position and orientation information as one magnetic sensor, these markers are typically passive (simply balls covered with reflective material) and so do not encumber the motion of the subject as dramatically as the wires of magnetic systems. The downside of optical motion-tracking technology is that every marker must be seen by at least two (and preferably more) cameras. Placement of cameras to meet this requirement can be a challenge, especially in enclosed spaces such as a vehicle cab. Examples of commercially available magnetic systems include the Ascension MotionStar (www.ascension-tech.com) and Polhemus FastTrak (www.polhemus.com). Examples of optical systems include those sold by Vicon Motion Systems (www.vicon.com), Qualysis AB (www.qualysis.com), and Motion Analysis Corp. (www.motionanalysis.com).
6.
HUMAN SIMULATION CHALLENGES
As human modeling becomes an integral part of the design process, the need for visual realism and analysis sophistication also increases. For better or worse, the visual appearance of human figures plays an important role in the acceptance of the technology and the perceived confidence in the results. Efforts in several areas focus on the increased realism of the human skin form. For performance reasons, current commercial human models are skinned using polygonal segment representations that are either completely static or pseudostatic. The figures are composed of individual
segments, such as feet, lower and upper legs, pelvis, and torso. The segments ar
e drawn as a collection of static polygons arranged to give the segment its anth
ropomorphic shape. Prudent selection of the shape at the ends of the segments al
lows joints to travel through the physiological range of motion without the crea
tion of gaps. Pseudostatic skinning solutions stitch polygons between the nodes of a
djacent segments in real time to avoid the skin breaking apart at the joints. Th
ese solutions can be made to look very realistic and are adequate for most ergon
omic assessments and presentations. However, they do not model the natural tissu
e deformation that occurs at the joints throughout the range of motion. This is
visually most noticeable at complex joints, such as the shoulder joint, or quant
itatively at joints to which measurements are taken, such as the popliteal regio
n of the knee. To better model these areas, a variety of methods have been descr
ibed in the literature that deform the surface polygons according to parametric
descriptions or underlying muscle deformation models (e.g., Scheepers et al. 199
7). However, these methods have generally not been commercially adopted because
they are computationally expensive and mostly unnecessary from the ergonomic ana
lysis standpoint. Nevertheless, as the computer hardware capability increases an
d the availability of highly detailed whole-body-surface scans elevates the expe
cted level of visual realism, these deformation techniques will become more prev
alent.
6.1.
Performance Models
6.1.1.
Performance Factors
Performance models used in current commercial models are largely an amalgamation of models and data available in the ergonomics and human factors literature. As mentioned in the review of the performance models, the presentation of most of these research findings was originally not intended for integration into real-time simulation environments. The studies from which these data were derived also did not address some of the more contemporary ergonomic issues, such as the performance limitations of the elderly, cumulative trauma, shoulder injury, and movement modeling. The aging population is elevating the need for more specific performance models for this demographic. Questions of functional limitations resulting from decreased strength, reaction time, and joint range of motion all affect the design of both products and workplaces. In the automotive design space, ingress/egress capability is an example of a task that may be influenced by these limitations. In the workplace, questions of strength and endurance need to be addressed. Cumulative trauma prediction presents a particular academic challenge because the etiology of the injury is largely unknown. Biomechanical factors clearly play a role but to date do not provide sufficient predictive power upon which to base a risk-assessment tool. At best, conditions associated with an increased likelihood of cumulative trauma can be flagged. Similarly, shoulder fatigue and injury prediction is not developed to the point where models incorporated into modeling software can accurately predict the injurious conditions. The significant social and economic cost of low-back injury has motivated considerable research in low-back modeling over the past 20 years. The research findings have resulted in sophisticated models and quantitative design guidelines and have allowed manufacturing organizations to reduce dramatically the incidence rates of low-back pain. Shoulder injury and cumulative trauma now need the same level of investment to mature the available data in these areas.
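The flagging (rather than predicting) approach described above can be sketched as a simple rule check. The condition names and thresholds below are hypothetical placeholders, not validated risk criteria from the literature:

```python
# Hypothetical sketch: flag conditions associated with elevated
# cumulative-trauma likelihood. Thresholds are invented for illustration
# and are NOT validated ergonomic limits.
def flag_cumulative_trauma_risk(task):
    flags = []
    if task["repetitions_per_min"] > 10:
        flags.append("high repetition")
    if task["wrist_deviation_deg"] > 15:
        flags.append("non-neutral wrist posture")
    if task["grip_force_n"] > 45:
        flags.append("high grip force")
    return flags

task = {"repetitions_per_min": 14, "wrist_deviation_deg": 20, "grip_force_n": 30}
print(flag_cumulative_trauma_risk(task))
# → ['high repetition', 'non-neutral wrist posture']
```

The point of such a scheme is exactly what the text states: conditions can be flagged for review, but the flags carry no claim of injury prediction.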
6.1.2.
Variation Modeling
els. The large amount of ergonomic and anthropometric knowledge integrated into these solutions makes them efficient tools to answer a wide variety of human factors questions about designs. At the same time, the global nature of these tools is serving to consolidate and expose research findings from around the world, steering academic research directions and focusing the presentation of results for model inclusion. While there are many areas that can be explored using the current offering of modeling solutions, many interesting challenges remain as we work to make virtual humans as lifelike as technology and our knowledge of humans allow.
REFERENCES
Badler, N. I., Phillips, C. B., and Webber, B. L. (1993), Simulating Humans: Computer Graphics Animation and Control, Oxford University Press, New York.
Badler, N. I., Palmer, M. S., and Bindiganavale, R. (1999), "Animation Control for Real-Time Virtual Humans," Communications of the ACM, Vol. 42, No. 8, pp. 65-73.
Bittner, A. C., Wherry, R. J., and Glenn, F. A. (1986), "CADRE: A Family of Manikins for Workstation Design," Technical Report 22100.07B, Man-Machine Integration Division, Naval Air Development Center, Warminster, PA.
Bruderlin, A., and Williams, L. (1995), "Motion Signal Processing," in Proceedings of Computer Graphics, Annual Conference Series, pp. 97-104.
Bubb, H. (1999), "Human Modeling in the Past and Future: The Lines of European Development," keynote address at the 2nd SAE International Digital Human Modeling Conference, The Hague.
Burandt, U. (1978), Ergonomie für Design und Entwicklung, Schmidt, Cologne, p. 154.
Cassell, J., and Vilhjalmsson, H. (1999), "Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous," Autonomous Agents and Multi-Agent Systems, Vol. 2, pp. 45-64.
Cats-Baril, W., and Frymoyer, J. W. (1991), "The Economics of Spinal Disorders," in The Adult Spine, J. W. Frymoyer, T. B. Ducker, N. M. Hadler, J. P. Kostuik, J. N. Weinstein, and T. S. Whitecloud, Eds., Raven Press, New York, pp. 85-105.
Chaffin, D. B. (2000), Case Studies in Simulating People for Workspace Design, SAE, Pittsburgh (forthcoming).
Chaffin, D. B., and Erig, M. (1991), "Three-Dimensional Biomechanical Static Strength Prediction Model Sensitivity to Postural and Anthropometric Inaccuracies," IIE Transactions, Vol. 23, No. 3, pp. 215-227.
Chaffin, D. B., Andersson, G. B. J., and Martin, B. J. (1999), Occupational Biomechanics, 3rd Ed., John Wiley & Sons, New York.
Chaffin, D. B., Faraway, J., Zhang, X., and Woolley, C. (2000), "Stature, Age, and Gender Effects on Reach Motion Postures," Human Factors, Vol. 42, No. 3, pp. 408-420.
Chao, E. Y. (1978), "Experimental Methods for Biomechanical Measurements of Joint Kinematics," in CRC Handbook for Engineering in Medicine and Biology, Vol. 1, B. N. Feinberg and D. G. Fleming, Eds., CRC Press, Cleveland, OH, pp. 385-411.
Chao, E. Y., and Rim, K. (1973), "Application of Optimization Principles in Determining the Applied Moments in Human Leg Joints During Gait," Journal of Biomechanics, Vol. 29, pp. 1393-1397.
Ciriello, V. M., and Snook, S. H. (1991), "The Design of Manual Handling Tasks: Revised Tables of Maximum Acceptable Weights and Forces," Ergonomics, Vol. 34, pp. 1197-1213.
Davis, R. B., Ounpuu, S., Tyburski, D., and Gage, J. R. (1991), "A Gait Analysis Data Collection and Reduction Technique," Human Movement Science, Vol. 10, pp. 575-587.
Dreyfuss, H. (1993), The Measure of Man and Woman: Human Factors in Design, Whitney Library of Design, New York.
Faraway, J. J. (1997), "Regression Analysis for a Functional Response," Technometrics, Vol. 39, No. 3, pp. 254-262.
Frymoyer, J. W., Pope, M. H., Clements, J. H., et al. (1983), "Risk Factors in Low Back Pain: An Epidemiologic Survey," Journal of Bone and Joint Surgery, Vol. 65A, pp. 213-216.
Funge, J., Tu, X., and Terzopoulos, D. (1999), "Cognitive Modeling: Knowledge, Reasoning and Planning for Intelligent Characters," in SIGGRAPH Conference Proceedings (Los Angeles, August).
Garg, A., Chaffin, D. B., and Herrin, G. D. (1978), "Prediction of Metabolic Rates for Manual Materials Handling Jobs," American Industrial Hygiene Association Journal, Vol. 39, No. 8, pp. 661-674.
Gordon, C. C., Bradtmiller, B., Churchill, T., Clauser, C. E., McConville, J. T., Tebbetts, I. O., and Walker, R. A. (1988), "1988 Anthropometric Survey of U.S. Army Personnel: Methods and Summary Statistics," Technical Report Natick/TR-89/044.
Grandjean, E. (1980), "Sitting Posture of Car Drivers from the Point of View of Ergonomics," in Human Factors in Transport Research (Part 1), E. Grandjean, Ed., Taylor & Francis, London, pp. 202-213.
Hodgins, J. K., and Pollard, N. S. (1997), "Adapting Simulated Behaviors for New Characters," in SIGGRAPH 97 Conference Proceedings, pp. 153-162.
Johnson, W. L., Rickel, J. W., and Lester, J. C. (2000), "Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments," International Journal of Artificial Intelligence in Education, Vol. 11, pp. 47-78.
Karhu, O., Kansi, P., and Kuorinka, I. (1977), "Correcting Working Postures in Industry: A Practical Method for Analysis," Applied Ergonomics, Vol. 8, pp. 199-201.
Krist, R. (1994), Modellierung des Sitzkomforts: Eine experimentelle Studie, Schuch, Weiden.
Laurig, W. (1973), "Suitability of Physiological Indicators of Strain for Assessment of Active Light Work," Applied Ergonomics (cited in Rohmert 1973b).
McAtamney, L., and Corlett, E. N. (1993), "RULA: A Survey Method for the Investigation of Work-Related Upper Limb Disorders," Applied Ergonomics, Vol. 24, pp. 91-99.
National Institute for Occupational Safety and Health (NIOSH) (1997), Musculoskeletal Disorders (MSDs) and Workplace Factors: A Critical Review of Epidemiologic Evidence for Work-Related Musculoskeletal Disorders of the Neck, Upper Extremity, and Low Back, U.S. Department of Health and Human Services, Cincinnati.
NHANES (1994), National Health and Nutrition Examination Survey, III: 1988-1994, U.S. Department of Health and Human Services, Center for Disease Control and Prevention, Vital and Health Statistics, Series 1, No. 32.
Consumer Product Safety Commission by the Highway Safety Research Institute, University of Michigan, Ann Arbor, MI, 1977.
HQL, Japanese Body Size Data 1992-1994, Research Institute of Human Engineering for Quality Life, 1994.
Winter, D. A., The Biomechanics and Motor Control of Human Gait: Normal, Elderly and Pathological, University of Waterloo Press, Waterloo, ON, 1991.
interventions in a computer-component assembly line. In both cases, the objectives of the intervention were used to establish appropriate measures for the evaluation. Ergonomics/human factors, however, is no longer confined to operating in a project mode. Increasingly, the establishment of a permanent function within an industry has meant that ergonomics is more closely related to the strategic objectives of the company. As Drury et al. (1989) have observed, this development requires measurement methodologies that also operate at the strategic level. For example, as a human factors group becomes more involved in strategic decisions about identifying and choosing the projects it performs, evaluation of the individual projects is less revealing. All projects performed could have a positive impact, but the group could still have achieved more with a more astute choice of projects. It could conceivably have had a more beneficial impact on the company's strategic objectives by stopping all projects for a period to concentrate on training the management, workforce, and engineering staff to make more use of ergonomics. Such changes in the structure of the ergonomics/human factors profession indeed demand different evaluation methodologies. A powerful network of individuals, for example, who can, and do, call for human factors input in a timely manner can help an enterprise more than a number of individually successful project outcomes. Audit programs are one of the ways in which such evaluations can be made, allowing a company to focus its human factors resources most effectively. They can also be used in a prospective, rather than retrospective, manner to help quantify the needs of the company for ergonomics/human factors. Finally, they can be used to determine which divisions, plants, departments, or even product lines are in most need of ergonomics input.
2.
DESIGN REQUIREMENTS FOR AUDIT SYSTEMS
Returning to the definition of an audit, the emphasis is on checking, acceptable policies, and consistency. The aim is to provide a fair representation of the business for use by third parties. A typical audit by a certified public accountant would comprise the following steps (adapted from Koli 1994):
1. Diagnostic investigation: description of the business and highlighting of areas requiring increased care and high risk
2. Test for transaction: trace samples of transactions grouped by major area and evaluate
3. Test of balances: analyze content
4. Formation of opinion: communicate judgment in an audit report
Such a procedure can also form a logical basis for human factors audits. The first step chooses the areas of study, the second samples the system, the third analyzes these samples, and the final step produces an audit report. These define the broad issues in human factors audit design:
1. How to sample the system: how many samples and how these are distributed across the system
2. What to sample: specific factors to be measured, from biomechanical to organizational
3. How to evaluate the sample: what standards, good practices, or ergonomic principles to use for comparison
4. How to communicate the results: techniques for summarizing the findings, and how far separate findings can be combined
A suitable audit system needs to address all of these issues (see Section 3), but some overriding design requirements must first be specified.
2.1.
Breadth, Depth, and Application Time
Ideally, an audit system would be broad enough to cover any task in any industry, would provide highly detailed analysis and recommendations, and would be applied rapidly. Unfortunately, the three variables of breadth, depth, and application time are likely to trade off in a practical system. Thus, a thermal audit (Parsons 1992) sacrifices breadth to provide considerable depth based on the heat balance equation but requires measurement of seven variables. Some can be obtained rapidly (air temperature, relative humidity), but some take longer (clothing insulation value, metabolic rate). Conversely, structured interviews with participants in an ergonomics program (Drury 1990a) can be broad and rapid but quite deficient in depth. At the level of audit instruments such as questionnaires or checklists, there are comprehensive surveys such as the position analysis questionnaire (McCormick 1979) and the Arbeitswissenschaftliche Erhebungsverfahren zur Tätigkeitsanalyse (AET) (Rohmert and Landau 1989), which takes two to three hours to complete, or the simpler work analysis checklist (Pulat 1992). Alternatively, there are
nd Broadbent 1987) and distract from task performance. An audit procedure can assess the noise on multiple criteria, that is, on hearing protection and on communication interruptions, with the former criterion used on all jobs and the latter only where verbal communication is an issue. If standards and other good practices are used in a human factors audit, they provide a quantitative basis for decision making. Measurement reliability can be high and validity self-evident for legal standards. However, it is good practice in auditing to record only the measurement used, not its relationship to the standard, which can be established later. This removes any temptation for the analyst to bend the measurement to reach a predetermined conclusion. Illumination measurements, for example, can vary considerably over a workspace, so that an audit question such as

Work surface illumination > 750 lux?   yes / no

could be legitimately answered either way for some workspaces by choice of sampling point. Such temptation can be removed, for example, by an audit question of the form:
Illumination at four points on workstation: ____ lux   ____ lux   ____ lux   ____ lux
Later analysis can establish whether, for example, the mean exceeds 750 lux or whether any of the four points falls below this level. It is also possible to provide later analyses that combine the effects of several simple checklist responses, as in Parsons's (1992) thermal audit, where no single measure would exceed good practice even though the overall result would be cumulative heat stress.
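The later analysis described above is a simple computation over the recorded readings. The four lux values here are invented for illustration:

```python
# Hypothetical audit record: illuminance at four workstation points (lux).
readings = [810, 640, 905, 770]

mean_lux = sum(readings) / len(readings)
meets_mean_criterion = mean_lux > 750     # does the average exceed 750 lux?
any_point_below = min(readings) < 750     # does any single point fall below?

print(mean_lux, meets_mean_criterion, any_point_below)
# → 781.25 True True
```

Note how the two criteria diverge on the same data: the mean passes while one sampling point fails, which is exactly the ambiguity a single yes/no question would hide.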
2.3.
Evaluation of an Audit System
For a methodology to be of value, it must demonstrate validity, reliability, sensitivity, and usability. Most texts that cover measurement theory treat these aspects in detail (e.g., Kerlinger 1964). Shorter treatments are found in human factors methodology texts (e.g., Drury 1990b; Osburn 1987). Validity is the extent to which a methodology measures the phenomenon of interest. Does our ergonomics audit program indeed measure the quality of ergonomics in the plant? It is possible to measure validity in a number of ways, but ultimately all are open to argument. For example, if we do not know the true value of the quality of ergonomics in a plant, how can we validate our ergonomics audit program? Broadly, there are three ways in which validity can be tested. Content validity is perhaps the simplest but least convincing measure. If each of the items of our measurement device displays the correct content, then validity is established. Theoretically, if we could list all of the possible measures of a phenomenon, content validity would describe how well our measurement device samples these possible measures. In practice it is assessed by having experts in the field judge each item for how well its content represents the phenomenon studied. Thus, the heat balance equation would be judged by most thermal physiologists to have a content that well represents the thermal load on an operator. Not all aspects are as easily validated! Concurrent (or predictive) validity has the most immediate practical impact. It measures empirically how well the output of the measurement device correlates with the phenomenon of interest. Of course, we must have an independent measure of the phenomenon of interest, which raises difficulties. To continue our example, if we used the heat balance equation to assess the thermal load on operators, then there should be a high correlation between this and other measures of the effects of thermal load: perhaps measures such as frequency of temperature complaints or heat disorders (heat stroke, hyperthermia, hypothermia, and so on). In practice, however, measuring such correlations would be contaminated by, for example, propensity to report temperature problems or individual acclimatization to heat. Overall outputs from a human factors audit (if such overall outputs have any useful meaning) should correlate with other measures of ergonomic inadequacy, such as injuries, turnover, quality measures, or productivity. Alternatively, we can ask how well the audit findings agree with independent assessments of qualified human factors engineers (Keyserling et al. 1992; Koli et al. 1993) and thus validate against one interpretation of current good practice. Finally, there is construct validity, which is concerned with inferences made from scores, evaluated by considering all empirical evidence and models. Thus, a model may predict that one of the variables being measured should have a particular relationship to another variable not in the measurement device. Confirming this relationship empirically would help validate the particular construct underlying our measured variable. Note that different parts of an overall measurement device can have their construct validity tested in different ways. Thus, in a broad human factors audit, the thermal load could differentiate between groups of operators who do and do not suffer from thermal complaints. In the same audit, a measure of difficulty in a target-aiming task could be validated against Fitts's law. Other ways to assess construct validity are those that analyze clusters or factors within a group of measures. Different workplaces audited on a variety of measures, with the scores then subjected to factor analysis, should show an interpretable, logical structure in the derived factors. This method has been used on large databases for job-evaluation-oriented systems such as McCormick's position analysis questionnaire (PAQ) (McCormick 1979). Reliability refers to how well a measurement device can
repeat a measurement on the same sample unit. Classically, if a measurement X is assumed to be composed of a true value Xt and a random measurement error Xe, then

X = Xt + Xe

For uncorrelated Xt and Xe, taking variances gives:

Variance(X) = Variance(Xt) + Variance(Xe)

or

V(X) = V(Xt) + V(Xe)

We can define the reliability of the measurement as the fraction of measurement variance accounted for by true measurement variance:

Reliability = V(Xt) / [V(Xt) + V(Xe)]
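The reliability fraction V(Xt) / [V(Xt) + V(Xe)] can be illustrated with a small simulation: with a true-score SD of 80 and an error SD of 40 (invented numbers, not from any study), reliability is 6400/8000 = 0.8, and the test-retest correlation between two independent audit passes estimates that fraction:

```python
# Illustrative sketch (simulated data): reliability = V(Xt) / (V(Xt) + V(Xe)),
# estimated via the test-retest correlation of two measurement passes.
import random

random.seed(1)
true_values = [random.gauss(500, 80) for _ in range(400)]   # Xt across units

def measure(xt):
    return xt + random.gauss(0, 40)                         # X = Xt + Xe

m1 = [measure(xt) for xt in true_values]                    # first pass
m2 = [measure(xt) for xt in true_values]                    # repeat pass

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

print(round(corr(m1, m2), 2))   # close to 80**2 / (80**2 + 40**2) = 0.80
```

The estimate wanders around 0.80 with sampling noise; larger error variance (a noisier audit instrument) drives the correlation, and hence the reliability, down.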
yes
no
then almost all workplaces could answer yes (although the author has found a number that could not meet even this low criterion). Conversely, a floor effect would be a very high threshold for illuminance. Sensitivity problems can arise too when validity is in question. Thus, heart rate is a valid indicator of heat stress but not of cold stress. Hence, exposure to different degrees of cold stress would be only insensitively measured by heart rate. Usability refers to the auditor's ease of use of the audit system. Good human factors principles should be followed, such as document design guidelines in constructing checklists (Patel et al. 1993; Wright and Barnard 1975). If the instrument does not have good usability, it will be used less often and may even show reduced reliability due to auditors' errors.
3.
AUDIT SYSTEM DESIGN
As outlined in Section 2, the audit system must choose a sample, measure that sample, evaluate it, and communicate the results. In this section we approach these issues systematically. An audit system is not just a checklist; it is a methodology that often includes the technique of a checklist. The distinction needs to be made between methodology and techniques. Over three decades ago, Easterby (1967) used Bainbridge and Beishon's (1964) definitions:
Methodology: a principle for defining the necessary procedures
Technique: a means to execute a procedural step
Easterby notes that a technique may be applicable in more than one methodology.
3.1.
The Sampling Scheme
In any sampling, we must define the unit of sampling, the sampling frame, and the sample choice technique. For a human factors audit, the unit of sampling is not as self-evident as it appears. From a job-evaluation viewpoint (e.g., McCormick 1979), the natural unit is the job, which is composed of a number of tasks. From a medical viewpoint, the unit would be the individual. Human factors studies focus on the task/operator/machine/environment (TOME) system (Drury 1992) or, equivalently, the software/hardware/environment/liveware (SHEL) system (ICAO 1989). Thus, from a strictly human factors viewpoint, the specific combination of TOME can become the sampling unit for an audit program. Unfortunately, this simple view does not cover all of the situations for which an audit program may be needed. While it works well for the rather repetitive tasks performed at a single workplace, typical of much manufacturing and service industry, it cannot suffice when these conditions do not hold. One relaxation is to remove the stipulation of a particular incumbent, allowing for jobs that require frequent rotation of tasks. This means that the results for one task will depend upon the incumbent chosen, or that several tasks will need to be combined if an individual operator is of
interest. A second relaxation is that the same operator may move to different wo
rkplaces, thus changing environment as well as task. This is typical of maintena
nce activities, where a mechanic may perform any one of a repertoire of hundreds
of tasks, rarely repeating the same task. Here the rational sampling unit is th
e task, which is observed for a particular operator at a particular machine in a
particular environment. Examples of audits of repetitive tasks (Mir 1982; Drury
1990a) and maintenance tasks (Chervak and Drury 1995) are given below to illust
rate these different approaches. De nition of the sampling frame, once the samplin
g unit is settled, is more straightforward. Whether the frame covers a departmen
t, a plant, a division, or a whole company, enumeration of all sampling units is
at least theoretically possible. All workplaces or jobs or individuals can in p
rinciple be listed, although in practice the list may never be up to date in an
agile industry where change is the normal state of affairs. Individuals can be l
isted from personnel records, tasks from work orders or planning documents, and
workplaces from plant layout plans. A greater challenge, perhaps, is to decide w
hether indeed the whole plant really is the focus of the audit. Do we include of c
e jobs or just production? What about managers, chargehands, part-time janitors,
and so on? A good human factors program would see all of these tasks or people
as worthy of study, but in practice they may have had different levels of ergono
mic effort expended upon them. Should some tasks or groups be excluded from the
audit merely because most participants agree that they have few pressing human f
actors problems? These are issues that need to be decided explicitly before the
audit sampling begins. Choice of the sample from the sampling frame is well cove
red in sociology texts. Within human factors it typically arises in the context
of survey design (Sinclair 1990). To make statistical inferences from the sample to the population (specifically to the sampling frame), our sampling procedure must allow the laws of probability to be applied. The most often-used sampling methods are:
Random sampling: Each unit within the sampling frame is equally likely to be chosen for the sample. This is the simplest and most robust method, but it may not be the most efficient. Where subgroups of interest (strata) exist and these subgroups are not equally represented in the sampling frame, one collects unnecessary information on the most populous subgroups and insufficient information on the least populous. This is because our ability to estimate a population statistic from a sample depends upon the absolute sample size and not, in most practical cases, on the population size. As a corollary, if subgroups are of no interest, then random sampling loses nothing in efficiency.
Stratified random sampling: Each unit within a particular stratum of the sampling frame is equally likely to be chosen for the sample. With stratified random sampling we can make valid inferences about each of the strata. By weighting the statistics to reflect the size of the strata within the sampling frame, we can also obtain population inferences. This is
often the preferred auditing sampling method as, for example, we would wish to
distinguish between different classes of tasks in our audits: production, warehouse, office, management, maintenance, security, and so on. In this way our audit interpretation could give more useful information concerning where ergonomics is being used appropriately.
Cluster sampling: Clusters of units within the sampling
frame are selected, followed by random or nonrandom selection within clusters.
Examples of clusters would be the selection of particular production lines withi
n a plant (Drury 1990a) or selection of representative plants within a company o
r division. The difference between cluster and stratified sampling is that in cluster sampling only a subset of possible units within the sampling frame is selected, whereas in stratified sampling all of the sampling frame is used because each unit must belong to one stratum. Because clusters are not randomly selected, the overall sample results will not reflect population values, so that statistical inference is not possible. If units are chosen randomly within each cluster, then statistical inference within each cluster is possible. For example, if three production lines are chosen as clusters, and workplaces sampled randomly within each, the clusters can be regarded as fixed levels of a factor and the data subjected to analysis of variance to determine whether there are significant differences between levels of that factor. What is sacrificed in cluster sampling is the ability to make population statements. Continuing this example, we could state that the lighting in line A is better than in lines B or C but still not be able to make statistically valid statements about the plant as a whole.
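The stratified approach above, drawing within each stratum and then weighting stratum statistics by stratum share of the frame, can be sketched in code. This is a minimal illustration with invented stratum names and values, not part of any published audit system:

```python
import random

def stratified_sample(frame, stratum_of, n_per_stratum, seed=1):
    """Draw an equal-sized simple random sample from each stratum of the frame."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    return {s: rng.sample(units, min(n_per_stratum, len(units)))
            for s, units in strata.items()}

def weighted_population_estimate(stratum_stats, stratum_sizes):
    """Weight each stratum statistic by the stratum's share of the sampling frame."""
    total = sum(stratum_sizes.values())
    return sum(stratum_stats[s] * stratum_sizes[s] / total
               for s in stratum_stats)
```

For example, if 40% of sampled production workplaces and 10% of sampled office workplaces show a discrepancy, and the frame holds 90 production and 10 office units, the weighted population estimate is 0.40(0.9) + 0.10(0.1) = 0.37.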
3.2.
The Data-Collection Instrument
So far we have assumed that the instrument used to collect the data from the sam
ple is based upon measured data where appropriate. While this is true of many au
dit instruments, this is not the only way to collect audit data. Interviews with
participants (Drury 1990a), interviews and group meetings to locate potential e
rrors (Fox 1992), and archival data such as injury or quality records (Mir 1982)
have been used. All have potential uses with, as remarked earlier, a judicious
range of methods often providing the appropriate composite audit system. One con
sideration on audit technique design and use is the extent of computer involveme
nt. Computers are now inexpensive, portable, and powerful and can thus be used t
o assist data collection. An audit instrument should also be proven reliable, valid, sensitive, and usable, although preciou
s few meet all of these criteria. In the remainder of this section, a selection
of checklists will be presented as typical of (reasonably) good practice. Emphas
is will be on objective, structure, and question design. 3.2.1.1. The IEA Checkl
ist The IEA checklist (Burger and de Jong 1964) was designed for ergonomic job a
nalysis over a wide range of jobs. It uses the concept of functional load to giv
e a logical framework relating the physical load, perceptual load, and mental lo
ad to the worker, the environment, and the working methods / tools / machines. W
ithin each cell (or subcell, e.g., physical load could be static or dynamic), th
e load was assessed on different criteria such as force, time, distance, occupat
ional, medical, and psychological criteria. Table 1 shows the structure and typi
cal questions. Dirken (1969) modified the IEA checklist to improve the questions and methods of recording. He found that it could be applied in a median time of 60 minutes per workstation. No data are given on evaluation of the IEA checklist, but its structure has been so influential that it is included here for more than historical interest.
3.2.1.2. Position Analysis Questionnaire
The PAQ is a structured
job analysis questionnaire using worker-oriented elements (187 of them) to char
acterize the human behaviors involved in jobs (McCormick et al. 1969). The PAQ i
s structured into six divisions, with the first three representing the classic expe
rimental psychology approach (information input, mental process, work output) an
d
PERFORMANCE IMPROVEMENT MANAGEMENT

TABLE 1 IEA Checklist: Structure and Typical Questions

A: Structure of the Checklist
Load (1. Mean; 2. Peaks: Intensity, Frequency, Duration), assessed against A Worker, B Environment, C Working method, tools, machines:
I. Physical load: 1. Dynamic; 2. Static
II. Perceptual load: 1. Perception; 2. Selection, decision; 3. Control of movement
III. Mental load: 1. Individual; 2. Group

B: Typical Question
I B. Physical load / environment
2.1. Physiological Criteria
1. Climate: high and low temperatures
  1. Are these extreme enough to affect comfort or efficiency?
  2. If so, is there any remedy?
  3. To what extent is working capacity adversely affected?
  4. Do personnel have to be specially selected for work in this particular environment?
the other three a broader sociotechnical view (relationships with other persons,
job context, other job characteristics). Table 2 shows these major divisions, e
xamples of job elements in each and the rating scales employed for response (McC
ormick 1979). Construct validity was tested by factor analyses of databases cont
aining 3700 and 2200 jobs, which established 45 factors. Thirty-two of these fit neatly into the original six-division framework, with the remaining 13 being classified as overall dimensions. Further proof of construct validity was based on 76 human attributes derived from the PAQ, rated by industrial psychologists and the ratings subjected to principal components analysis to develop dimensions which had "reasonably similar attribute profiles" (McCormick 1979, p. 204). Interrater reliability, as noted above, was 0.79, based on another sample of 62 jobs. The PAQ covers many of the elements of concern to human factors engineers and has indeed much influenced subsequent instruments such as AET. With good reliability and useful (though perhaps dated) construct validity, it is still a viable instrument if the natural unit of sampling is the job. The exclusive reliance on rating scales applied by the
analyst goes rather against current practice of comparison of measurements agai
nst standards or good practices.
3.2.1.3. AET (Arbeitswissenschaftliches Erhebungsverfahren zur Tätigkeitsanalyse)
The AET, published in German (Landau and Rohmert 1981) and later in English (Rohmert and Landau 1983), is the job-analysis subsystem of a comprehensive system of work studies. It covers "the analysis of individual components of man-at-work systems as well as the description and scaling of their interdependencies" (Rohmert and Landau 1983, pp. 9-10). Like all good techniques, it starts from a model of the system (REFA 1971, referenced in Wagner 1989), to which is added Rohmert's stress/strain concept. This latter sees strain as being caused by the intensity and duration of stresses impinging upon the operator's individual characteristics. It is seen as useful in the analysis of requirements and work design, organization in industry, personnel management, and vocational counseling and research. AET itself was developed over many years, using PAQ as an initial starting point. Table 3 shows the structure of the survey instrument with typical questions and rating scales. Note the similarity between AET's job demands analysis and the first three categories of the PAQ, and the scales used in AET and PAQ (Table 2). Measurements of validity and reliability of AET are discussed by H. Luczak in an appendix to Landau and Rohmert, although no numerical values are given. Cluster analysis of 99 AET records produced groupings which supported the AET constructs. Seeber et al. (1989) used AET along with
TABLE 3 AET: Structure and Typical Questions

A: Structure of the Checklist
Part A: Work systems analysis
  1. Work objects: 1.1 Material work objects; 1.2 Energy as work object; 1.3 Information as work object; 1.4 Man, animals, plants as work objects
  2. Equipment: 2.1 Working equipment; 2.2 Other equipment
  3. Work environment: 3.1 Physical environment; 3.2 Organizational and social environment; 3.3 Principles and methods of remuneration
Part B: Task analysis
  1. Tasks relating to material work objects; 2. Tasks relating to abstract work objects; 3. Man-related tasks; 4. Number and repetitiveness of tasks
Part C: Job demand analysis
  1. Demands on perception: 1.1 Mode of perception; 1.2 Absolute/relative evaluation of perceived information; 1.3 Accuracy of perception
  2. Demands for decision: 2.1 Complexity of decisions; 2.2 Pressure of time; 2.3 Required knowledge
  3. Demands for response/activity: 3.1 Body postures; 3.2 Static work; 3.3 Heavy muscular work; 3.4 Light muscular work, active light work; 3.5 Strenuousness and frequency of moves

B: Types of scale
  Code A: Does this apply?
  Code F: Frequency
  Code S: Significance
  Code D: Duration
Typical scale values (Code D, Duration): 0 = Very infrequent; 1 = Less than 10% of shift time; 2 = Less than 30% of shift time; 3 = 30% to 60% of shift time; 4 = More than 60% of shift time; 5 = Almost continuously during whole shift
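The D (duration) scale maps a task's share of shift time to a 0-5 code; a small function makes that mapping explicit. The exact boundary conventions here (where "very infrequent" ends, and where code 4 gives way to code 5) are assumptions for illustration, not values from the AET manual:

```python
def aet_duration_code(fraction_of_shift):
    """Return an AET-style D-scale value (0-5) for a task's share of shift
    time, following the published breakpoints; the 0/1 and 4/5 boundaries
    are assumed here (1% and 90%) since the source gives only labels."""
    if not 0.0 <= fraction_of_shift <= 1.0:
        raise ValueError("fraction must be between 0 and 1")
    if fraction_of_shift < 0.01:
        return 0  # very infrequent
    if fraction_of_shift < 0.10:
        return 1  # less than 10% of shift time
    if fraction_of_shift < 0.30:
        return 2  # less than 30% of shift time
    if fraction_of_shift <= 0.60:
        return 3  # 30% to 60% of shift time
    if fraction_of_shift < 0.90:
        return 4  # more than 60% of shift time
    return 5      # almost continuously during whole shift
```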
Data were entered into the computer program and a rule-based logic evaluated each section to provide messages to the user in the form of either a "section shows no ergonomic problems" message:

MESSAGE Results from analysis of auditory aspects: Everything OK in this section

or discrepancies from a single input:

MESSAGE Seats should be padded, covered with non-slip materials and have front edge rounded

or discrepancies based on the integration of several inputs:
HUMAN FACTORS AUDIT

TABLE 4 Workplace Survey: Structure and Typical Questions

Section / Major Classification / Examples of Questions
1. Visual aspects: Nature of task; Measure illuminance at task, midfield, outer field
2. Auditory aspects: Noise level, dBA; Main source of noise
3. Thermal aspects: Strong radiant sources present? Wet bulb temperature (Clothing inventory)
4. Instruments, controls, displays
   Standing vs. seated
   Displays: Signals for crucial visual checks; Are trade names deleted?
   Labeling, Coding: Color codes same for control & display?
   Scales, dials, counters: All numbers upright on fixed scales?
   Control/display relationships: Grouping by sequence or subsystem?
   Controls: Are controls mounted between 30 in. and 70 in.? Emergency button diameter 0.75 in.?
5. Design of workplaces
   Desks: Seat to underside of desk 6.7 in.?
   Chairs: Height easily adjustable 15-21 in.?
   Posture: Upper arms vertical?
6. Manual materials handling (NIOSH Lifting Guide, 1981): Task, H, V, D, F; Cycle time; Object weight
7. Energy expenditure: Type of work; Seated, standing, or both?
8. Assembly/repetitive aspects: If heavy work, is bench 6-16 in. below elbow height?
9. Inspection aspects: Number of fault types? Training time until unsupervised?
MESSAGE The total metabolic workload is 174 watts. Intrinsic clothing insulation is 0.56 clo. Initial rectal temperature is predicted to be 36.0°C. Final rectal temperature is predicted to be 37.1°C.
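The rule-based logic that produces such messages can be sketched as a table of per-section rules. The section names, thresholds, and message wording below are illustrative assumptions, not those of the actual Workplace Survey program:

```python
def evaluate_section(section, answers, rules):
    """Apply each (predicate, message) rule for a section to the answers;
    return discrepancy messages, or an 'Everything OK' message if none fire."""
    messages = [msg for check, msg in rules.get(section, []) if check(answers)]
    if not messages:
        return ["Results from analysis of %s: Everything OK in this section" % section]
    return messages

# Illustrative rules only: thresholds and wording are assumptions.
rules = {
    "auditory aspects": [
        (lambda a: a.get("noise_dba", 0) > 85,
         "Noise exceeds 85 dBA; consider enclosure or hearing protection"),
    ],
    "seating": [
        (lambda a: not a.get("seat_padded", True),
         "Seats should be padded and covered with non-slip materials"),
    ],
}
```

A workplace with 72 dBA of noise would receive the "Everything OK" message for the auditory section; one at 95 dBA would receive the discrepancy message instead.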
Counts of discrepancies were used to evaluate departments by ergonomics aspect,
while the messages were used to alert company personnel to potential design chan
ges. This latter use of the output as a training device for nonergonomic personn
el was seen as desirable in a multinational company rapidly expanding its ergono
mics program. Reliability and validity have not been assessed, although the chec
klist has been used in a number of industries (Drury 1990a). The Workplace Surve
y has been included here because, despite its lack of measured reliability and v
alidity, it shows the relationship between audit as methodology and checklist as
technique. 3.2.1.5. ERGO, EEAM, and ERNAP (Koli et al. 1993; Chervak and Drury
1995) These checklists are both part of complete audit systems for different asp
ects of civil aircraft hangar activities. They were developed for the Federal Av
iation Administration to provide tools for assessing human factors in aircraft i
nspection (ERGO) and maintenance (EEAM) activities, respectively. Inspection and
maintenance activities are nonrepetitive in nature, controlled by task cards is
sued to technicians at the start of each shift. Thus, the sampling unit is the t
ask card, not the workplace, which is highly variable between task cards. Their
structure was based on extensive task analyses of inspection and maintenance tas
ks, which led to generic function descriptions of both types of work (Drury et a
l. 1990). Both systems have sampling schemes and checklists. Both are computer b
ased with initial data collection on either hard copy or direct into a portable
computer. Recently, both have been combined into a single program (ERNAP) distri
buted by the FAA's Office of Aviation Medicine. The structure of ERNAP and typical questions are given in Table 5.
and validated a legs, trunk, and neck job screening procedure along similar line
s (Keyserling et al. 1992). 3.2.1.7. Ergonomic Checkpoints The Workplace Improve
ment in Small Enterprises (WISE) methodology (Kogi 1994) was developed by the In
ternational Ergonomics Association (IEA) and the International Labour Office (ILO)
to provide cost-effective solutions for smaller organizations. It consists of a
training program and a checklist of potential low-cost improvements. This checkl
ist, called ergonomics checkpoints, can be used both as an aid to discovery of s
olutions and as an audit tool for workplaces within an enterprise. The 128-point
checklist has now been published (Kogi and Kuorinka 1995). It covers the nine a
reas shown in Table 7. Each item is a statement rather than a question and is ca
lled a checkpoint. For each checkpoint there are four sections, also shown in Ta
ble 7. There is no scoring system as such; rather, each checkpoint becomes a poi
nt of evaluation of each workplace for which it is appropriate. Note that each c
heckpoint also covers why that improvement is important, and a description of th
e core issues underlying it. Both of these help the move from rule-based reasoning to knowledge-based reasoning as nonergonomists continue to use the checklist.
A similar idea was embodied in the Mir (1982) ergonomic checklist. 3.2.1.8. Othe
r Checklists The above sample of successful audit checklists has been presented
in some detail to provide the reader with their philosophy, structure, and sample questions. Rather than continue in the same vein, other interesting checklists are outlined in Table 8. Each entry shows the domain, the types of issues addressed, the size or time taken in use, and whether validity and reliability have been measured. Most textbooks now provide checklists, and a few of these are cited. No claim is made that Table 8 is comprehensive. Rather, it is a sampling with references so that readers can find a suitable match to their needs. The first nine entries in the table are conveniently colocated in Landau and Rohmert (1989). Many of their reliability and validity studies are reported in this publication. The next entries are results of the Commission of European Communities' fifth ECSC program, reported in Berchem-Simon (1993). Others are from texts and original references. The author has not personally used all of these checklists and thus cannot specifically endorse them. Also, omission of a checklist from this table implies nothing about its usefulness.
TABLE 7 Ergonomic Checkpoints: Structure, Typical Checkpoints, and Checkpoint Structure

A: Structure of the Checklist (Major Section: Typical Checkpoint)
Materials handling: Clear and mark transport ways. Provide handholds, grips, or good holding points for all packages and containers.
Handtools: Use jigs and fixtures to make machine operations stable, safe, and efficient.
Productive machine safety: Use feeding and ejection devices to keep the hands away from ...
Improving workstation design: Adjust working height for each worker at elbow level or slightly below it.
Lighting: Provide local lights for precision or inspection work.
Premises: Ensure safe wiring connections for equipment and lights.
Control of hazards: ...
Welfare facilities: Provide and maintain good changing, washing, and sanitary facilities.
Work organization: Inform workers frequently about the results of their work.

B: Structure of each checkpoint
WHY? Reasons why improvements are important.
HOW? Description of several actions, each of which can contribute to improvement.
SOME MORE HINTS: Additional points which are useful for attaining the improvement.
POINTS TO REMEMBER: Brief description of the core element of the checkpoint.
TABLE 8 A Selection of Published Checklists

Name: Authors (Coverage)
TBS: Hacker et al. 1983 (mainly mental work)
VERA: Volpert et al. 1983 (mainly mental work)
RNUR: RNUR 1976 (mainly physical work)
LEST: Guélaud 1975 (mainly physical work)
AVISEM: AVISEM 1977 (mainly physical work)
GESIM: GESIM 1988 (mainly physical work)
RHIA: Leitner et al. 1987 (task hindrances, stress)
MAS: Groth 1989 (open structure, derived from AET)
JL and HA: Mattila and Kivi 1989 (mental, physical work, hazards)
Bolijn 1993 (physical work checklist for women)
Panter 1993 (checklist for load handling)
Portillo Sosa 1993 (checklist for VDT standards)
Work Analy.: Pulat 1992 (mental and physical work)
Thermal Aud.: Parsons 1992 (thermal audit from heat balance)
WAS: Yoshida and Ogawa 1991 (workplace and environment)
Ergonomics: Occupational Health and Safety Authority 1990 (short workplace checklists)
Cakir et al. 1980 (VDT checklist)

Reliability, where reported: 0.53-0.79; 0.87-0.95; tested. Validity, where reported: vs. AET; vs. many; content; tested vs. expert.

First nine from Landau and Rohmert 1989; next three from Berchem-Simon 1993.
3.2.2.
Other Data-Collection Methods
Not all data come from checklists and questionnaires. We can audit a human facto
rs program using outcome measures alone (e.g., Chapter 47). However, outcome mea
sures such as injuries, quality, and productivity are nonspeci c to human factors:
many other external variables can affect them. An obvious example is changes in
the reporting threshold for injuries, which can lead to sudden apparent increas
es and decreases in the safety of a department or plant. Additionally, injuries
are (or should be) extremely rare events. Thus, to obtain enough data to perform
meaningful statistical analysis may require aggregation over many disparate loc
ations and / or time periods. In ergonomics audits, such outcome measures are pe
rhaps best left for long-term validation or for use in selecting cluster samples
. Besides outcome measures, interviews represent a possible data-collection meth
od. Whether directed or not (e.g., Sinclair 1990) they can produce critical inci
dents, human factors examples, or networks of communication (e.g., Drury 1990a),
which have value as part of an audit procedure. Interviews are routinely used a
s part of design audit procedures in large-scale operations such as nuclear powe
r plants (Kirwan 1989) or naval systems (Malone et al. 1988). A novel interview-based audit system was proposed by Fox (1992) based on methods developed in British Coal (reported in Simpson 1994). Here an error-based approach was taken, using interviews and archival records to obtain a sampling of actual and possible errors. These were then classified using Reason's (1990) active/latent failure scheme and orthogonally by Rasmussen's (1987) skill-, rule-, knowledge-based framework. Each active error is thus a conjunction of slip/mistake/violation with skill/rule/knowledge. Within each conjunction, performance-shaping factors can be deduced and sources of management intervention listed. This methodology has been used in a number of mining-related studies: examples will be presented in Section 4.
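The earlier caution that injuries are extremely rare events can be made concrete. A rough Poisson-based confidence interval shows how imprecise a rate estimate from one department's small injury count is, and how aggregating exposure tightens it. This is a sketch under a crude normal approximation, not part of any audit instrument described here:

```python
import math

def poisson_rate_ci(count, exposure, z=1.96):
    """Rough 95% CI for an incident rate (events per unit exposure),
    using the normal approximation SD = sqrt(count) for a Poisson count.
    Crude for small counts, which is why injury data for a single
    department support only weak statistical inference."""
    half = z * math.sqrt(count)
    low = max(0.0, (count - half) / exposure)
    high = (count + half) / exposure
    return low, high
```

Four injuries in one unit of exposure give an interval of roughly 0.08 to 7.92, a hundredfold range; 400 injuries pooled over 100 units of exposure (the same underlying rate) give a far tighter interval around 4.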
3.3.
Data Analysis and Presentation
Human factors as a discipline covers a wide range of topics, from workbench height
to function allocation in automated systems. An audit program can only hope to
abstract and present a part of this range. With our consideration of sampling sy
stems and data collection devices we have seen different ways in which an unbias
ed abstraction can be aided. At this stage the data consist of large numbers of
responses to large numbers of checklist items, or detailed interview findings. How
can, or should, these data be treated for best interpretation?
Here there are two opposing viewpoints: one is that the data are best summarized
across sample units, but not across topics. This is typically the way the human
factors professional community treats the data, giving summaries in published p
apers of the distribution of responses to individual items on the checklist. In this way, findings can be more explicit, for example that the lighting is an area that needs ergonomics effort, or that the seating is generally poor. Adding together lighting and seating discrepancies is seen as perhaps obscuring the findings rather than assisting in their interpretation. The opposite viewpoint, in many ways, is taken by the business community. For some, an overall figure of merit is a natural outcome of a human factors audit. With such a figure in hand, the relative needs of different divisions, plants, or departments can be assessed in terms of ergonomic and engineering effort required. Thus, resources can be distributed rationally from a management level. This view is heard from those who work for manufacturing and service industries, who ask after an audit "How did we do?" and expect a
very brief answer. The proliferation of the spreadsheet, with its ability to su
m and average rows and columns of data, has encouraged people to do just that wi
th audit results. Repeated audits fit naturally into this view because they can become the basis for monthly, quarterly, or annual graphs of ergonomic performance.
Neither view alone is entirely defensible. Of course, summing lighting and seat
ing needs produces a result that is logically indefensible and that does not hel
p diagnosis. But equally, decisions must be made concerning optimum use of limit
ed resources. The human factors auditor, having chosen an unbiased sampling sche
me and collected data on (presumably) the correct issues, is perhaps in an excel
lent position to assist in such management decisions. But so too are other stake
holders, primarily the workforce. Audits, however, are not the only use of some
of the data-collection tools. For example, the Keyserling et al. (1993) upper ex
tremity checklist was developed specifically as a screening tool. Its objective was to find which jobs/workplaces are in need of detailed ergonomic study. In such cases, summing across issues for a total score has an operational meaning, that is, that a particular workplace needs ergonomic help. Where interpretation is made at a deeper level than just a single number, a variety of presentation devices
have been used. These must show scores (percent of workplaces, distribution of
sound pressure levels, etc.) separately but so as to highlight broader patterns.
Much is now known about separate vs. integrated displays and emergent features
(e.g., Wickens 1992, pp. 121-122), but the traditional profiles and spider web charts
are still the most usual presentation forms. Thus, Wagner (1989) shows the AVISEM profile for a steel industry job before and after automation. The nine different issues (rating factors) are connected by lines to show emergent shapes for the old and the new jobs. Landau and Rohmert's (1981) original book on AET shows many other examples of profiles. Klimer et al. (1989) present a spider web diagram to show how three work structures influenced ten issues from the AET analysis. Mattila and Kivi (1989) present their data on the job load and hazard analysis system applied to the building industry in the form of a table. For six occupations, the rating on five different loads/hazards is presented as symbols of different sizes
within the cells of the table. There is little that is novel in the presentation
of audit results: practitioners tend to use the standard tabular or graphical t
ools. But audit results are inherently multidimensional, so some thought is need
ed if the reader is to be helped towards an informed comprehension of the audit's outcome.
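The two viewpoints discussed above can coexist in one summary: per-item discrepancy percentages for diagnosis, plus an overall percentage for management. A minimal sketch with an invented data structure, not drawn from any published audit system:

```python
def summarize_audit(responses):
    """responses: {workplace: {item: True/False discrepancy flag}}.
    Returns per-item discrepancy percentages (the HF-professional view)
    and a single overall percentage (the management view)."""
    items = {}
    total = flagged = 0
    for answers in responses.values():
        for item, bad in answers.items():
            n, k = items.get(item, (0, 0))
            items[item] = (n + 1, k + (1 if bad else 0))
            total += 1
            flagged += 1 if bad else 0
    per_item = {item: 100.0 * k / n for item, (n, k) in items.items()}
    overall = 100.0 * flagged / total if total else 0.0
    return per_item, overall
```

The per-item figures preserve diagnostic detail (lighting vs. seating); the overall figure answers "How did we do?" without replacing the detail.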
4.
AUDIT SYSTEMS IN PRACTICE
Almost any of the audit programs and checklists referenced in previous sections
give examples of their use in practice. Only two examples will be given here, as
others are readily accessible. These examples were chosen because they represen
t quite different approaches to auditing.
4.1.
Auditing a Decentralized Business
From 1992 to 1996, a major U.S.-based apparel manufacturer had run an ergonomics
program aimed primarily at the reduction of workforce injuries in backs and upp
er extremities. As detailed in Drury et al. (1999), the company during that time
was made up of nine divisions and employed about 45,000 workers. Of particular
interest was the fact that the divisions enjoyed great autonomy, with only a sma
ll corporate headquarters with a single executive responsible for all risk-manag
ement activities. The company had grown through mergers and acquisitions, meanin
g that different divisions had different degrees of vertical integration. Hence,
core functions such as sewing, pressing, and distribution were common to most d
ivisions, while some also included weaving, dyeing, and embroidery. In addition,
the products and fabrics presented quite different ergonomic challenges, from d
elicate undergarments to heavy jeans to knitted garments and even luggage. The e
rgonomics program was similarly diverse. It started with a corporate launch by t
he highest-level executives and was rolled out to the divisions and then to individual plants. The pace of change was widely variable. All divisions were given a
standard set of workplace analysis and modification tools (based on Drury and Wick
1984) but were encouraged to develop their own solutions to problems in a way a
ppropriate to their speci c needs.
TABLE 9 Ergonomics Audit: Workplace Survey with Overall Data
Division / Plant / Job Type

No. / Factor / Yes (%)
1. Postural aspects
  W1 Frequent extreme motions of back, neck, shoulders, wrists: 68%
  W2 Elbows raised or unsupported more than 50% of time: 66%
  W3 Upper limbs contact nonrounded edges: 22%
  W4 Gripping with fingers: 73%
  W5 Knee/foot controls: 36%
1.1 Seated
  W6 Leg clearance restricted: 12%
  W7 Feet unsupported / legs slope down: 21%
  W8 Chair/table restricts thighs: 17%
  W9 Back unsupported: 22%
  W10 Chair height not adjustable easily: 37%
1.2 Standing
  W11 Control requires weight on one foot more than 50% time: 3%
  W12 Standing surface hard: 37%
  W13 Work surface height not adjustable easily: 92%
1.3 Hand tools
  W14 Tools require hand/wrist bending: 77%
  W15 Tools vibrate: 9%
  W16 Restricted to one-handed use: 63%
  W17 Tool handle ends in palm: 39%
  W18 Tool handle has nonrounded edges: 20%
  W19 Tool uses only 2 or 3 fingers: 56%
  W20 Requires continuous or high force: 9%
  W21 Tool held continuously in one hand: 41%
2. Vibration
  W22 Vibration reaches body from any source: 14%
3. Manual materials handling
  W23 More than 5 moves per minute: 40%
  W24 Loads unbalanced: 36%
  W25 Lift above head: 14%
  W26 Lift off floor: 28%
  W27 Reach with arms: 83%
  W28 Twisting: 78%
  W29 Bending trunk: 60%
  W30 Floor wet or slippery: 3%
  W31 Floor in poor condition: 0%
  W32 Area obstructs task: 17%
  W33 Protective clothing unavailable: 4%
  W34 Handles used: 2%
TABLE 12 Responses to Ergonomics Users

Question and Issue: Corporate Mgt / Corporate Staff / Plant Mgt / Plant Staff
1. What is Ergonomics?
  1.1 Fitting job to operator: 1 / 6 / 10 / 5
  1.2 Fitting operator to job: 0 / 6 / 0 / 0
2. Who do you call on to get ergonomics work done?
  2.1 Plant ergonomics people: 0 / 3 / 3 / 2
  2.2 Division ergonomics people: 0 / 4 / 5 / 2
  2.3 Personnel department: 3 / 0 / 0 / 0
  2.4 Engineering department: 1 / 8 / 6 / 11
  2.5 We do it ourselves: 0 / 2 / 1 / 0
  2.6 College interns: 0 / 0 / 4 / 2
  2.7 Vendors: 0 / 0 / 0 / 1
  2.8 Everyone: 0 / 1 / 0 / 0
  2.9 Operators: 0 / 1 / 0 / 0
  2.10 University faculty: 0 / 0 / 1 / 0
  2.11 Safety: 0 / 1 / 0 / 0
3. When did you last ask them for help?
  3.1 Never: 0 / 4 / 2 / 0
  3.2 Sometimes / infrequently: 2 / 0 / 1 / 0
  3.3 One year or more ago: 0 / 1 / 4 / 0
  3.4 One month or so ago: 0 / 0 / 2 / 0
  3.5 Less than 1 month ago: 1 / 0 / 3 / 4
5. Who else should we talk to about ergonomics?
  5.1 Engineers: 0 / 0 / 3 / 2
  5.2 Operators: 1 / 1 / 2 / 0
  5.3 Everyone: 0 / 0 / 2 / 0
6. General Ergonomics Comments
6.1 Ergonomics concerns
  6.11 Workplace design for safety / ease / stress / fatigue: 2 / 5 / 13 / 5
  6.12 Workplace design for cost savings / productivity: 1 / 0 / 2 / 1
  6.13 Workplace design for worker satisfaction: 1 / 1 / 0 / 1
  6.14 Environment design: 2 / 1 / 3 / 0
  6.15 The problem of finishing early: 0 / 0 / 1 / 1
  6.16 The seniority / bumping problem: 0 / 3 / 1 / 0
6.2 Ergonomics program concerns
  6.21 Level of reporting of ergonomics: 0 / 1 / 7 / 0
  6.22 Communication / who does ergonomics: 7 / 1 / 4 / 0
  6.23 Stability / staffing of ergonomics: 0 / 0 / 10 / 4
  6.24 General evaluation of ergonomics, positive: 1 / 3 / 3 / 4
  6.24 General evaluation of ergonomics, negative: 4 / 10 / 10 / 3
  6.25 Lack of financial support for ergonomics: 0 / 0 / 1 / 0
  6.26 Lack of priority for ergonomics: 2 / 2 / 1 / 4
  6.27 Lack of awareness of ergonomics: 2 / 1 / 6 / 1
The outcome was that the accident rate dropped from 35.40 per 100,000 person-shi
fts to 8.03 in one year. This brought the colliery from worst in the regional gr
oup of 15 collieries to best in the group, and indeed in the United Kingdom. In
addition, personnel indicators, such as industrial relations climate and absence
rates, improved.
5.
FINAL THOUGHTS ON HUMAN FACTORS AUDITS
An audit system is a specialized methodology for evaluating the ergonomic status
of an organization at a point in time. In the form presented here, it follows a
uditing practices in the accounting field, and indeed in such other fields as safety.
Data are collected, typically with a checklist, analyzed, and presented to the organization for action. In the final analysis, it is the action that is important to
human factors engineers, as the colliery example above shows. Such actions coul
d be taken using other methodologies, such as active redesign by job incumbents
(Wilson 1994); audits are only one method of tackling the problems of manufactur
ing and service industries. But as Drury (1991) points
out, industry's moves towards quality are making it more measurement driven. Audits fit naturally into modern management practice as measurement, feedback, and benchmarking systems for the human factors function.
REFERENCES
Alexander, D. C., and Pulat, B. M. (1985), Industrial Ergonomics: A Practitioner's Guide, Industrial Engineering and Management Press, Atlanta.
American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) (1989), "Physiological Principles, Comfort and Health," in Fundamentals Handbook, Atlanta.
AVISEM (1977), Techniques d'amélioration des conditions de travail dans l'industrie, Editions Hommes et Techniques, Suresnes, France.
Ayoub, M. M., and Mital, A. (1989), Manual Materials Handling, Taylor & Francis, London.
Bainbridge, L., and Beishon, R. J. (1964), "The Place of Checklists in Ergonomic Job Analysis," in Proceedings of the 2nd I.E.A. Congress (Dortmund), Ergonomics Congress Proceedings Supplement.
Berchem-Simon, O., Ed. (1993), Ergonomics Action in the Steel Industry, EUR 14832 EN, Commission of the European Communities, Luxembourg.
Bolijn, A. J. (1993), "Research into the Employability of Women in Production and Maintenance Jobs in Steelworks," in Ergonomics Action in the Steel Industry, O. Berchem-Simon, Ed., EUR 14832 EN, Commission of the European Communities, Luxembourg, pp. 201-208.
British Standards Institution (1965), Office Desks, Tables and Seating, British Standard 3893, London.
Burger, G. C. E., and de Jong, J. R. (1964), "Evaluation of Work and Working Environment in Ergonomic Terms," Aspects of Ergonomic Job Analysis, pp. 185-201.
Carson, A. B., and Carlson, A. E. (1977), Secretarial Accounting, 10th Ed., South Western, Cincinnati.
Cakir, A., Hart, D. M., and Stewart, T. F. M. (1980), Visual Display Terminals, John Wiley & Sons, New York, pp. 144-152, 159-190, App. I.
Chervak, S., and Drury, C. G. (1995), "Simplified English Validation," in Human Factors in Aviation Maintenance, Phase 6 Progress Report, DOT/FAA/AM-95/xx, Federal Aviation Administration / Office of Aviation Medicine, National Technical Information Service, Springfield, VA.
Degani, A., and Wiener, E. L. (1990), Human Factors of Flight-Deck Checklists: The Normal Checklist, NASA Contractor Report 177548, Ames Research Center, CA.
Department of Defense (1989), Human Engineering Design Criteria for Military Systems, Equipment and Facilities, MIL-STD-1472D, Washington, DC.
Dirken, J. M. (1969), "An Ergonomics Checklist Analysis of Printing Machines," ILO, Geneva, Vol. 2, pp. 903-913.
Drury, C. G. (1990a), "The Ergonomics Audit," in Contemporary Ergonomics, E. J. Lovesey, Ed., Taylor & Francis, London, pp. 400-405.
Drury, C. G. (1990b), "Computerized Data Collection in Ergonomics," in Evaluation of Human Work, J. R. Wilson and E. N. Corlett, Eds., Taylor & Francis, London, pp. 200-214.
Drury, C. G. (1991), "Errors in Aviation Maintenance: Taxonomy and Control," in Proceedings of the 35th Annual Meeting of the Human Factors Society (San Francisco), pp. 42-46.
Drury, C. G. (1992), "Inspection Performance," in Handbook of Industrial Engineering, G. Salvendy, Ed., John Wiley & Sons, New York, pp. 2282-2314.
Drury, C. G., and Wick, J. (1984), "Ergonomic Applications in the Shoe Industry," in Proceedings of the International Conference on Occupational Ergonomics, Vol. 1, pp. 489-483.
Drury, C. G., Kleiner, B. M., and Zahorjan, J. (1989), "How Can Manufacturing Human Factors Help Save a Company: Intervention at High and Low Levels," in Proceedings of the Human Factors Society 33rd Annual Meeting, Denver, pp. 687-689.
Drury, C. G., Prabhu, P., and Gramopadhye, A. (1990), "Task Analysis of Aircraft Inspection Activities: Methods and Findings," in Proceedings of the Human Factors Society 34th Annual Conference (Santa Monica, CA), pp. 1181-1185.
Drury, C. G., Broderick, R. L., Weidman, C. H., and Mozrall, J. R. (1999), "A Corporate-Wide Ergonomics Programme: Implementation and Evaluation," Ergonomics, Vol. 42, No. 1, pp. 208-228.
Easterby, R. S. (1967), "Ergonomics Checklists: An Appraisal," Ergonomics, Vol. 10, No. 5, pp. 548-556.
Malde, B. (1992), "What Price Usability Audits? The Introduction of Electronic Mail into a User Organization," Behaviour and Information Technology, Vol. 11, No. 6, pp. 345-353.
Malone, T. B., Baker, C. C., and Permenter, K. E. (1988), "Human Engineering in the Naval Sea Systems Command," in Proceedings of the Human Factors Society 32nd Annual Meeting, 1988 (Anaheim, CA), Vol. 2, pp. 1104-1107.
Mattila, M., and Kivi, P. (1989), "Job Load and Hazard Analysis: A Method for Hazard Screening and Evaluation," in Recent Developments in Job Analysis: Proceedings of the International Symposium on Job Analysis, K. Landau and W. Rohmert, Eds. (University of Hohenheim, March 14-15), Taylor & Francis, New York, pp. 179-186.
McClelland, I. (1990), "Product Assessment and User Trials," in Evaluation of Human Work, J. R. Wilson and E. N. Corlett, Eds., Taylor & Francis, New York, pp. 218-247.
McCormick, E. J. (1979), Job Analysis: Methods and Applications, AMACOM, New York.
McCormick, W. T., Mecham, R. C., and Jeanneret, P. R. (1969), The Development and Background of the Position Analysis Questionnaire, Occupational Research Center, Purdue University, West Lafayette, IN.
Mir, A. H. (1982), Development of Ergonomic Audit System and Training Scheme, M.S. Thesis, State University of New York at Buffalo.
Muller-Schwenn, H. B. (1985), "Product Design for Transportation," in Ergonomics International 85, pp. 643-645.
National Institute for Occupational Safety and Health (NIOSH) (1981), Work Practices Guide for Manual Lifting, DHEW-NIOSH Publication 81-122, Cincinnati.
Occupational Health and Safety Authority (1990), Inspecting the Workplace, Share Information Booklet, Occupational Health and Safety Authority, Melbourne.
Occupational Safety and Health Administration (OSHA) (1990), Ergonomics Program Management Guidelines for Meatpacking Plants, Publication No. OSHA-3121, U.S. Department of Labor, Washington, DC.
Osburn, H. G. (1987), "Personnel Selection," in Handbook of Human Factors, G. Salvendy, Ed., John Wiley & Sons, New York, pp. 911-933.
Panter, W. (1993), "Biomechanical Damage Risk in the Handling of Working Materials and Tools: Analysis, Possible Approaches and Model Schemes," in Ergonomics Action in the Steel Industry, O. Berchem-Simon, Ed., EUR 14832 EN, Commission of the European Communities, Luxembourg.
Parsons, K. C. (1992), "The Thermal Audit: A Fundamental Stage in the Ergonomics Assessment of Thermal Environment," in Contemporary Ergonomics 1992, E. J. Lovesey, Ed., Taylor & Francis, London, pp. 85-90.
Patel, S., Drury, C. G., and Prabhu, P. (1993), "Design and Usability Evaluation of Work Control Documentation," in Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting (Seattle), pp. 1156-1160.
Portillo Sosa, J. (1993), "Design of a Computer Programme for the Detection and Treatment of Ergonomic Factors at Workplaces in the Steel Industry," in Ergonomics Action in the Steel Industry, O. Berchem-Simon, Ed., EUR 14832 EN, Commission of the European Communities, Luxembourg, pp. 421-427.
Pulat, B. M. (1992), Fundamentals of Industrial Ergonomics, Prentice Hall, Englewood Cliffs, NJ.
Putz-Anderson, V. (1988), Cumulative Trauma Disorders: A Manual for Musculo-Skeletal Diseases of the Upper Limbs, Taylor & Francis, London.
Rasmussen, J. (1987), "Reasons, Causes and Human Error," in New Technology and Human Error, J. Rasmussen, K. Duncan, and J. Leplat, Eds., John Wiley & Sons, New York, pp. 293-301.
Reason, J. (1990), Human Error, Cambridge University Press, New York.
Régie Nationale des Usines Renault (RNUR) (1976), Les profils de postes: Méthode d'analyse des conditions de travail, Collection Hommes et Savoir, Masson, Sirtès, Paris.
Rohmert, W., and Landau, K. (1983), A New Technique for Job Analysis, Taylor & Francis, London.
Rohmert, W., and Landau, K. (1989), "Introduction to Job Analysis," in A New Technique for Job Analysis, Part 1, Taylor & Francis, London, pp. 7-22.
Seeber, A., Schmidt, K.-H., Kierswelter, E., and Rutenfranz, J. (1989), "On the Application of AET, TBS and VERA to Discriminate between Work Demands at Repetitive Short Cycle Tasks," in Recent Developments in Job Analysis, K. Landau and W. Rohmert, Eds., Taylor & Francis, New York, pp. 25-32.
Simpson, G. C. (1994), "Ergonomic Aspects in Improvement of Safe and Efficient Work in Shafts," in Ergonomics Action in Mining, EUR 14831, Commission of the European Communities, Luxembourg, pp. 245-256.
DESIGN FOR OCCUPATIONAL HEALTH AND SAFETY

Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.

11.1. Quality Improvement
11.2. International Organization for Standardization
11.3. Social Democracy
11.4. Hazard Survey
11.5. Employee / Management Ergonomics Committee
12. CONCLUSIONS
REFERENCES
ADDITIONAL READING
APPENDIX: USEFUL WEB INFORMATION SOURCES
1. INTRODUCTION
Each year in the United States, thousands of employees are killed on the job, many times that number die of work-related diseases, and millions suffer a work-related injury or health disorder (BLS 1998a, 1999). According to the International Labour Organization (ILO 1998), about 250 million workers worldwide are injured annually on the job, 160 million suffer from occupational diseases, and approximately 335,000 die each year from occupational injuries. In the United States, the occupational injury and illness incidence rates per 100 full-time workers have been generally decreasing since 1973, but as of 1998, the combined illness and injury rate was still 6.7 per 100 employees and the injury rate was 6.2 (BLS 1998a). There were a total of 5.9 million occupational injuries and illnesses in 1998 in private industry (BLS 1998a). These figures represented the sixth year in a row of declining injury rates. Overall lost workday rates have steadily declined from 1990 to 1998, but cases with days of restricted work activity have increased. There were just over 6,000 occupational fatalities in the private sector in 1998.

These work-related deaths and injuries have enormous costs. In the United States alone, it was estimated that in 1992 the direct costs (e.g., medical, property damage) totaled $65 billion and the indirect costs (e.g., lost earnings, workplace training and restaffing, time delays) totaled $106 billion (Leigh et al. 1997). Of the U.S. dollar figures presented, approximately $230 million of the direct costs and $3.46 billion of the indirect costs were related to fatal occupational injuries. Nonfatal injuries accounted for $48.9 billion in direct costs and $92.7 billion in indirect costs (the rest was cost due to death and morbidity from occupational illnesses). These estimates assumed 6,500 occupational fatalities and 13.2 million nonfatal injuries.

The workplace continues to undergo rapid change with the introduction of new technologies and processes. Many new processes, such as genetic engineering and biotechnology, introduce new hazards that are challenging, particularly since we do not know much about their potential risks. Will current hazard-control methods be effective in dealing with these new hazards? Our challenge is to protect workers from harm while taking advantage of the benefits of this new technology. To achieve this, we must be ready to develop new safety and health methods to deal with new technology.

This chapter will examine the causation and prevention of occupational diseases and injuries, with an emphasis on recognizing and evaluating hazards, determining disease / injury potential, and defining effective intervention strategies. Because of the huge amount of pertinent information on each of these topics, the chapter cannot be all-inclusive. Rather, it will provide direction for establishing effective detection and control methods. Additional resources are provided in the Appendix for more detailed information about the subjects covered.
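The incidence rates quoted above follow the BLS convention of cases per 100 full-time workers, with one full-time worker conventionally counted as 2,000 hours per year (so 100 workers give the familiar 200,000-hour base). A sketch using the 1998 figures quoted in the text:

```python
def incidence_rate(cases: float, hours_worked: float) -> float:
    """BLS-style incidence rate: cases per 100 full-time workers,
    where 100 full-time workers ~ 200,000 hours per year."""
    return cases * 200_000 / hours_worked

# Working backwards from the text: 5.9 million cases at a rate of 6.7
# per 100 workers implies roughly 88 million full-time-equivalent workers.
implied_fte = 5_900_000 * 100 / 6.7
print(f"implied FTE workforce: {implied_fte / 1e6:.0f} million")

# Sanity check: that workforce reproduces the quoted rate.
hours = implied_fte * 2_000          # 2,000 hours per FTE per year
print(f"rate: {incidence_rate(5_900_000, hours):.1f} per 100 workers")
```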
2. INTERDISCIPLINARY NATURE OF OCCUPATIONAL SAFETY AND HEALTH
Occupational health and safety has its roots in several disciplines, including such diverse fields as engineering, toxicology, epidemiology, medicine, sociology, psychology, and economics. Essentially, occupational health and safety is a multidisciplinary endeavor requiring knowledge from diverse sources to deal with the
interacting factors of people, technology, the work environment, and the organiz
ation of work activities. Any successful approach for the prevention of injuries
and health disorders must recognize the need to deal with these diverse factors
using the best available tools from various disciplines and to organize a syste
matic and balanced effort. Large companies have many resources that can be calle
d upon, but small companies do not have such resources and may need to contact l
ocal, state, and federal agencies for information, advice, and consultation.
3. A PUBLIC HEALTH MODEL FOR OCCUPATIONAL SAFETY AND HEALTH PROTECTION
Figure 1 illustrates a general public health approach for improving the safety a
nd health of the workforce (HHS 1989). It begins with surveillance. We have to k
now what the hazards are and their safety and health consequences before we can
establish priorities on where to apply our limited resources and develop intervention strategies.

Figure 1 A Public Health Model for Improving the Safety and Health of the Workforce. (Source: HHS 1989)

At the national level, there are statistics on occupational injuries and illnesses from various sources, including the U.S. Bureau of Labor Statistics, the U.S. National Center for Health Statistics, and the National Safety Council. These statistics provide an indication of the most hazardous jobs and industries. In addition, each state has workers' compensation statistics that can provide information about local conditions. This same kind of surveillance can be done by each company at the plant level to identify hazardous jobs and operations. Plant-level exposure, hazard, and injury / illness records can be examined periodically to establish trends and determine plant hot spots that need immediate attention.

The second level of the model defines specific services and protection measures to prevent the occurrence of hazards and injuries / illnesses. It also includes services for quick and effective treatment if injury or illness should occur. For example, the safety staff keeps track of the state and federal requirements regarding workplace exposure standards. The safety staff then establishes a process for enforcing the standards through inspections and correction activities. Additional plant safety and health programs deal with employee safety and health training (Cohen and Colligan 1998), emergency medical treatment facilities, and arrangements with local clinics. The basic thrust of these multifaceted approaches is to reduce or eliminate adverse workplace exposures and their consequences and to provide effective treatment when injuries and illnesses occur.

At the next level of the model is the need to heighten the awareness of the professionals in the workplace who have to make the decisions that affect safety, such as the manufacturing engineers, accountants, operations managers, and supervisors. In addition, workers need to be informed so that they know about the risks and consequences of occupational exposures. There is substantial workplace experience to attest that managers and employees do not always agree on the hazardous nature of workplace exposures. It is vital that those persons who can have a direct impact on plant exposures, such as managers and employees, have the necessary information in an easily understandable and useful form to be able to make informed choices that can lead to reductions in adverse exposures. Providing the appropriate information is the first basic step for good decision making. However, it does not ensure good decisions. Knowledgeable and trained professionals are also needed to provide proper advice on how to interpret and use the information.
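Plant-level surveillance of the kind described above needs nothing more than the injury log and hours worked per department; a hypothetical sketch (department names, case counts, and hours are invented for illustration):

```python
# Rank departments ("hot spots") by injury incidence rate per 100 FTE.
# All figures below are invented for illustration.
records = {
    # department: (recordable cases, hours worked in the period)
    "shipping":    (14, 380_000),
    "assembly":    (22, 910_000),
    "maintenance": (9,  150_000),
}

def rate(cases: int, hours: int) -> float:
    return cases * 200_000 / hours   # cases per 100 full-time workers

hot_spots = sorted(records, key=lambda d: rate(*records[d]), reverse=True)
for dept in hot_spots:
    print(f"{dept:12s} {rate(*records[dept]):5.1f} per 100 FTE")
```

Run periodically, the same ranking exposes trends as well as current hot spots.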
Figure 2 Model of the Work System Risk Process. (Adapted from Smith and Sainfort 1989; Smith et al. 1999)
so that employees can be trained and instructed properly. When employees lack sufficient intelligence or knowledge-acquisition skills, much greater emphasis must be placed on engineering controls. Physiological considerations such as strength, endurance, and susceptibilities to fatigue, stress, or disease are also of importance. Some jobs demand high energy expenditure and strength requirements. For these, employees must have adequate physical resources to do the work safely. Another example deals with a concern about women being exposed to reproductive hazards in select industries, for instance in lead processing or synthetic hormone production. Biological sensitivity to substances may increase the risk of an adverse health outcome. Where adequate protection can be provided, there is no logical reason to exclude employees on the basis of gender or biological sensitivity. However, with certain exposures the courts in the United States have ruled that biologically sensitive employees may be barred from jobs in which these biologically adverse exposures cannot be adequately controlled or guarded against.

An attribute related to physical capacity is the perceptual / motor skills of an individual, such as eye-hand coordination. These skills vary widely among individuals and may have more health and safety significance than strength or endurance because they come into play in the moment-by-moment conduct of work tasks. While strength may influence the ability to perform a specific component of a task, perceptual / motor skills are involved in all aspects of manual tasks. Thus, perceptual / motor skills affect the quality with which a task is carried out as well as the probability of a mistake that could cause an exposure or accident.

An individual attribute that should also be considered is personality. For many years it was believed that an employee's personality was the most significant factor in accident causation and that certain workers were more "accident prone" than other workers. There is some evidence that affectivity is related to occupational injuries (Iverson and Erwin 1997). However, in an earlier review of the accident-proneness literature (Century Research Corp. 1973), it was determined that individual characteristics such as personality, age, sex, and intellectual capabilities were not significant determinants of accident potential or causation. Rather, situational considerations such as the hazard level of the job tasks and daily personal problems were more important in determining accident risk. There is some evidence that individuals are at greater or lesser risk at different times in their working careers due to these situational considerations. Such situational considerations may account for findings that younger and older employees have higher than average injury rates (Laflamme 1997; Laflamme and Menckel 1995).

It is critical that a proper fit be achieved among employees and the other elements of the model. This can occur with proper hazard orientation, training, skill enhancement, ergonomic improvements, and proper engineering of the tasks, technology, and environment.
4.2. Technology and Materials
As with personal attributes, characteristics of the machinery, tools, technology, and materials used by the worker can influence the potential for an exposure or accident. One consideration is the extent to which machinery and tools influence the use of the most appropriate and effective perceptual / motor skills and energy resources. The relationship between the controls of a machine and the action of that machine dictates the level of perceptual / motor skill necessary to perform a task. The action of the controls and the subsequent reaction of the machinery must be compatible with basic human perceptual / motor patterns. If not, significant interference with performance can occur, which may lead to improper responses that can cause accidents. In addition, the adequacy of feedback about the action of the machine affects the performance efficiency that can be achieved and the potential for an operational error.

The hazard characteristics of materials will affect exposure and risk. More hazardous materials inherently have a greater probability of adverse health outcomes upon exposure. Sometimes employees will be more careful when using materials that they know have a high hazard potential. But this can only be true when they are knowledgeable about the material's hazard level. If a material is very hazardous, an available less-hazardous material can often be substituted. The same is true for hazardous work processes. Proper substitution can decrease the risk of injury or illness, but care must be taken to ensure that the material or process being substituted is really safer and that it mixes well with the entire product formulation or production / assembly process.
4.3. Task Factors
The demands of a work activity and the way in which it is conducted can influence the probability of an exposure or accident. In addition, the influence of the work activity on employee attention, satisfaction, and motivation can affect behavior patterns that increase exposure and accident risk. Work task considerations can be broken into physical requirements, mental requirements, and psychological considerations. The physical requirements influence the amount of energy expenditure necessary to carry out a task. Excessive physical requirements can lead to fatigue, both physiological and mental, which can reduce worker capabilities to recognize and respond to workplace hazards.
often stretch the limits of the capabilities of the workforce and technology. Wh
en breakdowns occur or operations are not running normally, employees tend to ta
ke risks to keep production online or get it back online quickly. It is during t
hese nonnormal operations that many accidents occur. Management must provide ade
quate resources to meet production goals and accommodate nonnormal operations. Management must also establish policies to ensure that employees and supervisors do not take unnecessary risks to maintain production.
5. SAFETY AND HEALTH ORGANIZATIONS, AGENCIES, LAWS, AND REGULATIONS
History has shown that ensuring the safety and health of the workforce cannot be left solely to the discretion of owners and employers. There are those who take advantage of their position and power and do not provide adequate safeguards. Today this is true for only a small percentage of employers, but unfortunately even well-intentioned employers sometimes expose the workforce to hazardous conditions through ignorance of the risks. Factory safety laws were enacted in Europe in past centuries to deal with employer abuses. The basic concepts were adopted in the United States when some of the states took action in the form of factory laws and regulations, both for worker safety and for compensation in case of injury. Over the years these laws were strengthened and broadened until 1969 and 1970, when the U.S. Congress created two federal laws regulating safety and health for those employers engaged in interstate commerce in coal mining and all other private industry. In 1977, federal legislation dealing with other forms of mining was added. However, public institutions such as state and local governments and universities were not covered by the federal laws in the United States and still are not covered by federal safety legislation.

The Occupational Safety and Health Act of 1970 (OSHAct) remains the primary federal vehicle for ensuring workplace safety and health in the United States. This law requires that employers provide a place of employment free from recognized hazards to employee safety or health. The critical word is "recognized" because today's workplaces have many new materials and processes for which hazard knowledge is absent. This places a large responsibility on employers to keep abreast of new knowledge and information about workplace hazards for their operations. The OSHAct established three agencies to deal with workplace safety and health. These were the Occupational Safety and Health Administration (OSHA), the National Institute for Occupational Safety and Health (NIOSH), and the Occupational Safety and Health Review Commission.
5.1. The Occupational Safety and Health Administration
OSHA, located within the U.S. Department of Labor, has the responsibility for establishing federal workplace safety and health standards and enforcing them. Over the last three decades, OSHA has developed hundreds of standards that have been published in the Code of Federal Regulations (CFR), Title 29, Parts 1900-1928, which cover General Industry (1910), Longshoring (1918), Construction (1926), and Agriculture (1928) (http://www.osha-slc.gov/OshStd_toc/OSHA_Std_toc.html). This code is revised periodically, and new standards are added continually. Current and proposed rules and standards include Process Safety Management of Highly Hazardous Chemicals (1910.119), Personal Protective Equipment (1910.132 to 1910.139), the Proposed Safety and Health Program Rule, and the Final Ergonomic Program Standard. It is important for employers to keep up with these new and revised standards. One way is to keep in frequent contact with your OSHA area office and request that they send you updates. Another way is to subscribe to one or more of the many newsletters for occupational safety and health that provide current information and updates. A third way is to access the OSHA web page (http://www.osha.gov), which provides information about current activities
measurement, employee physical examinations and interviews, examinations of company records, and discussions with management. Reports are developed based on the evaluation and include a determination of hazards and proposed methods of control. These reports may be sent to OSHA for compliance action (inspection and / or citation).
5.3. State Agencies
The state(s) in which your operation(s) are located may also have jurisdiction for enforcing occupational safety and health standards and conducting investigations, based on an agreement with OSHA. In many states the state health agency and / or labor department has agreements with OSHA to provide consultative services to employers. You can find out by contacting these agencies directly. In several states the safety and health agencies enforce safety and health rules and standards for local and state government employees, or other employees not covered by OSHA. See website http://www.cdc.gov/niosh/statosh.html.
5.4. Centers for Disease Control and Prevention
The purpose of the Centers for Disease Control and Prevention (CDC) is to promote health and quality of life by preventing and controlling disease, injury, and disability. The CDC provides limited information on occupational safety and health. For example, their web page has information about accident causes and prevention, back belts, cancer (occupational exposure), effects of workplace hazards on male reproductive health, latex allergies, needle sticks, occupational injuries, teen workers, and violence in the workplace (see website http://www.cdc.gov). The National Center for Health Statistics is located within CDC and provides basic health statistics on the U.S. population. This information is used by occupational health researchers to identify potential occupational health risks (see website http://www.cdc.gov/nchs).
5.5. The Bureau of Labor Statistics
The Bureau of Labor Statistics (BLS) is the principal fact-finding agency for the federal government in the broad field of labor economics and statistics (see website http://stats.bls.gov). It collects, processes, analyzes, and disseminates essential statistical data. Among the data are occupational safety and health data, including annual reports, by industry, of rates of injuries, illnesses, and fatalities (http://stats.bls.gov/oshhome.htm).
5.6. The Environmental Protection Agency
The Environmental Protection Agency (EPA) was established as an independent agency in 1970 with the purpose of protecting the air, water, and land (see website http://www.epa.gov). To this end, the EPA engages in a variety of research, monitoring, standard-setting, and enforcement activities. The Agency administers 10 comprehensive environmental protection laws: the Clean Air Act (CAA); the Clean Water Act (CWA); the Safe Drinking Water Act (SDWA); the Comprehensive Enviro
5.8. Safety and Ergonomics Program Standards
The purpose of the OSHA Proposed Safety and Health Program Rule (see website http://www.osha-slc.gov/SLTC/safetyhealth/nshp.html) is to reduce the number of job-related fatalities, illnesses, and injuries by requiring employers to establish a workplace safety and health program to ensure compliance with OSHA standards and the General Duty Clause of the OSHAct. All employers covered
under the OSHAct are covered by this rule (except construction and agriculture), and the rule applies to all hazards covered by the General Duty Clause and OSHA standards. Five elements make up the program:

1. Management leadership and employee participation
2. Hazard identification and assessment
3. Hazard prevention and control
4. Information and training
5. Evaluation of program effectiveness
Employers that already have safety and health programs with these five elements can continue using their existing programs if they are effective.

Late in 2000, OSHA announced its Final Ergonomic Program Standard (see website http://www.osha-slc.gov/ergonomics-standard/index.html). The proposed Standard specifies employers' obligations to control musculoskeletal disorder (MSD) hazards and provide MSD medical management for injured employees. The Proposed Ergonomic Standard uses a program approach; that is, the proposal specifies the type of program to set up to combat MSDs, as opposed to specifying minimum or maximum hazard levels. According to the proposed ergonomic standard, an ergonomic program consists of the following six program elements:

1. Management leadership and employee participation
2. Hazard information and reporting
3. Job hazard analysis and control
4. Training
5. MSD management
6. Program evaluation
These are similar to the elements in the proposed safety and health program rule. The proposed ergonomics standard covers workers in general industry, though construction, maritime, and agriculture operations may be covered in future rulemaking. The proposal specifically covers manufacturing jobs, manual material-handling jobs, and other jobs in general industry where MSDs occur. If an employer has an OSHA-recordable MSD, the employer is required to analyze the job and control any identified MSD hazards. Public hearings were ongoing regarding the proposed ergonomic standard as of March and April of 2000, and written comments were being taken.

Some states have proposed ergonomic standards to control MSDs. The California Ergonomic Standard (see website http://www.dir.ca.gov/Title8/5110.html) went into effect on July 3, 1997. The standard targets jobs where a repetitive motion injury (RMI) has occurred and the injury can be determined to be work related and at a repetitive job. The injury must have been diagnosed by a physician. The three main elements of the California standard are work site evaluation, control of exposures that have caused RMIs, and employee training. The exact language of the standard has been undergoing review in the California judicial system.

The purpose of the Washington State Proposed Ergonomics Program Rule (see website http://www.lni.wa.gov/wisha) is to reduce employee exposure to workplace hazards that can cause or aggravate work-related musculoskeletal disorders (WMSDs). There are no requirements for medical management in the proposed rule. The proposal covers employers with "caution zone jobs," which the proposed rule defines based on the presence of any one or more of a number of physical job factors. For example, a caution zone job exists if the job requires working "with the neck, back or wrist(s) bent more than 30 degrees for more than 2 hours total per workday" (WAC 296-62-05105). The proposed standard specifies the type of ergonomic awareness education that must be provided to employees who work in caution zone jobs. The standard also states that if caution zone jobs have WMSD hazards, employers must reduce the WMSD hazards identified. Several tools are suggested that can be used to analyze the caution zone jobs for WMSD hazards, and thresholds are provided that indicate sufficient hazard reduction. The rule also states that employees should be involved in analyzing caution zone jobs and in controlling the hazards identified. This proposed rule was under review in 2000.

The proposed North Carolina Ergonomic Standard (see website http://www.dol.state.nc.us/news/ergostd.htm) was first announced in November 1998. Like the proposed national standard and California's standard, North Carolina's is a program standard without physical hazard thresholds. The proposal states employers shall provide ergonomic training within 90 days of employment and no less than every three years thereafter. It also specifies the n
DESIGN FOR OCCUPATIONAL HEALTH AND SAFETY

TABLE 2 Descriptions of Various Occupational Disorders and Diseases Not Included in Table 1

Occupational lung disease: The latent period for many lung diseases can be several years. For instance, for silicosis it may be as long as 15 years and for asbestos-related diseases as long as 30 years. The lung is a primary target for disease related to toxic exposures because it is often the first point of exposure through breathing. Many chemicals and dusts are ingested through breathing. The six most severe occupational lung disorders are asbestosis, byssinosis, silicosis, coal workers' pneumoconiosis, lung cancer, and occupational asthma.

Asbestosis: This disease produces scarring of the lung tissue, which causes progressive shortness of breath. The disease continues to progress even after exposures end, and there is no specific treatment. The latent period is 10-20 years. The agent of exposure is asbestos, and insulation and shipyard workers are those most affected.

Byssinosis: This disease produces chest tightness, cough, and airway obstruction. Symptoms can be acute (reversible) or chronic. The agents of exposure are dusts of cotton, flax, and hemp, and textile workers are those most affected.

Silicosis: This is a progressive disease that produces nodular fibrosis, which inhibits breathing. The agent of exposure is free crystalline silica, and miners, foundry workers, abrasive blasting workers, and workers in stone, clay, and glass manufacture are most affected.

Coal miners' pneumoconiosis: This disease produces fibrosis and emphysema. The agent of exposure is coal dust. The prevalence of this disorder among currently employed coal miners has been estimated at almost 5%.

Lung cancer: This disease has many symptoms and multiple pathology. There are several agents of exposure, including chromates, arsenic, asbestos, chloroethers, ionizing radiation, nickel, and polynuclear aromatic hydrocarbon compounds.

Occupational cancers: There is some debate on the significance of occupational exposures in the overall rate of cancer, with estimates ranging from 4 to 20% due to occupation, yet there is good agreement that such occupational exposures can produce cancer. There are many types of cancer that are related to workplace exposures, including hemangiosarcoma of the liver; mesothelioma; malignant neoplasm of the nasal cavities, bone, larynx, scrotum, bladder, kidney, and other urinary organs; and lymphoid leukemia and erythroleukemia.

Traumatic injuries

Amputations: The vast majority of amputations occur to the fingers. The agents of exposure include powered hand tools and powered machines. Many industries have this type of injury, as do many occupations. Machine operators are the single most injured occupation for amputations.

Fractures: Falls and blows from objects are the primary agents that cause fractures. The major sources of these injuries include floors, the ground, and metal items. This suggests falling down or being struck by an object as the primary reasons for fractures. Truck drivers, general laborers, and construction laborers were the occupations having the most fractures.

Eye loss: It was estimated that in 1982 there were over 900,000 occupational eye injuries. Most were due to particles in the eye such as pieces of metal, wood, or glass, but a smaller number were caused by chemicals in the eye.
8.1. Inspection Programs
Hazard identification prior to the occurrence of an occupational injury is a major goal of a hazard inspection program. In the United States, such programs have been formalized in terms of federal and state regulations that require employers to monitor and abate recognized occupational health and safety hazards. These recognized hazards are defined in the federal and state regulations that provide explicit standards of unsafe exposures. The standards can be the basis for establishing an in-plant inspection program because they specify the explicit subject matter to be investigated and corrected. Research has shown that inspections are most effective in identifying permanent fixed physical and environmental hazards that do not vary over time. Inspections are not very effective in identifying transient physical and environmental hazards or improper workplace behaviors because these hazards may not be present when the inspection is taking place (Smith et al. 1971). A major benefit from inspections, beyond hazard recognition, is the positive motivational influence on employees.

PERFORMANCE IMPROVEMENT MANAGEMENT

Inspections demonstrate management interest in the health and safety of employees and
a commitment to a safe working environment. To capitalize on this positive motivational influence, an inspection should not be a punitive or confrontational process of placing blame. Indicating the good aspects of a work area, and not just the hazards, is important in this respect. It is also important to have employees participate in hazard inspections because this increases hazard-recognition skills and increases motivation for safe behavior.

The first step in an inspection program is to develop a checklist that identifies all potential hazards. A good starting point is the state and federal standards. Many insurance companies have developed general checklists of OSHA standards that can be tailored to a particular plant. These are a good source when drawing up the checklist. A systematic inspection procedure is preferred. This requires that the inspectors know what to look for and where to look for it and have the proper tools to conduct an effective assessment. It is important that the checklist be tailored to each work area after an analysis of that work area's needs has been undertaken. This analysis should determine the factors to be inspected: (1) the machinery, tools, and materials; (2) chemicals, gases, vapors, and biological agents; and (3) environmental conditions. The analysis should also determine (1) the frequency of inspections necessary to detect and control hazards, (2) the individuals who should conduct and/or participate in the inspections, and (3) the instrumentation needed to make measurements of the hazard(s). The hazards that require inspection can be determined by (1) their potential to cause an injury or illness, (2) the potential seriousness of the injuries or illnesses, (3) the number of people exposed to the hazard, (4) the number of injuries and illnesses at a workplace related to a specific hazard, and (5) hazardous conditions defined by federal, state, and local regulations.

The frequency of inspections should be based on the nature of the hazards being evaluated. For instance, once a serious fixed physical hazard has been identified and controlled, it is no longer a hazard. It will only have to be reinspected periodically to be sure the situation is still no longer hazardous. Random spot checking is another method that can indicate whether the hazard control remains effective. Other types of hazards that are intermittent will require more frequent inspection to assure proper hazard abatement. In most cases, monthly inspections are warranted, and in some cases daily inspections are reasonable. Inspections should take place when and where the highest probability of a hazard exists, while reinspection can occur on an incidental basis to ensure that hazard control is effectively maintained. Inspections should be conducted when work processes are operating, and on a recurring basis at regular intervals. According to the National Safety Council (1974), a general inspection of an entire premises should be conducted at least once a year, except for those work areas scheduled for more frequent inspections because of their high hazard level. Because housekeeping is an important aspect of hazard control, inspection of all work areas should be conducted at least weekly for cleanliness, clutter, and traffic flow. The National Safety Council (1974) indicated that a general inspection should cover the following:

1. Plant grounds
2. Building and related structures
3. Towers, platforms, or other additions
4. Transportation access equipment and routes
5. Work areas
6. Machinery
7. Tools
8. Materials handling
9. Housekeeping
10. Electrical installations and wiring
11. Floor loading
12. Stairs and stairways
13. Elevators
14. Roofs and chimneys

We would add to this:

15. Chemicals, biological agents, radiation, etc.
16. Ergonomic stressors
17. Psychosocial stressors

Intermittent inspections are the most common type and are made at irregular intervals, usually on an ad hoc basis. Such inspections are unannounced and are often limited to a specific work area.
analysis system requires that detailed information be collected about the characteristics of illnesses and injuries and their frequency and severity. The Occupational Safety and Health Act (1970) established illness and injury reporting and recording requirements that are mandatory for all employers, with certain exclusions such as small establishments and government agencies. Regulations have been developed to define how employers are to adhere to these requirements (BLS 1978). The OSHAct requirements specify that any illness or injury to an employee that causes time lost from the job, treatment beyond first aid, transfer to another job, loss of consciousness, or an occupational illness must be recorded on a daily log of injuries and illnesses, the OSHA 300 form (previously the 200 form). This log identifies the injured person, the date and time of the injury, the department or plant location where the injury occurred, and a brief description of the occurrence of the injury, highlighting salient facts such as the chemical, physical agent, or machinery involved and the nature of the injury. An injury should be recorded on the day that it occurs, but this is not always possible with MSDs and other cumulative trauma injuries. The number of days that the person is absent from the job is also recorded upon the employee's return to work. In addition to the daily log, a more detailed form is filled out for each injury that occurs. This form provides a more detailed description of the nature of the injury, the extent of damage to the employee, the factors that could
be related to the cause of the injury, such as the source or agent that produced the injury, and events surrounding the injury occurrence. A workers' compensation form can be substituted for the OSHA 301 form (previously the 101 form) because equivalent information is gathered on these forms.

The OSHAct injury and illness system specifies a procedure for calculating the frequency of occurrence of occupational injuries and illnesses and an index of their severity. These can be used by companies to monitor their health and safety performance. National data by major industrial categories are compiled by the U.S. Bureau of Labor Statistics annually and can serve as a basis of comparison of individual company performance within an industry. Thus, a company can determine whether its injury rate is better or worse than that of other companies in its industry. This industrywide injury information is available on the OSHA website (http://www.osha.gov). The OSHA system uses the following formula in determining company annual injury and illness incidence. The total number of recordable injuries and illnesses is multiplied by 200,000 and then divided by the number of hours worked by the company employees. This gives an injury frequency per 200,000 hours of work, the equivalent of 100 employees working full time for one year (injury incidence). These measures can be compared to an industry average.

Incidence = (number of recordable injuries and illnesses × 200,000) / (number of hours worked by company employees)

where the number of recordable injuries and illnesses is taken from the OSHA 300 daily log of injuries, and the number of hours worked by employees is taken from payroll records and reports prepared for the Department of Labor or the Social Security Administration.

It is also possible to determine the severity of company injuries. Two methods are typically used. In the first, the total number of days lost due to injuries is compiled from the OSHA 300 daily log and divided by the total number of injuries recorded on the OSHA 300 daily log. This gives an average number of days lost per injury. In the second, the total number of days lost is multiplied by 200,000 and then divided by the number of hours worked by the company employees. This gives a severity index per 200,000 hours of work. These measures can also be compared to an industry average.

Injury incidence and severity information can be used by a company to monitor its injury and illness performance over the years to examine improvement and the effectiveness of health and safety interventions. Such information provides the basis for making corrections in the company's approach to health and safety and can serve as the basis of rewarding managers and workers for good performance. However, it must be understood that injury statistics give only a crude indicator of company safety performance and an even cruder indicator of individual manager or worker performance. This information can be used to compare company safety performance with the industry average. Because injuries are rare events, they do not always reflect the sum total of daily performance of company employees and managers. Thus, while they are an accurate measure of overall company safety performance, they are an insensitive measure at the individual and departmental levels. Some experts feel that more basic information has to be collected to provide the basis for directing health and safety efforts. One proposed measure is to use first-aid reports from industrial clinics. These provide information on more frequent events than the injuries required to be reported by the OSHAct. It is thought that these occurrences can provide insights into patterns of hazards and/or behaviors that may lead to the more serious injuries and that their greater number provides a larger statistical base for determining accident potential.
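As a sketch of the arithmetic above (the function names are ours, not OSHA's): the incidence rate and the hours-based severity index both normalize by 200,000 hours, that is, 100 full-time employees working 2,000 hours each per year.

```python
def incidence_rate(recordable_cases, hours_worked):
    """Recordable injuries and illnesses per 200,000 hours worked
    (the equivalent of 100 full-time employees for one year)."""
    return recordable_cases * 200_000 / hours_worked

def severity_rate(lost_workdays, hours_worked):
    """Second severity measure in the text: lost workdays per 200,000 hours."""
    return lost_workdays * 200_000 / hours_worked

def avg_days_lost_per_injury(lost_workdays, recordable_cases):
    """First severity measure in the text: average days lost per injury."""
    return lost_workdays / recordable_cases

# Hypothetical plant: 150 employees at ~2,000 hours each, with 9 recordable
# cases and 63 lost workdays taken from the OSHA 300 daily log.
hours = 150 * 2_000  # 300,000 hours, from payroll records
print(incidence_rate(9, hours))          # 6.0 cases per 200,000 hours
print(severity_rate(63, hours))          # 42.0 lost days per 200,000 hours
print(avg_days_lost_per_injury(63, 9))   # 7.0 days per case
```

Either rate can then be compared against the Bureau of Labor Statistics average for the company's industry to judge relative performance.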
8.3. Incident Reporting
Another approach is for a company to keep track of all accidents, whether an illness or injury is involved or not. Thus, property damage accidents without illness or injury would be recorded, as would near accidents and incidents that almost produced damage or injury. The proponents of this system feel that a large database can be established for determining accident-causation factors. As with the first-aid reports, the large size of the database is the most salient feature of this approach. A major difficulty in both systems is the lack of uniformity in recording and reporting the events of interest. The method of recording is much more diffuse because the nature of the events will differ substantially from illnesses or injuries, making their description in a systematic or comparative way difficult. This critique is aimed not at condemning these approaches but at indicating how difficult they are to define and implement. These systems provide a larger base of incidents than the limited occurrences in injury-recording systems. The main problem is in organizing them into a meaningful pattern. A more fruitful approach than looking at these after-the-fact occurrences may be to look at the conditions that can precipitate injuries, that is, hazards. They can provide a large body of information for a statistical base and can also be organized into meaningful patterns.
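One way to read the uniformity problem above: if near misses, property damage, and first-aid cases are recorded with the same descriptive fields, the larger database can be organized into patterns by agent or location. A minimal sketch follows; the record fields and the sample data are invented for illustration, not drawn from any standard form.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Incident:
    # Uniform fields are the point: every event type gets the same
    # description scheme so the records can be compared and aggregated.
    kind: str      # "near miss", "property damage", "first aid", "injury"
    agent: str     # e.g. "forklift", "press", "solvent"
    location: str

log = [
    Incident("near miss", "forklift", "shipping dock"),
    Incident("property damage", "forklift", "shipping dock"),
    Incident("first aid", "press", "stamping"),
    Incident("near miss", "forklift", "shipping dock"),
]

# Organize the database into a meaningful pattern: count events by agent.
by_agent = Counter(i.agent for i in log)
print(by_agent.most_common(1))  # [('forklift', 3)] -> a hazard worth inspecting
```

The same counting can be done by location or by kind, which is how a large base of minor events can point at conditions likely to precipitate the more serious injuries.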
hazard control. Then we must rely on the knowledge, skills, and good sense of the employee and/or the person breaching the barrier. These human factors controls will be discussed below. Containment is a form of barrier guard that is used primarily with very dangerous chemical and physical hazards. An example is the ionizing radiation from a nuclear reactor. This radiation at the core of the reactor is restrained from leaving the reactor by lead-lined walls, but if leakage should occur through the walls, a back-up barrier contains the leakage. In the case of a closed system, the employee never comes in contact with the source (such as the reactor core) of the hazard. The system is designed through automation to protect the employee from the hazard source. Many chemical plants use the concept of a closed system of containment. The only time an employee would contact these specific deadly hazards would be in the case of a disaster in which the containment devices failed.

Another form of barrier control that looks something like a secondary hazard control is a guard, which is used to cover moving parts that are accessible to employees, for example, in-running nip points on a machine. Such guards are most often fixed and cannot be removed except for maintenance. Sometimes the guard needs to be moved to access the product. For example, when a power press is activated, there is a hazard at the point of operation. When the ram is activated, guards are engaged that prohibit an employee's contact with the die. When the ram is at rest, the guard can be lifted to access the product. If the guard is lifted, an interlock prohibits the ram from being activated. In this situation, there is a barrier to keep the employee from the area of the hazard only when the hazard is present. The guard allows access to the area of the hazard for loading, unloading, and other
job operations that can be carried out without activation. But when the machine energy is activated, the guard moves into place to block the employee from access to the site of the action. In the case of the robot, the hazard area is quite large and a perimeter barrier is used; but in the case of a mechanical press, the hazard area is limited to the point of operation, which is quite small.

Yet another engineering control that is important for dealing with workplace hazards is the active removal of the hazard before it contacts the employee during the work process. An example is a local scavenger ventilation system that sucks the fumes produced by an operation such as spot welding or laser surgery away from the employees. This exhausts the fumes into the air outside of the plant (surgery room) and away from the employees. The ventilation systems must comply with federal, state, and local regulations in design and in the level of emissions into the environment. Thus, the fumes may need to be scrubbed clean by a filter before being released into the open environment. A related ventilation approach is to dilute the extent of employee exposure to airborne contaminants by bringing in more fresh air from outside the plant on a regular basis. The fresh air dilutes the concentration of the contaminant to which the employee is exposed to a level that is below the threshold of dangerous exposure. The effectiveness of this approach is verified by measuring the ambient air level of contamination and employee exposure levels on a regular basis. When new materials or chemicals are introduced into the work process or when other new airborne exposures are introduced into the plant, the adequacy of the ventilation dilution approach to provide safe levels of exposure(s) must be reverified. (See Hagopian and Bastress 1976 for recommendations on ventilation guidelines.)

When guarding or removal systems (e.g., saw guards, scavenger and area ventilation) cannot provide adequate employee protection, then personal protective equipment (PPE) must be worn by the employees (safety glasses, respirator). Because it relies on compliance by the employees, this is not a preferred method of control. A cardinal rule of safety and health engineering is that the primary method of controlling hazards is through engineering controls. Human factors controls are to be used primarily when engineering controls are not practical, feasible, or cost effective, or cannot by themselves control the hazard. It is recognized that human factors controls are often necessary as adjuncts (supplements) to engineering controls and in many instances are the only feasible and effective controls.
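The power-press interlock described above (the ram can be activated only while the guard is engaged, and the guard cannot be lifted while the ram is active) can be sketched as a small state model. The class and method names are illustrative only, not drawn from any real press-controller API.

```python
class PressInterlock:
    """Minimal model of the interlocked guard described in the text:
    the ram cannot be activated unless the guard is in place, and
    the guard stays locked while the ram is active."""

    def __init__(self):
        self.guard_engaged = True
        self.ram_active = False

    def activate_ram(self):
        if not self.guard_engaged:
            return False           # interlock prohibits activation
        self.ram_active = True
        return True

    def stop_ram(self):
        self.ram_active = False

    def lift_guard(self):
        if self.ram_active:
            return False           # barrier stays in place while hazard present
        self.guard_engaged = False
        return True

    def lower_guard(self):
        self.guard_engaged = True

press = PressInterlock()
press.lift_guard()                      # load/unload while the ram is at rest
assert press.activate_ram() is False    # interlock blocks the stroke
press.lower_guard()
assert press.activate_ram() is True     # guard in place, stroke allowed
assert press.lift_guard() is False      # guard locked during the stroke
```

The design point is that the barrier exists exactly when the hazard exists: access for loading and unloading is free only while the energy source is at rest.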
9.2. Human Factors Controls
In the traditional scheme of hazard control, there are two elements of human factors considerations for controlling hazards: warning and training. These can also be conceptualized as informing employees about hazards and promoting safe and healthful employee behavior. Cohen and Colligan (1998) conducted a literature review of safety training effectiveness studies and found that occupational safety and health training was effective in reducing employee hazard risks and injuries.
9.2.1. Informing
Informing employees about workplace hazards has three aspects: the right to know, warnings, and instructions. Regarding the right to know, federal safety and health regulations and many state and local regulations (ordinances) specify that an employer has the obligation to inform employees of hazardous workplace exposures to chemicals, materials, or physical agents that are known to cause harm. The local reporting requirements vary, and employers must be aware of the reporting requirements in the areas where they have facilities. Generally, an employer must provide information on the name of the hazard, its potential health effects, exposure levels that produce adverse health effects, and the typical kinds of exposures encountered in the plant. In addition, if employees are exposed to a toxic agent, information about first aid and treatment should be available. For each chemical, material, or physical agent classified as toxic by OSHA, employers are required to maintain a standard data sheet that provides detailed information on its toxicity, control measures, and standard operating procedures (SOPs) for using the product. A list of hazardous chemicals, materials, and physical agents is available from your local OSHA office or the OSHA website (http://www.osha.gov). These standard data sheets (some are referred to as material safety data sheets [MSDS]) must be supplied to purchasers by the manufacturer who sells the product. These data sheets must be shared by employers with employees who are exposed to the specific hazardous products, and must be available at the plant (location) as an information resource in case of an exposure or emergency. The motivation behind the right-to-know concept is that employees have a basic right to knowledge about their workplace exposures and that informed employees will make better choices and use better judgment when they know they are working with hazardous materials.

Warnings are used to convey the message of extreme danger. They are designed to catch the attention of the employee, inform the employee of a hazard, and instruct him or her in how to avoid the hazard. The OSHA regulations require that workplace warnings meet the appropriate ANSI standards, including Z35.1-1972, specifications for accident prevention signs; Z35.4-1973, specifications for informational signs complementary to ANSI Z35.1-1972; and ANSI Z53.1-1971, safety color code for marking physical hazards. These ANSI standards were revised in 1991 as Z535.1 through Z535.4. Warnings
are primarily visual but can also be auditory, as in the case of a fire alarm. Warnings use sensory techniques that capture the attention of the employee. For instance, the use of the color red has a cultural identification with danger. The use of loud, discontinuous noise is culturally associated with emergency situations and can serve as a warning. After catching attention, the warning must provide information about the nature of the hazard: what is the hazard, and what will it do to you? This provides the employee with an opportunity to assess the risk of ignoring the warning. Finally, the warning should provide some information about specific actions to take to avoid the hazard, such as "Stay clear of the boom," "Stand back 50 feet from the crane," or "Stay away from this area." Developing good warnings requires following the ANSI standards, using the results of current scientific studies, and exercising good judgment. Lehto and Miller (1986) wrote a book on warnings, and Lehto and Papastavrou (1993) define critical issues in the use and application of warnings. Laughery and Wogalter (1997) define the human factors aspects of warnings and risk perception and considerations for designing warnings. Peters (1997) discusses the critical aspects of technical communications that need to be considered from both human factors and legal perspectives. Considerations such as the level of employees' word comprehension, the placement of the warning, environmental distortions, wording of instructions, and employee sensory overload, just to name a few, must be taken into account for proper warning design and use. Even when good warnings are designed, their ability to influence employee behavior varies widely. Even so, the regulations require their use and they do provide the employee an opportunity to make a choice. Warnings should never be used in place of engineering controls. Warnings always serve as an adjunct to other means of hazard control.

Instructions provide direction to employees that will help them to avoid or deal more effectively with hazards. They are the behavioral model that can be followed to ensure safety. The basis of good instructions is the job analysis, which provides detailed information on the job tasks, environment, tools, and materials used. The job analysis will identify high-risk situations. Based on verification of the information in the job analysis, a set of instructions on how to avoid hazardous situations can be developed. The implementation of such instructions as employee behavior will be covered in the next section under training and safe behavior improvement.
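The three informational jobs a warning must do, as described above (catch attention with a signal word, state the nature of the hazard and its consequence, and give a specific avoidance action), can be sketched as a tiny composition helper. The function and parameter names are hypothetical, and a real sign must follow the ANSI Z535 layout and color rules rather than this text-only simplification.

```python
def compose_warning(signal_word, hazard, consequence, avoidance):
    """Assemble the three parts the text describes: attention (signal word),
    nature of the hazard plus what it will do to you, and how to avoid it.
    ANSI Z535 formatting details (color, panels, symbols) are omitted."""
    return f"{signal_word}: {hazard}. {consequence}. {avoidance}."

# One of the avoidance examples from the text, used as sample content.
msg = compose_warning(
    "DANGER",
    "Crane in operation overhead",
    "Falling loads can cause death or serious injury",
    "Stand back 50 feet from the crane",
)
print(msg)
```

The value of the structure is that a reader can assess the risk (consequence) before deciding whether to follow the instruction (avoidance), which is exactly the choice the text says a warning must make possible.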
9.2.2. Promoting Safe and Healthful Behavior
There are four basic human factors approaches that can be used in concert to influence employee behavior to control workplace hazards:

1. Applying methods of workplace and job design to provide working situations that capitalize on worker skills
2. Designing organizational structures that encourage healthy and safe working behavior
3. Training workers in the recognition of hazards and proper work behavior(s) for dealing with these hazards
4. Improving worker health and safety behavior through work practices improvement

Each of these approaches is based on certain principles that can enhance effective safety performance.
9.2.3. Workplace and Job Design
The sensory environment in which job tasks are carried out influences workers' perceptual capabilities to detect hazards and respond to them. Being able to see or smell a hazard is an important prerequisite in dealing with it; therefore, workplaces have to provide a proper sensory environment for hazard detection. This means proper illumination and noise control and adequate ventilation. There is some evidence that appropriate illumination levels can produce significant reductions in accident rates (McCormick 1976). The environment can influence a worker's ability to perceive visual and auditory warnings such as signs or signals. To ensure the effectiveness of warnings, they should be highlighted. For visual signals, use the colors defined in the ANSI standard (Z535.1, ANSI 1991) and heightened brightness. For auditory signals, use changes in loudness, frequency, pitch, and phasing. Work environments that are typically very loud and do not afford normal conversation can limit the extent of information exchange and may even increase the risk of occupational injury (Barreto et al. 1997). In such environments, visual signs are a preferred method for providing safety information. However, in most situations of extreme danger, an auditory warning signal is preferred because it attracts attention more quickly and thus provides for a quicker worker response. In general, warning signals should quickly attract attention, be easy to interpret, and provide information about the nature of the hazard.

Proper machinery layout, use, and design should be a part of good safety. Work areas should be designed to allow for traffic flow in a structured manner in terms of the type of traffic, the volume of traffic, and the direction of flow. The traffic flow process should support the natural progression
of product manufacture and/or assembly. This should eliminate unnecessary traffic and minimize the complexity and volume of traffic. There should be clearly delineated paths for traffic to use and signs giving directions on appropriate traffic patterns and flow. Work areas should be designed to provide workers with room to move about in performing tasks without having to assume awkward postures or come into inadvertent contact with machinery. Task-analysis procedures can determine the most economical and safest product-movement patterns and should serve as the primary basis for determining the layout of machinery, work areas, traffic flow, and storage for each workstation.

Equipment must conform to principles of proper engineering design so that the controls that activate the machine, the displays that provide feedback of machine action, and the safeguards to protect workers from the action of the machine are compliant with worker skills and expectations. The action of the machine must be compliant with the action of the controls in temporal, spatial, and force characteristics. The layout of controls on a machine is very important for proper machinery operation, especially in an emergency. In general, controls can be arranged on the basis of (1) their sequence of use, (2) common functions, (3) frequency of use, and (4) relative importance. Any arrangement should take into consideration (1) the ease of access, (2) the ease of discrimination, and (3) safety considerations such as accidental activation. The use of a sequence arrangement of controls is often preferred because it ensures smooth, continuous movements throughout the work operation. Generally, to enhance spatial compliance, the pattern of use of controls should sequence from left to right and from top to bottom. Sometimes controls are more effective when they are grouped by common functions. Often controls are clustered by common functions that can be used in sequence, so that a combination of approaches is used. To prevent unintentional activation of controls, the following steps can be taken: (1) recess the control, (2) isolate the control to an area on the control panel where it will be hard to trip unintentionally, (3) provide protective coverings over the control, (4) provide lock-out of the control so that it cannot be tripped unless unlocked, (5) increase the force necessary to trip the control so that extra effort is necessary, and/or (6) require a specific sequence of control actions such that one unintentional action does not activate the machinery.

A major deficiency in machinery design is the lack of adequate feedback to the machine operator about the machine action, especially at the point of operation. Such feedback is often difficult to provide because there typically are no sensors at the point of operation (or other areas) to determine when such action has taken place. However, an operator should have some information about the results of the actuation of controls to be able to perform effectively. Operators may commit unsafe behaviors to gain some feedback about the machine's performance as the machine is operating, and that may put them in contact with the point of operation. To avoid this, machinery design should include feedback of operation. The more closely this feedback reflects the timing and action of the machinery, the greater the amount of control that can be exercised by the operator. The feedback should be displayed in a convenient location for the operator at a distance that allows for easy readability.

Work task design is a consideration for controlling safety hazards. Tasks that cause employees to become fatigued or stressed can contribute to exposures and accidents. Task design has to be based on considerations that will enhance employee attention and motivation. Thus, work tasks should be meaningful in terms of the breadth of content of the work that will eliminate boredom and enhance the employee's
mental state. Work tasks should be under the control of the employees, and mach
inepaced operations should be avoided. Tasks should not be repetitious. This las
t requirement is sometimes hard to achieve. When work tasks have to be repeated
often, providing the employee with some control over the pacing of the task redu
ces stress associated with such repetition. Because boredom is also a considerat
ion in repetitious tasks, employee attention can be enhanced by providing freque
nt breaks from the repetitious activity to do alternative tasks or take a rest.
Alternative tasks enlarge the job and enhance the breadth of work content and em
ployee skills. The question of the most appropriate work schedule is a dif cult ma
tter. There is evidence that rotating-shift systems produce more occupational in
juries than xed-shift schedules (Smith et al. 1982). This implies that xed schedul
es are more advantageous for injury control. However, for many younger workers (
without seniority and thus often relegated to afternoon and night shifts) this m
ay produce psychosocial problems related to family responsibilities and entertai
nment needs, and therefore lead to stress. Because stress can increase illness a
nd injury potential, the gain from the xedshift systems may be negated by stress.
This suggests that one fruitful approach may be to go to xed shifts with volunte
ers working the nonday schedules. Such an approach provides enhanced biological
conditions and fewer psychosocial problems. Overtime work should be avoided beca
use of fatigue and stress considerations. It is preferable to have a second shif
t of workers than to overtax the physical and psychological capabilities of empl
oyees. Since a second shift may not be economically feasible, some consideration
s need to be given for determining appropriate amounts of overtime. This is a ju
dgmental determination since there is inadequate research evidence on which to b
ase a de nitive answer. It is reasonable that job tasks that
nce. Decisions about work task organization, work methods, and assignments should be delegated to the lowest level in the organization at which they can be logically made; that is, they should be made at the point of action. This approach has a number of benefits. For example, this level in the organization has the greatest knowledge of the work processes and operations and of their associated hazards. Such knowledge can lead to better decisions about hazard control. Diverse input to decision making, from lower levels up to higher levels, makes for better decisions because there are more options to work with. Additionally, this spreading of responsibility by having people participate in the inputs to decision making promotes employee and line-supervisor consideration of safety and health issues. Such participation has been shown to be a motivator and to enhance job satisfaction (French 1963; Korunka et al. 1993; Lawler 1986). It also gives employees greater control over their work tasks and greater acceptance of the decisions concerning hazard control, due to the shared responsibility. All of this leads to decreased stress and increased compliance with safe behavior(s).

Organizations have an obligation to increase company health and safety by using modern personnel practices. These include appropriate selection and placement approaches, skills training, promotion practices, compensation packages, and employee-assistance programs. For safety purposes, the matching of employee skills and needs to job task requirements is an important consideration. It is inappropriate to place employees at job tasks for which they lack the proper skills or capacity. This will increase illness and injury risk and job stress. Selection procedures must be established to obtain a properly skilled workforce. When a skilled worker is not available, training must be undertaken to raise skill levels to the proper level before a task is undertaken. This assumes that the employer has carried out a job task analysis and knows the job skills that are required. It also assumes that the employer has devised a way to test for the required skills. Once these two conditions have been met, the employer can improve the fit between employee skills and job task requirements through proper selection, placement, and training. Many union contracts require that employees with seniority be given first consideration for promotions. Such consideration is in keeping with this approach as long as the worker has the appropriate skills to do the job task or the aptitude to be trained to attain the necessary skills. If a person does not have the necessary knowledge and skills or cannot be adequately trained, there is good reason to exclude that individual from a job regardless of seniority.
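The matching of employee skills to job task requirements described above can be sketched as a simple set comparison. This is only an illustrative sketch: the job name, skill labels, and the `placement_decision` helper are assumptions for the example, not from the source.

```python
# Illustrative sketch: compare an employee's skills against a job task's
# required skills. Job names and skill labels are hypothetical examples.

def missing_skills(required: set[str], employee: set[str]) -> set[str]:
    """Return the required skills the employee still lacks."""
    return required - employee

def placement_decision(required: set[str], employee: set[str]) -> str:
    """Place the worker if qualified; otherwise flag the training need."""
    gap = missing_skills(required, employee)
    if not gap:
        return "place"
    return "train first: " + ", ".join(sorted(gap))

press_operator = {"lockout/tagout", "guard inspection", "press operation"}
worker = {"press operation", "guard inspection"}

print(placement_decision(press_operator, worker))
# -> train first: lockout/tagout
```

The same check supports the seniority point above: a senior candidate with an empty skill gap is placed directly, while one with a gap is trained first rather than excluded outright.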
9.3.1. Safety Training
Training workers to improve their skills and recognize hazardous conditions is a primary means for reducing exposures and accidents. Cohen and Colligan (1998) found that safety and health training was effective in reducing employee risk. Training can be defined as a systematic acquisition of knowledge, concepts, or skills that can lead to improved performance or behavior. Eckstrand (1964) defined seven basic steps in training: (1) defining the training objectives, (2) developing criteria measures for evaluating the training process and outcomes, (3) developing or deriving the content and materials to be learned, (4) designing the techniques to be used to teach the content, (5) integrating the learners and the training program to achieve learning, (6) evaluating the extent of learning, and (7) modifying the training process to improve learner comprehension and retention of the content. These steps provide the foundation for the application of basic guidelines that can be used for designing the training content and integrating the content and the learner.

In defining training objectives, two levels can be established: global and specific. The global objectives are the end goals that are to be met by the training program. For instance, a global objective might be the reduction of eye injuries by 50%. The specific objectives are those that are particular to each segment of the training program, including the achievements to be reached by the completion of each segment. A specific objective might be the ability of all employees to recognize eye-injury hazards by the end of the hazard-education segment. A basis for defining training objectives is the assessment of company safety problem areas. This can be done using hazard-identification methods such as injury statistics, inspections, and hazard surveys. Problems should be identified, ranked in importance, and then used to define objectives.

To determine the success of the training process, criteria for evaluation need to be established. Hazard-identification measures can be used to determine overall effectiveness. Thus, global objectives can be verified by determining a reduction in injury incidence (such as eye injuries) or the elimination of a substantial number of eye hazards. However, it is necessary to have more sensitive measures of evaluation that can be used during the course of training to assess the effectiveness of specific aspects of the training program. This helps to determine the need to redirect specific training segments if they prove to be ineffective. Specific objectives can be examined through the use of evaluation tools. For instance, to evaluate the ability of workers to recognize eye hazards, a written or oral examination can be used. Hazards that are not recognized can be emphasized in subsequent training and retraining.

The content of the training program should be developed based on the learners' knowledge level, current skills, and aptitudes. The training content should be flexible enough to allow for individual differences in aptitudes, skills, and knowledge, as well as for individualized rates of learning. The training content should allow all learners to achieve a minimally acceptable level of health and safety knowledge and competence by the end of training. The specifics of the content deal with the skills to be learned and the hazards to be recognized and controlled.

There are various techniques that can be used to train workers. Traditionally, on-the-job training (OJT) has been emphasized to teach workers job skills and health and safety considerations. The effectiveness of such training will be influenced by the skill of the supervisor or lead worker in imparting knowledge and technique, as well as by his or her motivation to successfully train the worker. First-line supervisors and lead workers are not educated to be trainers and may lack the skills and motivation to do the best job. Therefore, OJT has not always been successful as the sole safety training method. Since the purpose of a safety training program is to impart knowledge and teach skills, it is important to provide both classroom experiences to gain knowledge and OJT to attain skills. Classroom training is used to teach concepts and improve knowledge and should be carried out in small groups (not to exceed 15 employees). A small group allows for the type of instructor-student interaction needed to monitor class progress, provide proper motivation, and determine each learner's comprehension level. Classroom training should be given in an area free of distractions to allow
The definition of hazardous work practices
The definition of new work practices to reduce the hazards
Training employees in the desired work practices
Testing the new work practices in the job setting
Installing the new work practices using motivators
Monitoring the effectiveness of the new work practices
Redefining the new work practices
Maintaining proper employee habits regarding work practices
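The step sequence above can be sketched as a simple progress tracker for a single work practice. This is only an illustrative sketch: the stage names paraphrase the list, and the tracker class itself is an assumption for the example, not something the source prescribes.

```python
# Illustrative sketch: track one work practice through the program stages
# listed above. The stage names paraphrase the text; the class is hypothetical.

STAGES = [
    "define hazardous practices",
    "define new practices",
    "train employees",
    "test in job setting",
    "install with motivators",
    "monitor effectiveness",
    "redefine practices",
    "maintain habits",
]

class WorkPractice:
    def __init__(self, name: str):
        self.name = name
        self.stage = 0  # index into STAGES

    def advance(self) -> str:
        """Move to the next stage; the final stage (maintaining habits) repeats."""
        if self.stage < len(STAGES) - 1:
            self.stage += 1
        return STAGES[self.stage]

wp = WorkPractice("two-hand press actuation")
print(wp.advance())
# -> define new practices
```

The point of the final clamped stage is that the program is a cycle, not a one-shot project: once a practice is installed, monitoring and habit maintenance continue indefinitely.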
In defining hazardous work practices, there are a number of sources of information that should be examined. Injury and accident reports, such as the OSHA 301 Form, provide information about the circumstances surrounding an injury. Often employee or management behaviors that contributed to the injury can be identified. Employees are a good source of information about workplace hazards. They can be asked to identify critical behaviors that may be important as hazard sources or hazard controls. First-line supervisors are also a good source of information because they are constantly observing employee behavior. All of these sources should be examined; however, the most important source of information is directly observing employees at work.

There are a number of considerations when observing employee work behaviors. First, observation must be an organized proposition. Before undertaking observations, it is useful to interview employees and first-line supervisors and examine injury records to develop a checklist of significant behaviors to be observed. This should include hazardous behaviors as well as those that are used to enhance engineering control or directly control hazards. The checklist should identify the job task being observed, the types of behaviors being examined, their frequency of occurrence, and a time frame of their occurrence. The observations should be made at random times so that employees do not change their natural modes of behavior when observed. The time of observation should be long enough for a complete cycle of behaviors associated with the work task(s) of interest to be examined. Two or three repetitions of this cycle should be examined to determine consistency in behavior within an employee and among employees. Random times of recording behavior are most effective in obtaining accurate indications of typical behavior. The recorded behaviors can be analyzed by the frequency and pattern of their occurrence as well as by their significance for hazard control. Hot spots can be identified.
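The frequency analysis just described can be sketched as a tally over checklist records. This is only an illustrative sketch: the record layout, job tasks, and behavior labels are assumptions for the example, not from the source.

```python
# Illustrative sketch: tally observed behaviors from checklist records and
# rank hazardous "hot spots" by frequency. Field names and sample data are
# hypothetical examples.
from collections import Counter

# Each record: (job task observed, behavior observed, hazardous?)
observations = [
    ("press loading", "reaching past guard", True),
    ("press loading", "reaching past guard", True),
    ("press loading", "two-hand actuation", False),
    ("material handling", "lifting with bent back", True),
]

hazard_counts = Counter(
    behavior for _, behavior, hazardous in observations if hazardous
)

# Most frequent hazardous behaviors first -- candidate hot spots.
for behavior, count in hazard_counts.most_common():
    print(behavior, count)
```

In practice the same tally would be kept per job task and weighted by hazard significance, so that a rare but severe behavior can still rank as a hot spot.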
All behaviors need to be grouped into categories with regard to hazard-control efforts and then prioritized.

The next step is to define the proper work practices that need to be instilled to control the hazardous procedures observed. Sometimes the observations provide the basis for the good procedures that you want to implement. Often, however, new procedures need to be developed. There are four classes of work practices that should be considered: (1) hazard recognition and reporting, (2) housekeeping, (3) doing work tasks safely, and (4) emergency procedures. The recognition of workplace hazards requires that the employee be cognizant of hazardous conditions through training and education and that employees actively watch for these conditions. Knowledge is useless unless it is applied. These work practices ensure the application of knowledge and the reporting of observed hazards to fellow workers and supervisors. Housekeeping is a significant consideration for two reasons. A clean working environment makes it easier to observe hazards. It is also a more motivating situation that enhances the use of other work practices. The most critical set of work practices deals with carrying out work tasks safely through correct skill use and hazard-avoidance behaviors. This is where the action is between the employee and the environment, and it must receive emphasis in instilling proper work practices. Situations occur that are extremely hazardous and require the employee to get out of the work area or stay clear of the work area. These work practices are often life-saving procedures that need special consideration because they are used only under highly stressful conditions, such as emergencies.

Each of these areas needs to have work practices spelled out. These should be statements of the desired behaviors, specified in concise, easily understandable language. Statements should typically be one sentence long and should never exceed three sentences. Details should be excluded unless they are critical to the proper application of the work practice. The desired work practices having been specified, employees should be given classroom and on-the-job training to teach them the work practices. The training approaches discussed earlier should be applied. This includes classroom training as well as an opportunity for employees to test the work practices in the work setting. To ensure the sustained use of the learned work practices, it is important to motivate workers through the use of incentives. There are many types of incentives, including money, tokens, privileges, social rewards, recognition, feedback, participation, and any other factors that motivate employees, such as enriched job tasks. Positive incentives should be used to develop consistent work-practice patterns. Research has demonstrated that the use of financial rewards in the form of an increased hourly wage can have a beneficial effect on employee safety behaviors and reduce hazard exposure.

One study (Smith et al. 1983; Hopkins et al. 1986) evaluated the use of behavioral approaches for promoting employee use of safe work practices to reduce their exposure to styrene. The study was conducted in three plants and had three components: (1) the development and validation of safe work practices for working with styrene in reinforced-fiberglass operations, (2) the development and implementation of an employee training program for learning the safe work practices, and (3) the development and testing of a motivational technique for enhancing continued employee use of the safe work practices. Forty-three work practices were extracted from information obtained from a literature search, a walkthrough plant survey, interviews with employees and plant managers, and input from recognized experts in industrial safety and hygiene. The work practices were pilot tested for their efficacy in reducing styrene exposures. A majority of the work practices were found to be ineffective in reducing styrene exposures, and only those that were effective were incorporated into a worker training program. The worker training program consisted of classroom instruction followed up with on-the-job application of the material learned in class. Nine videotapes were made to demonstrate the use of the safe work practices. Basic information about each work practice and its usefulness was presented, followed by a demonstration of how to perform the work practice. Employees observed one videotape
good plant safety performance. The effectiveness of safety and health staff was greater the higher they were in the management structure. The National Safety Council (1974) has suggested that plants with fewer than 500 employees and a low to moderate hazard level can have an effective program with a part-time safety professional. Larger plants, or those with more hazards, need more safety staff. Along with funds for staffing, successful programs also make funds available for hazard abatement in a timely fashion. Thus, segregated funds are budgeted to be drawn upon when needed. This gives the safety program flexibility in meeting emergencies, when funds may be hard to get quickly from operating departments. An interesting fact about companies with successful safety programs is that they are typically efficient in their resource utilization, planning, budgeting, quality control, and other aspects of general operations, and they include safety programming and budgeting as just another component of their overall management program. They do not single safety out or make it special; instead, they integrate it into their operations to make it a natural part of daily work activities.

Organizational motivational practices will influence employee safety behavior. Research has demonstrated that organizations that exercise humanistic management approaches have better safety performance (Cohen 1977; Smith et al. 1978; Cleveland et al. 1979). These approaches are sensitive to employee needs and thus encourage employee involvement. Such involvement leads to greater awareness and higher motivation levels conducive to proper employee behavior. Organizations that use punitive motivational techniques for influencing safety behavior have poorer safety records than those using positive approaches. An important motivational factor is encouraging communication between the various levels of the organization (employees, supervisors, managers). Such communication increases participation in safety and builds employee and management commitment to safety goals and objectives. Often informal communication is a more potent motivator and provides more meaningful information for hazard control. An interesting research finding is that general promotional programs aimed at enhancing employee awareness and motivation, such as annual safety awards dinners and annual safety contests, are not very effective in influencing worker behavior or company safety performance (Smith et al. 1978). The major reason is that their relationship in time and subject matter (content) to actual plant hazards and safety considerations is so abstract that workers cannot translate the rewards into specific actions that need to be taken. It is hard to explain why these programs are so popular in industry despite being so ineffective. Their major selling points are that they are easy to implement and highly visible, whereas more meaningful approaches take more effort.

Another important consideration in employee motivation and improved safety behavior is training. Two general types of safety training are of central importance: skills training and training in hazard awareness. Training is a key component of any safety program because it is important to employee knowledge of workplace hazards and proper work practices and provides the skills necessary to use the knowledge and the work practices. Both formal and informal training seem to be effective in enhancing employee safety performance (Cohen and Colligan 1998). Formal training programs provide the knowledge and skills for safe work practices, while informal training by first-line supervisors and fellow employees maintains and sharpens learned skills.

All safety programs should have a formalized approach to hazard control. This often includes an inspection system to define workplace hazards, accident investigations, record keeping, a preventive maintenance program, a machine-guarding program, review of new purchases to ensure compliance with safety guidelines, and housekeeping requirements. All contribute to a safety climate that demonstrates to workers that safety is important. However, the effectiveness of specific aspects of such a formalized hazard-control approach has been questioned (Cohen 1977; Smith et al. 1978). For instance, formalized inspection programs have been shown to deal with only a small percentage of workplace hazards (Smith et al. 1971). In fact, Cohen (1977) indicated that more frequent informal inspections may be more effective than more formalized approaches. However, the significance of formalized hazard-control programs is that they establish the groundwork for other programs, such as work-practice improvement and training. In essence, they are the foundation for other safety approaches. They are also a source of positive motivation, demonstrating management interest in employees by providing a clean workplace free of physical hazards. Smith et al. (1978) have demonstrated that sound environmental conditions are a significant contribution to company safety performance and employee motivation.
11. PARTICIPATIVE APPROACHES TO RESPOND TO THE EMERGING HAZARDS OF NEW TECHNOLOGIES
Hazard control for new technologies requires a process that is dynamic enough to deal with the increasing rate of hazards caused by technological change. Research on successful safety program performance in plants with high hazard potential has shown a number of factors that contribute to success (Cohen 1977; Smith et al. 1978). These are: having a formal, structured program so people know where to go for help; management commitment and involvement in the program; good communications between supervisors and workers; and worker involvement in safety and health activities. These considerations are important because they provide a framework for cooperation between management and employees in identifying and controlling hazards. These factors parallel the basic underlying pr
andards. From a safety point of view, one may wonder whether the implementation of ISO 9000 management systems can encourage the development of safer and healthier work environments. In Sweden, Karltun et al. (1998) examined the influences on working conditions following the implementation of ISO 9000 quality systems in six small and medium-sized companies. Improvements to the physical work environment triggered by the ISO implementation process were very few. There were improvements in housekeeping and production methods. Other positive aspects present in some of the companies included job enrichment and a better understanding of the employees' role and importance to production. However, the implementation of ISO 9000 was accompanied by increased physical strain, stress, and feelings of lower appreciation. According to Karltun and his colleagues (1998), improved working conditions could be triggered by the implementation of ISO quality management standards if additional goals, such as improved working conditions, are considered by top management and if a participative implementation process is used. Others have argued that quality management systems and environmental management systems can be designed to address occupational health and safety (Wettberg 1999; Martin 1999). A study by Eklund (1995) showed a relationship between ergonomics and quality in assembly work. Tasks that had ergonomic problems (e.g., physical and psychological demands) were also the tasks that had quality deficiencies. The evidence for the integration of quality management and occupational safety and health is weak. However, there is reason to believe that improved health and safety can be achieved in the context of the implementation of ISO 9000 management standards.
11.3. Social Democracy
One framework for addressing the health and safety issues of new technology is the social democratic approach practiced in Norway and Sweden (Emery and Thorsrud 1969; Gardell 1977). This approach is based on the concept that workers have a right to participate in decisions about their working conditions and how their jobs are undertaken. In Sweden, there are two central federal laws that establish the background for health and safety. One, similar to the U.S. Occupational Safety and Health Act, established agencies to develop and enforce standards as well as to conduct research. The second is the Law of Codetermination, which legislates the right of worker representatives to participate in decision making on all aspects of work. This law is effective because over 90% of the Swedish blue- and white-collar workforce belongs to a labor union, and the unions take the lead in representing the interests of the employees in matters pertaining to working conditions, including health and safety. The Scandinavian approach puts more emphasis on the quality of working life in achieving worker health and well-being. Thus, there is emphasis on ensuring that job design and technology implementation do not produce physical and psychological stress. This produces discussion and action when safety and health problems are first reported.
11.4. Hazard Survey
Organizational and job-design experts have long proposed that employee involvement in work enhances motivation and produces production and product-quality benefits (Lawler 1986). Smith and Beringer (1986) and Zimolong (1997) have recommended that employees be involved in safety programming and hazard recognition to promote safety motivation and awareness. An effective example of using this concept in health and safety is the hazard survey program (Smith 1973; Smith and Beringer 1986). Smith et al. (1971) showed that most occupational hazards were either transient or due to improper organizational or individual behavior. Such hazards are not likely to be observed during formal inspections by safety staff or compliance inspections by state or federal inspectors. The theory proposes that the way to keep on top of these transient and behavioral hazards is to have them identified on a continuous basis by the employees, as they occur, through employee participation.

One approach that gets employee involvement is the hazard survey. While inspection and illness/injury analysis systems can be expected to uncover a number of workplace hazards, they cannot define all of the hazards. Many hazards are dynamic and occur only infrequently. Thus, they may not be seen during an inspection or may not be reported as a causal factor in an illness or injury. Dealing with hazards that involve dynamically changing working conditions and/or worker behaviors requires a continuously operating hazard-identification system. The hazard survey is a cooperative program between employees and managers to identify and control hazards. Since the employee is in direct contact with hazards on a daily basis, it is logical to use employees' knowledge of hazards in their identification.
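One minimal way to structure such employee hazard reports is a small record routed from the department level upward. This is only an illustrative sketch: the severity scale, the escalation threshold, and all field names are assumptions for the example, not part of the hazard survey program as described.

```python
# Illustrative sketch: route employee hazard reports by severity so that
# serious hazards reach decision makers quickly. The 1-3 severity scale and
# the escalation rule are hypothetical examples.
from dataclasses import dataclass

@dataclass
class HazardReport:
    department: str
    description: str
    severity: int  # 1 = minor, 2 = serious, 3 = imminent danger

def route(report: HazardReport) -> str:
    """Handle minor hazards in the department; escalate the rest."""
    if report.severity >= 2:
        return "plant safety decision makers"
    return f"{report.department} supervisor"

r = HazardReport("assembly", "frayed hoist cable", severity=3)
print(route(r))
# -> plant safety decision makers
```

The design choice mirrors the text: departments remain the primary reporting unit, but the routing rule guarantees that serious hazards bypass slow traditional channels.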
The information gathered from employees can serve as the basis of a continuous hazard-identification system that can be used by management to control dynamic workplace hazards.

A central concept of this approach is that hazards exist in many forms: as fixed physical conditions, as changing physical conditions, as worker behaviors, and as an operational interaction that causes a mismatch between worker behavior and physical conditions (Smith 1973). This concept defines worker behavior as a critical component in the recognition and control of all of these hazards. Involving workers in hazard recognition sensitizes them to their work environment and acts as a motivator to use safe work behaviors. Such behaviors include using safe work procedures to reduce hazard potential, using compensatory behaviors when exposed to a known hazard, or using avoidance behaviors to keep away from known hazards. The hazard survey program also establishes communication between supervisors and employees about hazards.

The first step in a hazard survey program is to formalize the lines of communication. A primary purpose of this communication network is to get critical hazard information to decision makers as quickly as possible so that action can be taken to avert an exposure or accident. Traditional communication routes in most companies do not allow for quick communication between workers and decision makers, and thus serious hazards may not be corrected before an exposure or accident occurs. Each company has an established organizational structure that can be used to set up a formalized communication network. For instance, most companies are broken into departments or work units. These can serve as the primary segments within which workers report hazards. These hazards can be dealt with at the departmental level or communicated to higher-level decision makers for action. Once primary communication units are established, a process to communicate hazard information has to be established. This requires structure and rules. The structure of the program should be simple
pervisors and union representatives about specific hazards and worker perceptions. This give and take develops an understanding of the other person's perspective and concerns. It often generates good solutions, especially toward the end of the course, when an understanding of the course's technical material is integrated within the specific context of the plant. After the training, an ergonomics committee composed of top management, select line management, and select union stewards is established; it meets on a regular basis to discuss ergonomic problems and potential solutions. Employees with ergonomic problems can report them to a member of this committee, who typically tends to be a union steward. Semiannual retraining is given to the ergonomics committee on emerging issues that are generated by the kinds of problems being reported at the company. This approach has been extremely successful in reducing the extent of chronic trauma in electronic assembly plants in Wisconsin.
12.
CONCLUSIONS
Designing for successful occupational health and safety performance requires a s
ystematic approach. This includes understanding that the workplace is a system w
here changes in one element influence the other system components. It a
lso means that efforts to make improvements must
1188
PERFORMANCE IMPROVEMENT MANAGEMENT
be multifaceted and address all of the elements of the work system. Health and s
afety improvements begin with an understanding of the hazards, the evaluation of
injury and illness experience, the development of interventions, the implementa
tion of improvements, follow-up to evaluate the results of improvements, and con
tinuous efforts of evaluation and improvement. Good programming starts at the to
p of the company and includes all levels of the organizational structure. Employ
ee input and involvement are critical for success. Often there is a need for tec
hnical expertise when dealing with complex or new hazards. In the end, having ev
eryone in the company alert to health and safety issues should lead to improved
health and safety performance.
REFERENCES
American National Standards Institute (ANSI), Safety Color Code, Z535.1-1991, ANSI, New York.
Barreto, S. M., Swerdlow, A. J., Smith, P. G., and Higgins, C. D. (1997), A Nested Case-Control Study of Fatal Work-Related Injuries among Brazilian Steelworkers, Occupational and Environmental Medicine, Vol. 54, pp. 599–604.
Breslow, L., and Buell, P. (1960), Mortality from Coronary Heart Disease and Physical Activity of Work in California, Journal of Chronic Diseases, Vol. 11, pp. 615–626.
Bureau of Labor Statistics (BLS) (1978), Recordkeeping Requirements under the Occupational Safety and Health Act of 1970, U.S. Department of Labor, Washington, DC.
Bureau of Labor Statistics (BLS) (1998a), http://www.osha.gov/oshstats/bltable.html.
Bureau of Labor Statistics (BLS) (1998b), Occupational Injuries and Illnesses in US Industry, 1997, U.S. Department of Labor, Washington, DC.
Bureau of Labor Statistics (BLS) (1999), Fatal Workplace Injuries in 1997, U.S. Department of Labor, Washington, DC.
Century Research Corp. (1973), Are Some People Accident Prone?, Century Research Corp., Arlington, VA.
Centers for Disease Control (CDC) (1983), Leading Work-Related Diseases and Injuries – United States; Musculoskeletal Injuries, Morbidity and Mortality Weekly Report, Vol. 32, pp. 189–191.
Cleveland, R. J., Cohen, H. H., Smith, M. J., and Cohen, A. (1979), Safety Program Practices in Record-Holding Plants, DHEW (NIOSH) Publication No. 79-136, U.S. GPO, Washington, DC.
Cohen, A. (1977), Factors in Successful Occupational Safety Programs, Journal of Safety Research, Vol. 9, No. 4, pp. 168–178.
Cohen, A., and Colligan, M. J. (1998), Assessing Occupational Safety and Health Training: A Literature Review, National Institute for Occupational Safety and Health, Cincinnati.
Conard, R. (1983), Employee Work Practices, U.S. Department of Health and Human Services, National Institute for Occupational Safety and Health, Cincinnati.
Eckstrand, G. (1964), Current Status of the Technology of Training, AMRL Doc. Tech. Rpt. 64-86, U.S. Department of Defense, Washington, DC.
Eklund, J. A. E. (1995), Relationships between Ergonomics and Quality in Assembly Work, Applied Ergonomics, Vol. 26, No. 1, pp. 15–20.
Emery, F. E., and Thorsrud, E. (1969), The Form and Content of Industrial Democracy, Tavistock Institute, London.
Environmental Protection Agency (EPA) (1999), http://www.epa.gov/ocfo/plan/plan.htm.
French, J. (1963), The Social Environment and Mental Health, Journal of Social Issues, Vol. 19.
Gardell, B. (1977), Autonomy and Participation at Work, Human Relations, Vol. 30, pp. 515–533.
Hagglund, G. (1981), Approaches to Safety and Health Hazard Abatement, Labor Studies Journal, Vol. 6.
Hagopian, J. H., and Bastress, E. K. (1976), Recommended Industrial Ventilation Guidelines, U.S. Government Printing Office, Washington, DC.
Hopkins, B. L., Conard, R. J., and Smith, M. J. (1986), Effective and Reliable Behavioral Control Technology, American Industrial Hygiene Association Journal, Vol. 47, pp. 785–791.
International Labour Organization (ILO) (1998), Occupational injury and illness statistics, http://www.ilo.org/public/english/protection/safework/indexold.htm.
Iverson, R. D., and Erwin, P. J. (1997), Predicting Occupational Injury: The Role of Affectivity, Journal of Occupational and Organizational Psychology, Vol. 70, No. 2, pp. 113–128.
Kalimo, R., Lindstrom, K., and Smith, M. J. (1997), Psychosocial Approach in Occupational Health, in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 1059–1084.
Smith, M. J., Cohen, H. H., Cohen, A., and Cleveland, R. (1978), Characteristics of Successful Safety Programs, Journal of Safety Research, Vol. 10, pp. 5–15.
Smith, M. J., Colligan, M., and Tasto, D. (1982), Shift Work Health Effects for Food Processing Workers, Ergonomics, Vol. 25, pp. 133–144.
Smith, M., Anger, W. K., Hopkins, B., and Conrad, R. (1983), Behavioral-Psychological Approaches for Controlling Employee Chemical Exposures, in Proceedings of the Tenth World Congress on the Prevention of Occupational Accidents and Diseases, International Social Security Association, Geneva.
Smith, M. J., Karsh, B.-T., and Moro, F. B. (1999), A Review of Research on Interventions to Control Musculoskeletal Disorders, in Work-Related Musculoskeletal Disorders, J. Suokas and V. Rouhiainen, Eds., National Academy Press, Washington, DC, pp. 200–229.
Smith, T. J. (1999), Synergism of Ergonomics, Safety and Quality: A Behavioral Cybernetic Analysis, International Journal of Occupational Safety and Ergonomics, Vol. 5, No. 2, pp. 247–278.
United States Department of Health and Human Services (HHS) (1989), Promoting Health/Preventing Disease: Year 2000 Objectives for the Nation, HHS, Washington, DC.
Wettberg, W. (1999), Health and Environmental Conservation: Integration into a Quality Management System of a Building Service Contract Company, in Proceedings of the International Conference on TQM and Human Factors: Towards Successful Integration, Vol. 2, J. Axelsson, B. Bergman, and J. Eklund, Eds., Centre for Studies of Humans, Technology and Organization, Linköping, Sweden, pp. 108–113.
Zimolong, B. (1997), Occupational Risk Management, in Handbook of Human Factors and Ergonomics, G. Salvendy, Ed., John Wiley & Sons, New York, pp. 989–1020.
Zink, K. (1999), Human Factors and Business Excellence, in Proceedings of the International Conference on TQM and Human Factors: Towards Successful Integration, Vol. 1, J. Axelsson, B. Bergman, and J. Eklund, Eds., Centre for Studies of Humans, Technology and Organization, Linköping, Sweden, pp. 9–27.
ADDITIONAL READING
American National Standards Institute (ANSI), Method of Recording Basic Facts Relating to the Nature and Occurrence of Work Injuries, Z16.2-1969, ANSI, New York, 1969.
American National Standards Institute (ANSI), Safety Color Code for Marking Physical Hazards, Z53.1-1971, ANSI, New York, 1971.
American National Standards Institute (ANSI), Specifications for Accident Prevention Signs, Z35.1-1972, ANSI, New York, 1972.
American National Standards Institute (ANSI), Specifications for Informational Signs Complementary to ANSI Z35.1-1972, Accident Prevention Signs, Z35.4-1973, ANSI, New York, 1973.
American National Standards Institute (ANSI), Method of Recording and Measuring Work Injury Experience, Z16.1-1973, ANSI, New York, 1973.
American National Standards Institute (ANSI), Office Lighting, A132.1-1973, ANSI, New York, 1973.
American National Standards Institute (ANSI), Product Safety Signs and Labels, Z535.4-1991, ANSI, New York, 1991.
American National Standards Institute (ANSI), Environmental and Facility Safety Signs, Z535.2-1991, ANSI, New York, 1991.
American National Standards Institute (ANSI), Criteria for Safety Symbols, Z535.3-1991, ANSI, New York, 1991.
Centers for Disease Control (CDC), Noise Induced Loss of Hearing, Morbidity and Mortality Weekly Report, Vol. 35, 1986, pp. 185–188.
Cooper, C. L., and Marshall, J., Occupational Sources of Stress: A Review of the Literature Relating to Coronary Heart Disease and Mental Ill Health, Journal of Occupational Psychology, Vol. 49, 1976, pp. 11–28.
Gyllenhammar, P. G., People at Work, Addison-Wesley, Reading, MA, 1977.
Suokas, J., Quality of Safety Analysis, in Quality Management of Safety and Risk Analysis, Elsevier Science, Amsterdam, 1993, pp. 25–43.
APPENDIX Useful Web Information Sources
American Association of Occupational Health Nurses, http://www.aaohn.org
American Board of Industrial Hygiene, http://www.abih.org
American College of Occupational and Environmental Medicine, http://www.acoem.org
American Conference of Governmental Industrial Hygienists, http://www.acgih.org
American Industrial Hygiene Association, http://www.aiha.org
American Psychological Association, http://www.apa.org
Bureau of Labor Statistics, http://www.bls.gov
Centers for Disease Control and Prevention, http://www.cdc.gov
Department of Justice, http://www.usdoj.gov
Department of Labor, http://www.dol.gov
National Institute for Environmental Health Sciences, http://www.niehs.gov
National Institute for Occupational Safety and Health, http://www.niosh.gov
National Safety Council, http://www.nsc.org
Occupational Safety and Health Administration, http://www.osha.gov
HUMAN–COMPUTER INTERACTION
1.
OVERVIEW
The utilities of information technology are spreading into all walks of life, fr
om the use of self-standing personal computers and networking to the Internet and int
ranet. This technology has allowed for tremendous growth in Web-based collaborat
ion and commerce and has expanded into information appliances (e.g., pagers, cel
lular phones, two-way radios) and other consumer products. It is important that
these interactive systems be designed so that they are easy to learn and easy to
operate, with minimal errors and health consequences and maximal speed and sati
sfaction. Yet it can be challenging to achieve an effective design that meets th
ese criteria. The design of interactive systems has evolved through several para
digm shifts. Initially, designers focused on functionality. The more a system co
uld do, the better the system was deemed to be. This resulted in system designs
whose functionality often could not be readily accessed or utilized, or tended t
o physically stress users (Norman 1988). For example, how many homes have you wa
lked into where the VCR is flashing 12:00? This example shows that even devices that should be simple to configure can be designed in such a manner that users cannot readily comprehend their use. Further, the occurrence of repetitive strain injuries rose as users interacted with systems that engendered significant physical stre
ss. The development of such systems led to a shift in design focus from function
ality to usability. Usability engineering (Nielsen 1993) focuses on developing i
nteractive systems that are ergonomically suitable for the users they support (G
randjean 1979; Smith 1984), as well as cognitively appropriate (Vicente 1999). T
his approach aims to ensure the ease of learning, ease of use, subjective satisf
action, and physical comfort of interactive systems. While these design goals ar
e appropriate and have the potential to engender systems that are effective and
efficient to use, system designers have found that this focus on usability does not always lead to the most user-acceptable system designs. In recent years, environ
mental concerns (i.e., social, organizational, and management factors) have led
to design practices that incorporate a greater emphasis on studying and understa
nding the semantics of work environments (Vicente 1999), often through ethnograp
hic approaches (Nardi 1997; Takahashi 1998). Through participant-observation pra
ctices, efforts are made to understand more completely the tasks, work practices
, artifacts, and environment that the system will become a part of (Stanney et a
l. 1997). This is often achieved by designers immersing themselves in the target
work environment, thereby becoming accustomed to and familiar with the various
factors of interactive system design. These factors include users' capabilities an
d limitations (both cognitive and physical), organizational factors (e.g., manag
ement and social issues), task requirements, and environmental conditions that t
he work environment supports (see Figure 1). Through the familiarity gained by t
his involvement, designers can develop systems that are more uniquely suited to
target users and the organizations for which they work. This chapter provides gu
idelines and data on how to achieve these objectives through the effective desig
n of human–computer interaction, which takes into account the human's physical, cognitive, and social abilities and limitations in interacting with computers and/or computer-based appliances. In doing so, it relies on the available standards, practices, and research findings. Much of it is guided by currently a
vailable technology but may also be applicable as technology changes and new app
lications evolve. The overall thrust of the chapter is that good physical design
of the workplace will minimize the probabilities of the occurrence of health co
nsequences; good cognitive design will maximize the utility of interactive syste
ms; and good social and organizational design will effectively integrate these s
ystems into existing work domains. In general, it is suggested that human–computer
interaction will be optimized when the following are observed:
The system design is ergonomically suited to the user.
Interactive design matches the mental models of users.
Only information needed for decision making is presented.
Information of a similar nature is chunked together.
The interface is adaptive to individual differences due to innate, acquired, or circumstantial reasons.
The system design supports existing work practices and related artifacts.
Interactive system design is thus about many interfaces; it considers how users relate to each other, how they physically and cognitively interact with systems, how they inhabit their organizations, and how these interfaces can best be supported by mediating technologies. Focusing on each of these areas highlights the need for a multidisciplinary interactive system design team. As Duffy and Salvendy (1999) have documented, in teams that consist of design and manufacturing engineers, marketing specialists, and a team leader, even when the members have common goals, each member retrieves and uses different information and has a different mental model that focuses on unique aspects in achieving the same design objectives.
[Figure 1: Model of the Work System. The Person (physical and cognitive aspects) is at the center, interacting with the Technology, the Task, the Environment, and the Organization (management and social factors). Adapted from Smith and Sainfort 1989.]
The following sections will focus on different aspects of interactive system des
ign, including ergonomics, cognitive design, and social, organizational, and man
agement factors.
2.
ERGONOMICS
Ergonomics is the science of fitting the environment and activities to the capabili
ties, dimensions, and needs of people. Ergonomic knowledge and principles are ap
plied to adapt working conditions to the physical, psychological, and social nat
ure of the person. The goal of ergonomics is to improve performance while at the
same time enhancing comfort, health, and safety. In particular, the efficiency of human–computer interaction, as well as the comfort, health, and safety of users, c
an be improved by applying ergonomic principles (Grandjean 1979; Smith 1984). Ho
wever, no simple recommendations can be followed that will enhance all of these
aspects simultaneously. Compromise is sometimes necessary to achieve a set of ba
lanced objectives while ensuring user health and safety (Smith and Sainfort 1989
; Smith and Cohen 1997). While no one set of rules can specify all of the necess
ary combinations of proper working conditions, the use of ergonomic principles a
nd concepts can help in making the right choices.
2.1.
Components of the Work System
From an ergonomic point of view, the different components of the work system (e.
g., the environment, technology, work tasks, work organization, and people) inte
ract dynamically with each other and
function as a total system (see Figure 1). Since changing any one component of t
he system influences the other aspects of the system, the objective of ergonomics i
s to optimize the whole system rather than maximize just one component. In an er
gonomic approach, the person is the central focus and the other factors of the w
ork system are designed to help the person be effective, motivated, and comforta
ble. The consideration of physical, physiological, psychological, and social nee
ds of the person is necessary to ensure the best possible workplace design for p
roductive and healthy human computer interaction. Table 1 shows ergonomic recomme
ndations for fixed desktop video display terminal (VDT) use that improve the human
interface characteristics. Ergonomic conditions for laptop computer use should c
onform as closely as possible to the recommendations presented in Table 1.
2.2.
Critical Ergonomics Issues in Human–Computer Interaction
A major feature of the ergonomics approach is that the job task characteristics will define the ergonomic interventions and the priorities managers should establish for workplace design requirements. The following discussion of critical areas (the technology, the sensory environment, the thermal environment, workstation design, and work practices) will highlight the major factors that engineers and managers should be aware of in order to optimize human–computer interaction and protect user health. Specific recommendations and guidelines will be derived from these discussions, but please be advised that the recommendations made throughout this chapter may have to be modified to account for differences in technology, personal, situational, or organizational needs at your facility, as well as improved knowledge about human–computer interaction. It cannot be overstated that these considerations represent recommendations and guidelines, not fixed specifications or standards. The realization that any one modification in any single part of the work system will affect the whole system, and particularly the person (see Figure 1), is essential for properly applying the following recommendations and specifications.
2.3.
Ergonomics of Computer Interfaces
Today, the primary display interfaces in human–computer interaction are the video display with a cathode ray tube (CRT) and the flat panel screen. In the early 1980s, the U.S. Centers for Disease Control (CDC 1980) and the U.S. National Academy of Sciences defined important design considerations for the use of cathode ray tubes (NAS 1983). The Japan Ergonomics Society (JES) established a Committee for Flat Panel Display Ergonomics in 1996, which proposed ergonomic guidelines for use of products with flat panels, such as liquid crystal displays (LCDs) (JES 1996). These Japanese guidelines were subsequently reviewed by the Committee on Human–Computer Interaction of the International Ergonomics Association (IEA). The JES guidelines addressed the following issues: (1) light-related environmental factors, (2) device use and posture factors, (3) environmental factors, (4) job design factors, and (5) individual user factors. These guidelines will be discussed in appropriate sections of this chapter. The use of CRTs and flat panel displays has been accompanied by user complaints of visual fatigue, eye soreness, general visual discomfort, and various musculoskeletal complaints and discomfort with prolonged use (Grandjean 1979; Smith et al. 1981; NIOSH 1981; NAS 1983; Smith 1984; JES 1996). Guidelines for the proper design of the VDT and the environment in which it is used have been proposed by the Centers for Disease Control (CDC 1980) and the Human Factors and Ergonomics Society (ANSI 1988), and for the laptop and palm computer, by the Japan Ergonomics Society (JES 1996). The following sections deal with the visual environment for using desktop computers, but the discussion can be extrapolated to other types of computer use. The major interfaces of employees with computers are the screen (CRT, flat panel), the keyboard, and the mouse. Other interfaces are being used more and more, such as voice input, pointers, hand-actuated motion devices, and apparatuses for virtual environment immersion.
2.3.1.
The Screen and Viewing
Poor screen images, fluctuating and flickering screen luminances, and screen glare cause user visual discomfort and fatigue (Grandjean 1979; NAS 1983). There are a range of issues concerning readability and screen reflections. One is the adequacy of contrast between the characters and the screen background. Screens with glass surfaces have a tendency to pick up glare sources in the environment and reflect them. This can diminish the contrast of images on the screen. To reduce environmental glare, the luminance ratio within the user's near field of vision should be approximately 1:3, and within the far field approximately 1:10 (NIOSH 1981). For luminance on the screen itself, the character-to-screen background luminance contrast ratio should be at least 7:1 (NIOSH 1981). To give the best readability for each operator, it is important to provide VDTs with adjustments for character contrast and brightness. These adjustments should have controls that are obvious to observe and manipulate and easily accessible from the normal working position (e.g., located at the front of the screen) (NIOSH 1981).
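As an illustration of these ratios, the sketch below flags a display setup whose measured luminances fall outside the 7:1 contrast and 1:3/1:10 surround guidelines quoted above. This is a hypothetical helper written for this chapter's numbers, not code from any cited standard; the function name and the use of cd/m² as the unit are assumptions.

```python
def check_luminance_ratios(char_lum, screen_bg_lum, near_lum, far_lum):
    """Compare measured luminances (e.g., in cd/m^2) against the
    NIOSH (1981) guidelines quoted in the text."""
    problems = []
    # Character-to-screen-background contrast should be at least 7:1.
    if char_lum / screen_bg_lum < 7.0:
        problems.append("character/background contrast below 7:1")
    # Luminance ratio between screen background and the near field: about 1:3.
    if max(screen_bg_lum, near_lum) / min(screen_bg_lum, near_lum) > 3.0:
        problems.append("near-field luminance ratio exceeds 1:3")
    # Luminance ratio between screen background and the far field: about 1:10.
    if max(screen_bg_lum, far_lum) / min(screen_bg_lum, far_lum) > 10.0:
        problems.append("far-field luminance ratio exceeds 1:10")
    return problems

# A compliant setup returns no problems; a dim-character, glary office
# returns all three violations.
print(check_luminance_ratios(140, 20, 50, 150))  # []
print(check_luminance_ratios(60, 20, 90, 300))
```

In practice such measurements would come from a luminance meter pointed at the characters, the screen background, and the surrounding near and far surfaces.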
TABLE 1 Ergonomic Recommendations for the VDT Technology, Work Environment, and Workstation

1. Viewing screen
   a. Character/screen contrast: 7:1 minimum
   b. Screen character size: height 20–22 min of visual arc; width 70–80% of height
   c. Viewing distance: usually 50 cm or less, but up to 70 cm is acceptable
   d. Line refresh rate: 70 Hz minimum
   e. Eye viewing angle from horizon: 10–40° (from top to bottom gaze)
2. Illumination
   a. No hard copy: 300 lux minimum
   b. With normal hard copy: 500 lux
   c. With poor hard copy: 700 lux
   d. Environmental luminance contrast: near objects 1:3; far objects 1:10
   e. Reflectance from surfaces: working surface 40–60%; floor 30%; ceiling 80–90%; walls 40–60%
3. HVAC
   a. Temperature, winter: 20–24°C (68–75°F)
   b. Temperature, summer: 23–27°C (73–81°F)
   c. Humidity: 50–60%
   d. Air flow: 0.15–0.25 m/sec
4. Keyboard
   a. Slope: 0–15° preferred, 0–25° acceptable
   b. Key top area: 200 mm²
   c. Key top horizontal width: 12 mm
   d. Horizontal key spacing: 18–19 mm
   e. Vertical key spacing: 18–20 mm
   f. Key force: 0.25–1.5 N (0.5–0.6 N preferred)
5. Workstation
   a. Leg clearance: 51 cm minimum (61 cm preferred)
   b. Leg depth: 38 cm minimum
   c. Leg depth with leg extension: 59 cm minimum
   d. Work surface height, nonadjustable: 70 cm
   e. Work surface height, adjustable for one surface: 59–71 cm
   f. Work surface height, adjustable for two surfaces: keyboard surface 59–71 cm; screen surface 70–80 cm
6. Chair
   a. Seat pan width: 45 cm minimum
   b. Seat pan depth: 38–43 cm
   c. Seat front tilt: 5° forward to 7° backward
   d. Seat back inclination: 110–130°
   e. Seat pan height adjustment range: 38–52 cm
   f. Backrest inclination: up to 130°
   g. Backrest height: 45–51 cm above seat pan surface
2.3.2.
Screen Character Features
Good character design can help improve image quality, which is a major factor for reducing eyestrain and visual fatigue. The proper size of a character depends on the task, the display parameters (brightness, contrast, glare treatment, etc.), and the viewing distance. Character size that is too small
can make reading difficult and cause the visual focusing mechanism to overwork. This produces eyestrain and visual fatigue (NAS 1983). Character heights should preferably be at least 20–22 min of visual arc, while character width should be 70–80% of the character height (Smith 1984; ANSI 1988). This approximately translates into a minimum lowercase character height of 3.5 mm with a width of 2.5 mm at a normal viewing distance of 50 cm. Good character design and proper horizontal and vertical spacing of characters can help improve image quality. To ensure adequate discrimination between characters and good screen readability, the character spacing should be in the range of 20–50% of the character width. The interline spacing should be 50–100% of the character height (Smith 1984; ANSI 1988). The design of the characters influences their readability. Some characters are hard to decipher, such as lowercase g, which can look like the numeral 8. A good font design minimizes character confusion and enhances the speed at which characters can be distinguished and read. Two excellent fonts are Huddleston and Lincoln-Mitre (NAS 1983). Most computers have a large number of fonts to select from. Computer users should choose a font that is large enough to be easy for them to read.
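The visual-arc figures above can be turned into physical character sizes with simple trigonometry: a character subtending angle θ at viewing distance d has height 2d·tan(θ/2). The sketch below (function names are illustrative, not from the chapter) reproduces the chapter's approximate numbers: 20–22 min of arc at 50 cm works out to roughly 2.9–3.2 mm, which the chapter rounds up to a 3.5 mm minimum.

```python
import math

def char_height_mm(viewing_distance_mm, arc_minutes):
    """Height of a character that subtends `arc_minutes` of visual arc
    at the given viewing distance."""
    theta = math.radians(arc_minutes / 60.0)  # arc minutes -> radians
    return 2.0 * viewing_distance_mm * math.tan(theta / 2.0)

def char_width_mm(height_mm, ratio=0.75):
    """Character width as a fraction of height (70-80% per Smith 1984; ANSI 1988)."""
    return ratio * height_mm

h = char_height_mm(500, 22)  # ~3.2 mm at a 50 cm viewing distance
w = char_width_mm(h)         # width at the 75% midpoint of the 70-80% range
```

The same relation gives the rule of thumb stated later for computers used on distant surfaces: at a greater viewing distance, the same angular size demands a proportionally larger font.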
2.3.3.
Viewing Distance
Experts have traditionally recommended a viewing distance between the screen and the operator's eye of 45–50 cm but no more than 70 cm (Grandjean 1979; Smith 1984). However, experience in field studies has shown that users may adopt a viewing distance greater than 70 cm and still work efficiently without developing visual problems. Thus, viewing distance should be determined in context with other considerations. It will vary depending on the task requirements, CRT screen characteristics, and the individual's visual capabilities. For instance, with poor screen or hard copy quality, it may be necessary to reduce viewing distances for easier character recognition. Typically, the viewing distance should be 50 cm or less due to the small size of characters on the VDT screen. LNCs are often used in situations where the computer is placed on any convenient surface, for example a table in an airport waiting room. Thus, the viewing distance is defined by the available surface, not a fixed workstation. When the surface is farther from the eyes, the font size used should be larger. Proper viewing distance will be affected by the condition of visual capacity and by the wearing of spectacles or lenses. Persons with myopia (near-sightedness) may find that they want to move the screen closer to their eyes, while persons with presbyopia (far-sightedness) or bifocal lenses may want the screen farther away. Many computer users who wear spectacles have a special pair with lenses matched to their particular visual defect and a comfortable viewing distance to the screen. Eye-care specialists can have special spectacles made to meet computer users' screen use needs.
2.3.4.
Screen Flicker and Image Stability
The stability of the screen image is another characteristic that contributes to CRT and LCD quality. Ideally, the display should be completely free of perceptible movements such as flicker or jitter (NIOSH 1981). CRT screens are refreshed a number of times each second so that the characters on the screen appear to be solid images. When this refresh rate is too low, users perceive screen flicker. LCDs have less difficulty with flicker and image stability than CRT displays. The perceptibility of screen flicker depends on illumination, screen brightness, polarity, contrast, and individual sensitivity. For instance, as we get older and our visual acuity diminishes, so too does our ability to detect flicker. A screen with a dark background and light characters has less flicker than a screen with dark lettering on a light background. However, light characters on a dark background show more glare. In practice, flicker should not be observable; to achieve this, a screen refresh rate of at least 70 cycles per second is needed for each line on the CRT screen (NAS 1983; ANSI 1988). With such a refresh rate, flicker should not be a problem for either screen polarity (light on dark or dark on light). It is a good idea to test a screen for image stability: turn the lights down, increase the screen brightness/contrast settings, and fill the screen with letters. Flickering of the entire screen or jitter of individual characters should not be perceptible, even when viewed peripherally.
2.3.5.
Screen Swivel and Tilt
Reorientation of the screen around its vertical and horizontal axes can reduce screen reflections and glare. Reflections can be reduced by simply tilting the display slightly back or down or to the left or right, depending on the angle of the source of glare. These adjustments are easiest if the screen can be tilted about its vertical and horizontal axes. If the screen cannot be tilted, it should be approximately vertical to help eliminate overhead reflections, thus improving legibility and posture. The perception of screen reflection is influenced by the tilt of the screen up or down and back and forth and by the computer user's line of sight toward the screen. If the screen is tilted toward sources of glare and these are in the computer user's line of sight to the screen, the screen images will have poorer clarity and reflections can produce disability glare (see Section 2.4.4). In fact, the
line of sight can be a critical factor in visual and musculoskeletal discomfort symptoms. When glare or reflections fall within the line of sight, eyestrain often occurs. For musculoskeletal considerations, experts agree that the line of sight should never exceed the straight-ahead horizontal gaze; in fact, it is best to provide a downward gaze of about 10–20° from the horizontal when viewing the top of the screen and about 40° when viewing the bottom edge of the screen (NIOSH 1981; NAS 1983; Smith 1984; ANSI 1988). This will help reduce neck and shoulder fatigue and pain. These gaze considerations are much harder to achieve when using LNCs because of the smaller screen size and workstation features (e.g., an airport waiting room table).
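The recommended gaze angles translate directly into screen placement: a point viewed at a downward angle θ from viewing distance d sits d·tan(θ) below eye height. A small sketch under those assumptions (the function name is illustrative, not from any cited guideline):

```python
import math

def drop_below_eye_cm(viewing_distance_cm, downward_gaze_deg):
    """Vertical distance below eye height of a screen point seen at the
    given downward gaze angle."""
    return viewing_distance_cm * math.tan(math.radians(downward_gaze_deg))

# At a 50 cm viewing distance, a 10-20 degree gaze to the top of the screen
# puts the top edge roughly 9-18 cm below eye height, and a 40 degree gaze
# puts the bottom edge about 42 cm below.
top_min = drop_below_eye_cm(50, 10)
top_max = drop_below_eye_cm(50, 20)
bottom = drop_below_eye_cm(50, 40)
```

This kind of calculation makes clear why a laptop on a low table violates the guideline: the whole screen sits far below the 10–20° band for the top edge.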
2.4.
The Visual Environment
2.4.1.
Lighting
Lighting is an important aspect of the visual environment that influences readability and glare on the screen and viewing in the general environment. There are four types of general workplace illumination of interest to the computer user's environment:
1. Direct radiants: The majority of office lighting is direct radiant. These can be incandescent lights, which are most common in homes, or fluorescent lighting, which is more prevalent in workplaces and stores. Direct radiants direct 90% or more of their light toward the object(s) to be illuminated in the form of a cone of light. They have a tendency to produce glare.
2. Indirect lighting: This approach uses reflected light to illuminate work areas. Indirect lighting directs 90% or more of the light onto the ceiling and walls, which reflect it back into the room. Indirect lighting has the advantage of reducing glare, but supplemental lighting is often necessary, which can itself be a source of glare.
3. Mixed direct radiants and indirect lighting: In this approach, part of the light (about 40%) radiates in all directions while the rest is thrown directly or indirectly onto the ceiling and walls.
4. Opalescent globes: These lights give illumination equally in all directions. Because they are bright, they often cause glare.
Modern light sources used in these four general approaches to workplace illumination are typically of two kinds: electric filament lamps and fluorescent tubes. Following are the advantages and drawbacks of these two light sources:
1. Filament lamps: The light from filament lamps is relatively rich in red and yellow rays. It changes the apparent colors of objects and so is unsuitable when correct assessment of color is essential. Filament lamps have the further drawback of emitting heat. On the other hand, employees like their warm glow, which is associated with evening light and a cozy atmosphere.
2. Fluorescent tubes: Fluorescent lighting is produced by passing electricity through a gas. Fluorescent tubes usually have a low luminance and thus are less of a source of glare. They can also match their lighting spectrum to daylight, which many employees find preferable. They may also be matched to other spectrums of light that fit office decor or employee preferences. Standard-spectrum fluorescent tubes are often perceived as a cold, pale light and may create an unfriendly atmosphere. Fluorescent tubes may produce flicker, especially when they become old or defective.
2.4.2. Illumination
The intensity of illumination, or illuminance, is the amount of light falling on a surface. It is a measure of the quantity of light with which a given surface is illuminated and is measured in lux. In practice, this level depends on both the direction of flow of the light and the spatial position of the surface being illuminated in relation to the light flow. Illuminance is measured in both the horizontal and vertical planes. At computer workplaces, both the horizontal and vertical illuminances are important. A document lying on a desk is illuminated by the horizontal illuminance, whereas the computer screen is illuminated by the vertical illuminance. In an office that is illuminated from overhead luminaires, the ratio of the vertical to the horizontal illuminance is usually between 0.3 and 0.5. So if the illuminance in a room is said to be 500 lux, the horizontal illuminance is 500 lux while the vertical illuminance is between 150 and 250 lux (0.3 and 0.5 of the horizontal illuminance). The illumination required for a particular task is determined by the visual requirements of the task and the visual ability of the employees concerned. In general, an illuminance in the range of 300–700 lux measured on the horizontal working surface (not the computer screen) is normally preferable (CDC 1980; NAS 1983). The JES (1996) recommends office lighting levels ranging from 300–1,000 lux for flat-panel displays. Higher illumination levels are necessary to read hard copy, and lower illumination levels are better for work that uses only the computer screen. Thus, a job in which hard copy and a computer screen are both used should have a general work area illumination level of about 500–700 lux, while a job that only requires reading the computer screen should have a general work area illumination of 300–500 lux. Conflicts can arise when both hard copy and computer screens are used by different employees who have differing job task requirements or differing visual capabilities and are working in the same area. As a compromise, room lighting can be set at the recommended lower (300 lux) or intermediate level (500 lux) and additional task lighting can be provided as needed. Task lighting refers to localized lighting at the workstation that replaces or supplements ambient lighting systems used for more generalized lighting of the workplace. Task lighting is handy for illuminating hard copy when the room lighting is set at a low level, which can hinder document visibility. Such additional lighting must be carefully shielded and properly placed to avoid glare and reflections on the computer screens and other adjacent working surfaces of other employees. Furthermore, task lighting should not be too bright in comparison to the general work area lighting, since looking between these two different light levels may produce eyestrain.
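The ratio-based reasoning above can be sketched as a small calculation. This is an illustrative snippet, not from the source; the function names are invented here, and the numeric values are the ones quoted in the text (vertical illuminance at 0.3–0.5 of horizontal; 300–500 lux for screen-only work, 500–700 lux when hard copy is also read).

```python
def vertical_illuminance_range(horizontal_lux):
    """Estimate vertical illuminance from horizontal illuminance for an
    office lit by overhead luminaires, using the 0.3-0.5 ratio quoted
    in the text."""
    return (0.3 * horizontal_lux, 0.5 * horizontal_lux)

def recommended_horizontal_lux(uses_hard_copy):
    """Recommended general work-area illumination (lux): 500-700 when
    hard copy is read, 300-500 for screen-only work."""
    return (500, 700) if uses_hard_copy else (300, 500)

low, high = vertical_illuminance_range(500)
print(low, high)                          # 150.0 250.0, matching the worked example
print(recommended_horizontal_lux(False))  # (300, 500)
```

This reproduces the worked example in the text: a nominal room illuminance of 500 lux implies a vertical illuminance between 150 and 250 lux.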
2.4.3. Luminance
Luminance is a measure of the brightness of a surface, that is, the amount of light leaving the surface of an object, whether reflected by the surface (as from a wall or ceiling), emitted by the surface (as from the CRT or LCD characters), or transmitted (as light from the sun that passes through translucent curtains). Luminance is expressed in units of candelas per square meter. High-intensity luminance sources (such as windows) in the peripheral field of view should be avoided. In addition, the balance among the luminance levels within the computer user's field of view should be maintained. The ratio of the luminance of a given surface or object to another surface or object in the central field of vision should be around 3:1, while the luminance ratio in the peripheral field of vision can be as high as 10:1 (NAS 1983).
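The two ratio limits can be expressed as a simple balance check. This is a sketch only; the function name is invented, and the 3:1 and 10:1 limits are the NAS (1983) values quoted above.

```python
def luminance_ratio_ok(l1_cd_m2, l2_cd_m2, peripheral=False):
    """Check whether two surface luminances (cd/m^2) are balanced within
    the limits quoted in the text: about 3:1 in the central field of
    vision, up to 10:1 in the peripheral field (NAS 1983)."""
    brighter, dimmer = max(l1_cd_m2, l2_cd_m2), min(l1_cd_m2, l2_cd_m2)
    limit = 10 if peripheral else 3
    return brighter / dimmer <= limit

# A 120 cd/m^2 window area next to a 20 cd/m^2 screen (a 6:1 ratio)
# is acceptable only in the peripheral field:
print(luminance_ratio_ok(120, 20))                   # False
print(luminance_ratio_ok(120, 20, peripheral=True))  # True
```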
2.4.4. Glare
Large differences in luminance or high-luminance lighting sources can cause glare. Glare can be classified with respect to its effects (disability glare vs. discomfort glare) or its source (direct glare vs. reflected glare). Glare that results in an impairment of vision (e.g., reduction of visual acuity) is called disability glare, while discomfort glare is experienced as a source of discomfort to the viewer but does not necessarily interfere with visual performance. With regard to the source, direct glare is caused by light sources in the field of view of the computer user, while reflected glare is caused by reflections from illuminated, polished, or glossy surfaces or by large luminance differences in the visual environment. In general, glare is likely to increase with the luminance, size, and proximity of the lighting source to the line of sight. Direct and reflected glare can be limited through one or more of the following techniques: 1. Controlling the light from windows: This can be accomplished by closing drapes, shades, and/or blinds over windows, or awnings on the outside, especially during sunlight conditions. 2. Controlling the view of luminaires: (a) By proper positioning of the CRT screen with regard to windows and overhead lighting to reduce direct or reflected glare and images. To accomplish this, place VDTs parallel to windows and luminaires and between luminaires rather than underneath them. (b) Using screen hoods to block luminaires from view. (c) Recessing light fixtures. (d) Using light-focusing diffusers. 3. Controlling glare at the screen surface by: (a) Adding antiglare filters on the VDT screen. (b) Proper adjustment up or down / left or right of the screen. 4. Controlling the lighting sources using: (a) Appropriate glare shields or covers on the lamps. (b) Properly installed indirect lighting systems. Glare can also be caused by reflections from surfaces, such as working surfaces, walls, or the floor covering. These surfaces do not emit light themselves but can reflect it. The ratio of the amount of light reflected by a surface (luminance) to the amount of light striking the surface (illuminance) is called reflectance. Reflectance is unitless. The reflectance of the working surface and the office machines should be on the order of 40–60% (ANSI 1988). That is, they should not reflect more than 60% of the illuminance striking their surface. This can be accomplished if surfaces have a matte finish. Generally, floor coverings should have a reflectance of about 30%; ceilings, 80–90%; and walls, 40–60%. Reflectance should increase from the floor to the ceiling. Although the control of surface reflections is important, especially with regard to glare control, it should not come at the expense of a pleasant working environment where employees feel comfortable. Walls and ceilings should not be painted dark colors just to reduce light reflectance, nor should windows be completely covered or bricked up to keep out sunlight. Other, more reasonable luminance-control approaches can give positive benefits while maintaining a psychologically pleasing work environment.
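The reflectance definition and the recommended surface values above can be captured in a short sketch. The names and table below are illustrative, not from the source; the ranges follow the text (work surfaces and walls 40–60%, ceilings 80–90%), and the floor band is an assumption around the quoted "about 30%".

```python
# Recommended reflectance ranges quoted in the text (ANSI 1988);
# the "floor" band is an assumed interval around the quoted ~30%.
RECOMMENDED_REFLECTANCE = {
    "work_surface": (0.40, 0.60),
    "floor":        (0.20, 0.40),   # text says "about 30%"
    "ceiling":      (0.80, 0.90),
    "wall":         (0.40, 0.60),
}

def reflectance(reflected_light, incident_light):
    """Unitless reflectance as defined in the text: light leaving the
    surface divided by light striking it."""
    return reflected_light / incident_light

def within_recommendation(surface, value):
    """True when a measured reflectance falls inside the recommended
    range for that surface type."""
    low, high = RECOMMENDED_REFLECTANCE[surface]
    return low <= value <= high

# A desk receiving 500 units of light and returning 250 has a
# reflectance of 0.5, inside the 40-60% band for work surfaces:
print(within_recommendation("work_surface", reflectance(250, 500)))  # True
```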
2.5. The Auditory Environment
2.5.1. Noise
A major advantage of computer technology over the typewriter is less noise at the workstation. However, it is not unusual for computer users to complain of bothersome office noise, particularly from office conversation. Noise levels commonly encountered in offices are below established limits that could cause damage to hearing (i.e., below 85 dBA). The JES (1996) proposed that the noise level should not exceed 55 dBA. Office employees expect quiet work areas because their tasks often require concentration. Annoying noise can disrupt their ability to concentrate and may produce stress. There are, in fact, many sources of annoyance noise in computer operations. Fans in computers, printers, and other accessories, which are used to maintain a favorable internal device temperature, are a source of noise. Office ventilation fans can also be a source of annoyance noise. The computers themselves may be a source of noise (e.g., the click of keys or the high-pitched squeal of the CRT). The peripheral equipment associated with computers, such as printers, can be a source of noise. Problems of noise may be exacerbated in open-plan offices, in which noise is harder for the individual employee to control than in enclosed offices. Acoustical control can rely upon ceiling, floor and wall, furniture, and equipment materials that absorb sound rather than reflect it. Ceilings that scatter, absorb, and minimize the reflection of sound waves are desirable to promote speech privacy and reduce general office noise levels. The most common means of blocking a sound path is to build a wall between the source and the receiver. Walls are not only sound barriers but also a place to mount sound-absorbent materials. In open-plan offices, free-standing acoustical panels can be used to reduce the ambient noise level and also to separate an individual from the noise source. Full effectiveness of acoustical panels is achieved in concert with the sound-absorbent materials and finishes applied to the walls, ceiling, floor, and other surfaces. For instance, carpets not only cover the floor but also serve to reduce noise. This is achieved in two ways: (1) carpets absorb the incident sound energy, and (2) gliding and shuffling movements on carpets produce less noise than on bare floors. Furniture and draperies are also important for noise reduction. Acoustical control can also be achieved by proper space planning. For instance, workstations that are positioned too closely do not provide suitable speech privacy and can be a source of disturbing conversational noise. As a general rule, a minimum of 8–10 ft between employees, separated by acoustical panels or partitions, will provide normal speech privacy.
2.5.2. Heating, Ventilating, and Air Conditioning (HVAC)
Temperature, humidity, and air movement influence employees' performance and comfort. It is unlikely that offices will produce excessive temperatures that could be physically harmful to employees. However, thermal comfort is an important consideration in employee satisfaction that can influence performance. Satisfaction is based not on the ability to tolerate extremes but on what makes an individual happy. Many studies have shown that most office employees are
not satisfied with their thermal comfort. The definition of a comfortable temperature is usually a matter of personal preference. Opinions as to what is a comfortable temperature vary within an individual from time to time and certainly among individuals. Seasonal variations of ambient temperature influence perceptions of thermal comfort. Office employees sitting close to a window may experience the temperature as being too cold or hot, depending on the outside weather. It is virtually impossible to generate one room temperature in which all employees are equally well satisfied over a long period of time. As a general rule, it is recommended that the temperature be maintained in the range of 20–24°C (68–75°F) in winter and 23–27°C (73–81°F) in summer (NIOSH 1981; Smith 1984). The JES (1996) recommends office temperatures of 20–23°C in winter and 24–27°C in summer. Air flows across a person's neck, head, shoulders, arms, ankles, and knees should be kept low (below 0.15 m / sec in winter and below 0.25 m / sec in summer). It is important that ventilation not produce currents of air that blow directly on employees. This is best handled by proper placement of the workstation. Relative humidity is an important component of office climate and influences an employee's comfort and well-being. Air that is too dry leads to drying out of the mucous membranes of the eyes, nose, and throat. Individuals who wear contact lenses may be made especially uncomfortable by dry air. In instances where intense, continuous near-vision work at the computer is required, very dry air has been shown to irritate the eyes. As a general rule, it is recommended that the relative humidity in office environments be at least 50% and less than 60% (NIOSH 1981; Smith 1984). The JES (1996) recommends humidity levels of 50–60%. Air that is too wet enhances the growth of unhealthy organisms (molds, fungus, bacteria) that can cause disease (Legionnaires' disease, allergies).
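The seasonal comfort ranges above can be collected into a single check. This is a sketch only; the function and constant names are invented, and the thresholds are the NIOSH (1981) and Smith (1984) values quoted in the text.

```python
# Comfort ranges quoted in the text; names here are illustrative.
WINTER = {"temp_c": (20, 24), "air_ms": 0.15}
SUMMER = {"temp_c": (23, 27), "air_ms": 0.25}
HUMIDITY_PCT = (50, 60)  # at least 50%, less than 60%

def office_climate_ok(temp_c, rh_pct, air_ms, season):
    """True when temperature, relative humidity, and air speed all fall
    within the recommended ranges for the given season."""
    s = WINTER if season == "winter" else SUMMER
    lo, hi = s["temp_c"]
    return (lo <= temp_c <= hi
            and HUMIDITY_PCT[0] <= rh_pct < HUMIDITY_PCT[1]
            and air_ms <= s["air_ms"])

print(office_climate_ok(22, 55, 0.1, "winter"))  # True
print(office_climate_ok(26, 55, 0.1, "winter"))  # False (too warm for winter)
```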
2.6. Computer Interfaces
Computer interfaces are the means by which users provide instructions to the computer. There are a wide variety of devices for interfacing, including keyboards, mice, trackballs, joysticks, touch panels, light pens, pointers, tablets, and hand gloves. Any mechanical or electronic device that can be tied to a human motion can serve as a computer interface. The most common interfaces in use today are the keyboard and the mouse. The keyboard will be used as an example to illustrate how to achieve proper human–computer interfaces.
2.6.1. The Keyboard
In terms of computer interface design, a number of keyboard features can influence an employee's comfort, health, and performance. The keyboard should be detachable and movable, thus providing flexibility for independent positioning of the keyboard and screen. This is a major problem with LNCs because the keyboard is built into the top of the computer case for portability and convenience. It is possible to attach a separate, detachable keyboard to the LNC, and this should be done when the LNC is used at a fixed workstation in an office or at home. Clearly, it would be difficult to have a separate keyboard when travelling and the LNC portability feature is paramount. The keyboard should be stable to ensure that it does not slide on the tabletop. This is a problem when an LNC is held in the user's lap or on some other unstable surface. In order to help achieve a favorable user arm-height positioning, the keyboard should be as thin as possible. The slope or angle of the keyboard should be between 0° and 15°, measured from the horizontal. LNCs are limited in keyboard angle because the keyboard is often flat (0°). However, some LNCs have added feet to the computer case to provide an opportunity to increase the keyboard angle. Adjustability of keyboard angle is recommended. While the ANSI standard (ANSI 1988) suggests 0–25°, we feel angles over 15° are not necessary for most activities. The shape of the key tops must satisfy several ergonomic requirements, such as minimizing reflections, aiding the accurate location of the operator's finger, providing a suitable surface for the key legends, preventing the accumulation of dust, and being neither sharp nor uncomfortable when depressed. For instance, the surface of the key tops, as well as the keyboard itself, should have a matte finish. The key tops should be approximately 200 mm² in area (ANSI 1988), with a minimum horizontal width of 12 mm. The spacing between the key centers should be about 18–19 mm horizontally and 18–20 mm vertically (ANSI 1988). There should be slight protrusions on select keys on the home row to provide tactile information about finger position on the keyboard. The force to depress a key should ideally be between 0.5 N and 0.6 N (ANSI 1988). However, a range of 0.25–1.5 N has been deemed acceptable (ANSI 1988). The HFES / ANSI-100 standard is currently being revised, and this recommendation may change soon. Some experts feel that keying forces should be as low as feasible without interfering with motor coordination. Research has shown that light-touch keys require less operator force in depressing the key (Rempel and Gerson 1991; Armstrong et al. 1994; Gerard et al. 1996). Light-touch keyboard forces vary between 0.25–0.40 N. Feedback from typing is important for beginning typists because it can indicate to the operator that the keystroke has been successfully completed. There are two main types of keyboard feedback: tactile and auditory. Tactile feedback can be provided by a collapsing spring that increases in tension as the key is depressed or by a snap-action mechanism when key actuation occurs. Auditory feedback (e.g., a click or beep) can indicate that the key has been actuated. Of course, there is also visual feedback on the computer screen. For experienced typists, the feedback is not useful, as their fingers are moving in a ballistic way that is too fast for the feedback to modify finger action (Guggenbuhl and Krueger 1990, 1991; Rempel and Gerson 1991; Rempel et al. 1992). The keyboard layout can be the same as that of a conventional typewriter, that is, the QWERTY design, or some other proven style, such as the DVORAK layout. However, it can be very difficult for operators to switch between keyboards with different layouts. Traditional keyboard layout has straight rows and staggered columns. Some authors have proposed curving the rows to provide a better fit for the hand to reduce biomechanical loading on the fingers (Kroemer 1972). However, there is no research evidence that such a design provides advantages for operators' performance or health. Punnett and Bergqvist (1997) have proposed that keyboard design characteristics can lead to upper-extremity musculoskeletal disorders. This contention is controversial because many factors involved in computer typing jobs, independent of keyboard characteristics, may contribute to musculoskeletal disorders. Some ergonomists have designed alternative keyboards in attempts to reduce the potential risk factors for musculoskeletal disorders (Kroemer 1972; Nakaseko et al. 1985; Ilg 1987). NIOSH (1997) produced a publication that describes various alternative keyboards. Studies have been undertaken to evaluate some of these alternative keyboards (Swanson et al. 1997; Smith et al. 1998). The research results indicated some improvement in hand / wrist posture from using the alternative keyboards, but no decrease in musculoskeletal discomfort.
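The numeric keyboard recommendations above can be gathered into a simple conformance check. This is a sketch only; the function name and message strings are invented, and the thresholds are the ANSI (1988) values quoted in the text.

```python
def keyboard_spec_issues(slope_deg, key_spacing_h_mm, key_spacing_v_mm, key_force_n):
    """Return a list of ways a keyboard misses the recommendations quoted
    in the text: slope 0-15 degrees preferred, key spacing about 18-19 mm
    horizontal and 18-20 mm vertical, key force 0.25-1.5 N acceptable
    with 0.5-0.6 N ideal."""
    issues = []
    if not 0 <= slope_deg <= 15:
        issues.append("slope outside preferred 0-15 degree range")
    if not 18 <= key_spacing_h_mm <= 19:
        issues.append("horizontal key spacing outside 18-19 mm")
    if not 18 <= key_spacing_v_mm <= 20:
        issues.append("vertical key spacing outside 18-20 mm")
    if not 0.25 <= key_force_n <= 1.5:
        issues.append("key force outside acceptable 0.25-1.5 N range")
    elif not 0.5 <= key_force_n <= 0.6:
        issues.append("key force acceptable but not in ideal 0.5-0.6 N band")
    return issues

print(keyboard_spec_issues(10, 18.5, 19, 0.55))  # [] -> conforms
```

Note that a light-touch keyboard at 0.3 N would pass the acceptable range but be flagged against the ideal band, mirroring the distinction the text draws.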
2.6.2. Accessories
The use of a wrist rest when keying can help to minimize extension (backward bending) of the hand. A wrist rest should have a fairly broad surface (approximately 5 cm) with a rounded front edge to prevent cutting pressures on the wrist and hands. Padding further minimizes skin compression and irritation. Height adjustability is important so that the wrist rest can be set to a preferred level in concert with the keyboard height and slope. Some experts are concerned that resting the wrist on a wrist rest during keying could cause an increase in intracarpal canal pressure. They prefer that wrist rests be used for resting the hands and wrists only when the user is not keying. Thus, they believe users need to be instructed (trained) about when and how to use a wrist rest. Arm holders are also available to provide support for the hands, wrists, and arms while keyboarding. However, these may also put pressure on structures in ways that may produce nerve compression. As with a wrist rest, some experts feel these devices are best used only during rest from keying.
2.6.3. The Mouse
The most often-used computer pointing device is the mouse. While there are other pointing devices, such as the joystick, touch panel, trackball, and light pen, the mouse is still the most universally used of these devices. An excellent discussion of these pointing devices can be found in Bullinger et al. (1977). The mouse provides for simultaneous integration of cursor movement and action on computer screen objects. Many mice have multiple buttons to allow for several actions to occur in sequence. The ease of motion patterns and multiple-function buttons give the mouse an advantage over other pointing devices. However, a disadvantage of the mouse is the need for tabletop space to achieve the movement function. Trankle and Deutschmann (1991) conducted a study to determine which factors influenced the speed of properly positioning a cursor with a mouse. The results indicated that the most important factors were the target size and the distance traveled. The display size (arc) was of lesser importance, and the control / response ratio (the sensitivity of the control to movement) was not found to be important. Recently, studies have indicated that operators have reported musculoskeletal discomfort due to mouse use (Karlqvist et al. 1994; Armstrong et al. 1995; Hagberg 1995; Fogelman and Brogmus 1995; Wells et al. 1997).
2.7. The Workstation
Workstation design is a major element in ergonomic strategies for improving user comfort and particularly for reducing musculoskeletal problems. Figure 2 illustrates the relationships among the working surface, VDT, chair, documents, and various parts of the body. Of course, this is for a fixed workstation at the office or home. Use of LNCs often occurs away from fixed workstations, where it is difficult to meet the requirements described below. However, efforts should be made to meet these requirements as much as possible, even when using LNCs. The task requirements will determine critical layout characteristics of the workstation. The relative importance of the screen, keyboard, and hard copy (i.e., source documents) depends primarily on the task, and this defines the design considerations necessary to improve operator performance, comfort, and health. Data-entry jobs, for example, are typically hard copy oriented. The operator spends little time looking at the screen, and tasks are characterized by high rates of keying. For this type of task it is logical for the layout to emphasize the keyboard, mouse, and hard copy, because these are the primary tools used in the task, while the screen is of lesser importance. On the other hand, data-acquisition operators spend most of their time looking at the screen and seldom use hard copy. For this type of task, the screen and keyboard layout should be emphasized.
2.7.1. Working Surfaces
The size of the work surface is dependent on the task(s), documents, and technology. The primary working surface (e.g., supporting the keyboard, display, and documents) should be sufficient to: (1) permit the screen to be moved forward or backward to a comfortable viewing distance for a range of users, (2) allow a detachable keyboard to be placed in several locations, and (3) permit source documents to be properly positioned for easy viewing.

Figure 2 Definitions of VDT Workstation Terminology (adapted from Helander 1982): 1. Screen tilt angle; 2. Visual angle between the horizontal and the center of the display; 3. Eye-screen distance; 4. Document holder and source document; 5. Wrist rest; 6. Elbow angle; 7. Backrest; 8. Elbow rest; 9. Lumbar support; 10. Seat back angle (from horizontal); 11. Seat pan angle (from horizontal); 12. Clearance between leg and seat; 13. Knee angle; 14. Clearance between leg and table; 15. Footrest; 16. Table height; 17. Home row (middle row) height; 18. Screen height to center of screen.

Additional working surfaces (i.e., secondary working surfaces) may be required in order
to store, lay out, read, and / or write on documents or materials. Often users have more than one computer, so a second computer is placed on a secondary working surface. In such a situation, workstations are configured so that multiple pieces of equipment and source materials can be equally accessible to the user. In this case, additional working surfaces are necessary to support these additional tools and are arranged to allow easy movement while seated from one surface to another. The tabletop should be as thin as possible to provide clearance for the user's thighs and knees. Moreover, it is important to provide unobstructed room under the working surface for the feet and legs so that users can easily shift their posture. Knee space height and width and leg depth are the three key factors for the design of clearance space under working surfaces (see Figure 2). The recommended minimum width for leg clearance is 51 cm, while the preferred minimum width is 61 cm (ANSI 1988). The minimum depth under the work surface from the operator's edge of the work surface should be 38 cm for clearance at the knee level and 60 cm at the toe level (ANSI 1988). A good workstation design accounts for individual body sizes and often exceeds minimum clearances to allow for free postural movement. Table height has been shown to be an important contributor to computer users' musculoskeletal problems. In particular, tables that are too high cause the keyboard to be too high for many operators. The standard desk height of 30 in. (76 cm) is often too high for most people to attain the proper arm angle when using the keyboard. This puts undue pressure on the hands, wrists, arms, shoulders, and neck. It is desirable for table heights to vary with the trunk height of the operator. Height-adjustable tables are effective for this. Adjustable multisurface tables enable good posture by allowing the keyboard and display to be independently adjusted to appropriate keying and viewing heights for each individual and each task. Tables that cannot be adjusted easily are not appropriate when used by several individuals of differing sizes. If adjustable tables are used, ease of adjustment is essential. Adjustments should be easy to make, and operators should be instructed (trained) in how to adjust the workstation to be comfortable and safe. Specifications for the height of working surfaces vary by whether the table is adjustable or fixed in height and whether there is a single working surface or multiple working surfaces. Remember that adjustable-height working surfaces are strongly recommended. However, if the working surface height is not adjustable, the proper height for a nonadjustable working surface is about 70 cm (floor to top of surface) (ANSI 1988). Adjustable tables allow vertical adjustment of the keyboard and display; some allow for independent adjustment of the keyboard and display. For single adjustable working surfaces, the working surface height adjustment range should be 70–80 cm. For independently adjustable working surfaces for the keyboard and screen, the appropriate height range is 59–71 cm for the keyboard surface and 70–80 cm for the screen (ANSI 1988).
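The clearance and height figures above lend themselves to a small conformance sketch. The names are invented here; the values are the ANSI (1988) numbers quoted in the text, with all dimensions in cm.

```python
# Minimum clearance and surface-height values quoted in the text
# (ANSI 1988); dictionary keys are illustrative. Dimensions in cm.
ANSI_1988 = {
    "leg_clearance_width_min": 51,     # preferred minimum: 61
    "knee_depth_min": 38,
    "toe_depth_min": 60,
    "fixed_surface_height": 70,
    "keyboard_surface_range": (59, 71),
    "screen_surface_range": (70, 80),
}

def clearance_ok(width_cm, knee_depth_cm, toe_depth_cm, spec=ANSI_1988):
    """Check under-surface clearance against the quoted minimums."""
    return (width_cm >= spec["leg_clearance_width_min"]
            and knee_depth_cm >= spec["knee_depth_min"]
            and toe_depth_cm >= spec["toe_depth_min"])

print(clearance_ok(61, 40, 65))  # True
print(clearance_ok(48, 40, 65))  # False (width below the 51 cm minimum)
```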
2.7.2. The Chair
Poorly designed chairs can contribute to computer user discomfort. Chair adjusta
bility in terms of height, seat angle, lumbar support, and armrest height and an
gle reduces the pressure and loading on the musculoskeleture of the back, legs,
shoulders, neck, and arms. In addition, how the chair supports the movement of t
he user (the chairs action) helps to maintain proper seated posture and encourage
s good movement patterns. A chair that provides swivel action encourages movemen
t, while backward tilting increases the number of postures that can be assumed.
The chair height should be adjustable so that the feet can rest rmly on the oor wi
th minimal pressure beneath the thighs. The minimum range of adjustment for seat
height should be 3852 cm (NAS 1983; Smith 1984; ANSI 1988). Modern chairs also p
rovide an action that supports the back (spine) when seated. Examples of such ch
airs are the Leap by Steelcase, Inc. and the Aeron by Herman Miller. To enable s
horter users to sit with their feet on the oor without compressing their thighs,
it may be necessary to add a footrest. A well-designed footrest has the followin
g features: (1) it is inclined upward slightly (about 515), (2) it has a nonskid s
urface, (3) it is heavy enough that it does not slide easily across the oor, (4)
it is large enough for the feet to be rmly planted, and (5) it is portable. The s
eat pan is where the users buttocks sits on the chair. It is the part that direct
ly supports the weight of the buttocks. The seat pan should be wide enough to pe
rmit operators to make slight shifts in posture from side to side. This not only
helps to avoid static postures but also accommodates a large range of individua
l buttock sizes with a few seat pan widths. The minimum seat pan width should be
45 cm and the minimum depth 3843 cm (ANSI 1988). The front edge of the chair sho
uld be well rounded downward to reduce pressure on the underside of the thighs,
which can affect blood ow to the legs and feet. The seat needs to be padded to th
e proper rmness that ensures an even distribution of pressure on the thighs and b
uttocks. A properly padded seat should compress about one-half to one inch when
a person sits on it. Some experts feel that the seat front should be elevated sl
ightly (up to 7°), while others feel it should be lowered slightly (about 5°) (ANSI 1988). There is little agreement among the experts about which is correct (Grandjean 1979, 1984). Many chair manufacturers provide adjustment of the front angle so the user can have the preferred tilt angle, either forward or backward. The tension for leaning backward and the backward tilt angle of the backrest should be adjustable. Inclination of the chair backrest is important for users to be able to lean forward or back in a comfortable manner while maintaining a correct relationship between the seat pan angle and the backrest inclination. A backrest inclination of about 110° is considered the best position by many experts (Grandjean 1984). However, studies have shown that operators may incline backward as much as 125°. Backrests that tilt to allow an inclination of up to 125–130° are a good idea. The advantage of having an independent tilt angle adjustment is that the backrest tilt will then have little or no effect on the front seat height. This also allows operators to shift postures easily and often. Chairs with full backrests that provide lower back (lumbar) support and upper back (lower shoulder) support are preferred. This allows employees to lean backward or forward, adopting a relaxed posture and resting the back muscles. A full backrest with a height of around 45–51 cm is recommended (ANSI 1988). However, some of the newer chair designs do not have the bottom of the backrest go all the way to the seat pan. This is acceptable as long as the lumbar back is properly supported. To prevent back strain with such chairs, it is recommended that they have midback (lumbar) support, since the lumbar region is one of the most highly loaded parts of the spine. For most computer workstations, chairs with rolling castors (or wheels) are desirable. They are easy to move and facilitate the postural adjustment of users, particularly when the operator has to access equipment or materials that are on secondary working surfaces. Chairs should have a five-star base for tipping stability (ANSI 1988). Another important chair feature is armrests. Pros and cons for the use of armrests at computer workstations have been advanced. On the one hand, some chair armrests can present problems of restricted arm movement, interference with keyboard operation, pinching of fingers between the armrest and table, restriction of chair movement (such as under the work table), irritation of the arms or elbows, and adoption of awkward postures. On the other hand, well-designed armrests or elbow rests can provide support for resting the arms to prevent or reduce fatigue, especially during breaks from typing. Properly designed armrests can overcome the problems mentioned because they can be raised, lowered, and angled to fit the user's needs. Removable armrests are an advantage because they provide greater flexibility for individual user preference, especially for users who develop discomfort and pain from the pressure of the armrest on their arms.
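The dimensional guidance above can be collected into a simple configuration check. This is an illustrative sketch: the function and field names are this example's own, and only the numeric ranges (backrest inclination adjustable up to about 125–130°, full backrest height of 45–51 cm per ANSI 1988) come from the text.

```python
# Illustrative check of a chair configuration against the guideline values
# cited above (ANSI 1988; Grandjean 1984). Only the numeric ranges come
# from the text; the function and parameter names are assumptions.

def check_chair(backrest_angle_deg, backrest_height_cm):
    """Return a list of guideline violations (empty list means within guidelines)."""
    issues = []
    # Backrest inclination: ~110 degrees preferred, adjustable up to 125-130.
    if not (90 <= backrest_angle_deg <= 130):
        issues.append("backrest inclination outside the adjustable range up to ~125-130 degrees")
    # Full backrest height: about 45-51 cm (ANSI 1988).
    if not (45 <= backrest_height_cm <= 51):
        issues.append("full backrest height outside the recommended 45-51 cm")
    return issues
```

For example, `check_chair(110, 48)` returns an empty list, while a chair reclining to 140° with a 40 cm backrest fails both checks.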
2.7.3.
Other Workstation Considerations
HUMAN-COMPUTER INTERACTION
1205

An important component of the workstation that can help reduce musculoskeletal loading is a document holder. When properly designed and positioned, document holders reduce awkward inclinations, as well as frequent up-and-down and back-and-forth movements of the head and neck. They permit source documents to be placed in a central location at approximately the same viewing distance and height as the computer screen. This eliminates needless head and neck movements and reduces eyestrain. In practice, some flexibility about the location, adjustment, and position of the document holder should be maintained to accommodate both task requirements and operator preferences. The document holder should have a matte finish so that it does not produce reflections or act as a glare source. Privacy requirements include both visual and acoustical control of the workplace. Visual control prevents physical intrusions and distractions, contributes to protecting confidential or private conversations, and prevents the individual from feeling constantly watched. Acoustical control prevents distracting and unwanted noise, whether from machines or conversation, and permits speech privacy. While certain acoustical methods and materials, such as free-standing panels, are used to control the general office noise level, they can also be used for privacy. In open-office designs they can provide workstation privacy. Generally, noise control at a computer workstation can be achieved through the following methods:
Use of vertical barriers, such as acoustical screens or panels.
Selection of floor, ceiling, wall, and workstation materials and finishes according to their power to control noise.
Placement of workstations to enhance individual privacy.
Locating workstations away from areas likely to generate noise (e.g., printer rooms, areas with heavy traffic).

Each of these methods can be used individually or in combination to account for the specific visual and acoustical requirements of the task or individual employee needs. Planning for privacy should not be made at the expense of visual interest or spatial clarity. For instance, providing wide visual views can prevent the individual from feeling isolated. Thus, a balance between privacy and openness enhances user comfort, work effectiveness, and office communications. Involving the employee in decisions about privacy can help in deciding the compromises between privacy and openness.
2.8.
Work Practices
Good ergonomic design of computer workstations has the potential to reduce visual and musculoskeletal complaints and disorders as well as increase employee performance. However, regardless of how well a workstation is designed, if operators must adopt static postures for a long time, they can still have performance, comfort, and health problems. Thus, designing tasks that induce employee movement, in addition to providing work breaks, can contribute to comfort and help relieve employees' fatigue.
2.8.1.
Work Breaks
As a minimum, a 15-minute break from working should be taken after 2 hours of continuous computer work (CDC 1980; NIOSH 1981). Breaks should be more frequent when visual, muscular, and mental loads are high and when users complain of visual and musculoskeletal discomfort and psychological stress. With such intense, high-workload tasks, a work break of 10 minutes should be taken after 1 hour of continuous computer work. More frequent breaks for alternative work that does not pose demands similar to the primary computer work can be taken after 30 minutes of continuous computer work. Rest breaks provide an opportunity for recovery from loc
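The break guidance above (CDC 1980; NIOSH 1981) can be sketched as a simple lookup. The three workload labels are this sketch's own shorthand, not terms from the source.

```python
# Sketch of the work-break guidance above as a lookup table.
# The workload labels ("routine", "intense", "very_intense") are assumptions.

def recommended_break(workload):
    """Map a workload intensity label to the break pattern described in the text."""
    schedule = {
        # Minimum: a 15-minute break after 2 hours of continuous computer work.
        "routine": {"work_minutes": 120, "break_minutes": 15},
        # Intense, high-workload tasks: 10 minutes after 1 hour.
        "intense": {"work_minutes": 60, "break_minutes": 10},
        # Very demanding work: switch to dissimilar alternative work after 30 minutes.
        "very_intense": {"work_minutes": 30, "break_minutes": None},
    }
    return schedule[workload]
```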
1206
PERFORMANCE IMPROVEMENT MANAGEMENT
including typing, pointing, and clicking. This set of standard interaction techniques is evolving, with a transition from graphical user interfaces to perceptual user interfaces that seek to interact with users more naturally through multimodal and multimedia interaction (Turk and Robertson 2000). In either case, however, these interfaces are characterized by interaction techniques that try to match user capabilities and limitations to the interface design. Cognitive design efforts are guided by the requirements definition, user profile development, task analysis, task allocation, and usability goal setting that result from an intrinsic understanding gained from the target work environment. Although these activities are listed and presented in this order, they are conducted iteratively throughout the system development life cycle.
3.2.
Requirements Definition
Requirements definition involves the specification of the necessary goals, functions, and objectives to be met by the system design (Eberts 1994; Rouse 1991). The intent of the requirements definition is to specify what a system should be capable of doing and the functions that must be available to users to achieve stated goals. Karat and Dayton (1995) suggest that developing a careful understanding of system requirements leads to more effective initial designs that require less redesign. Ethnographic evaluation can be used to develop a requirements definition that is necessary and complete to support the target domain (Nardi 1997). Goals specify the desired system characteristics (Rouse 1991). These are generally qualitatively stated (e.g., automate functions, maximize use, accommodate user types) and can be met in a number of ways. Functions define what the system should be capable of doing without specifying the specifics of how the functions should be achieved. Objectives are the activities that the system must be able to accomplish in support of the specified functions. Note that the system requirements, as stated in terms of goals, functions, and objectives, can be achieved by a number of design alternatives. Thus, the requirements definition specifies what the system should be able to accomplish without specifying how this should be realized. It can be used to guide the overall design effort to ensure the desired end is achieved. Once a set of functional and feature requirements has been scoped out, an understanding of the current work environment is needed in order to design systems that effectively support these requirements.
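The goals / functions / objectives breakdown described above can be mirrored in a minimal record structure. This is a sketch; the field names and example entries are illustrative, not from the source.

```python
# A minimal data structure mirroring the goals / functions / objectives
# breakdown of a requirements definition (Rouse 1991). Example values
# are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RequirementsDefinition:
    goals: list = field(default_factory=list)       # qualitative, e.g. "maximize use"
    functions: list = field(default_factory=list)   # what the system should do (not how)
    objectives: list = field(default_factory=list)  # activities supporting the functions

reqs = RequirementsDefinition(
    goals=["accommodate novice and expert user types"],
    functions=["retrieve customer records"],
    objectives=["search records by name in under 2 seconds"],
)
```

Note that each goal can be met by several functions, and each function supported by several objectives, so in practice the lists would be cross-referenced rather than flat.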
3.3.
Contextual Task Analysis
The objective of contextual task analysis is to achieve a user-centered model of current work practices (Mayhew 1999). It is important to determine how users currently carry out their tasks, which individuals they interact with, what tools support the accomplishment of their job goals, and the resulting products of their efforts. Formerly this was often achieved by observing a user or set of users in a laboratory setting and having them provide verbal protocols as they conducted task activities in the form of use cases (Hackos and Redish 1998; Karat 1988; Mayhew 1999; Vermeeren 1999). This approach, however, fails to take into consideration the influences of the actual work setting. Through an understanding of the work environment, designers can leverage current practices that are effective while designing out those that are ineffective. The results of a contextual task analysis include work environment and task analyses, from which mental models can be identified and user scenarios and task-organization models (e.g., use sequences, use flow diagrams, use workflows, and use hierarchies) can be derived (Mayhew 1999). These models and scenarios can then help guide the design of the system. As depicted in Figure 3, contextual task analysis consists of three main steps. Effective interactive system design thus comes from a basis in direct observation of users in their work environments rather than assumptions about the users or observations of their activities in contrived laboratory settings (Hackos and Redish 1998). Yet contextual task analysis is sometimes overlooked because developers assume they know users or that their user base is too diverse, expensive, or time consuming to get to know. In most cases, however, observation of a small set of diverse users can provide critical insights that lead to more effective and acceptable system designs. For usability evaluations, Nielsen (1993) found that the greatest payoff occurs with just three users.
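The diminishing-returns point behind the three-user finding can be illustrated with Nielsen and Landauer's problem-discovery model, in which the proportion of usability problems found by n evaluators is 1 - (1 - L)^n. The value L (the probability that a single user exposes a given problem) is often quoted around 0.31; treat that figure as an assumption of this sketch.

```python
# Nielsen and Landauer's problem-discovery curve: proportion of usability
# problems found with n test users. L = 0.31 is a commonly quoted average,
# assumed here for illustration.

def problems_found(n, L=0.31):
    """Expected proportion of usability problems exposed by n users."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 15):
    print(n, round(problems_found(n), 2))
# Under this assumption, three users already expose roughly two-thirds
# of the problems, and returns flatten quickly thereafter.
```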
3.3.1.
Background Information
It is important when planning a task analysis to first become familiar with the work environment. If analysts do not understand work practices, tools, and jargon prior to commencing a task analysis, they can easily become confused and unable to follow the task flow. Further, if the first time users see analysts they have clipboard and pen in hand, users are likely to resist being observed or to change their behaviors during observation. Analysts should develop a rapport with users by spending time with them, participating in their task activities when possible, and listening to their needs and concerns. Once users are familiar and comfortable with analysts, and analysts are likewise versed in work practices, data collection can commence. During this familiarization, analysts can also capture data to characterize users.
Figure 3 The Steps of Contextual Task Analysis: (1) gather background information on users, tasks, and the environment; (2) collect and analyze data via observation and interviews; (3) construct models of work practices.
3.3.2.
Characterizing Users
It is ultimately the users who will determine whether a system is adopted into their lives. Designs that frustrate, stress, or annoy users are not likely to be embraced. Based on the requirements definition, the objective of designers should be to develop a system that can meet specified user goals, functions, and objectives. This can be accomplished through an early and continual focus on the target user population (Gould et al. 1997). It is inconceivable that design efforts would bring products to market without thoroughly determining who the user is. Yet developers, as they expedite system development to rush products to market, are often reluctant to characterize users. In doing so, they may fail to recognize the amount of time they spend speculating upon what users might need, like, or want in a product (Nielsen 1993). Ascertaining this information directly by querying representative users can be both more efficient and more accurate. Information about users should provide insights into differences in their computer experience, domain knowledge, and amount of training on similar systems (Wixon and Wilson 1997). The results can be summarized in a narrative format that provides a user profile of each intended user group (e.g., primary users, secondary users, technicians and support personnel). No system design, however, will meet the requirements of all types of users. Thus, it is essential to identify, define, and characterize target users. Separate user profiles should be developed for each target user group. The user profiles can then feed directly into the task analysis by identifying the user groups for which tasks must be characterized (Mayhew 1999). Mayhew (1999) presents a step-by-step process for developing user profiles. First, a determination of user categories is made by identifying the intended user groups for the target system. When developing a system for an organization, this information may come directly from preexisting job categories. Where those do not exist, marketing organizations often have target user populations identified for a given system or product. Next, the relevant user characteristics must be identified. User profiles should be specified in terms of psychological (e.g., attitudes, motivation), knowledge and experience (e.g., educational background, years on job), job and task (e.g., frequency of use), and physical (e.g., stature, visual impairments) characteristics (Mayhew 1999; Nielsen 1993; Wixon and Wilson 1997). While many of these user attributes can be obtained via user profile questionnaires or interviews, psychological characteristics may be best identified via ethnographic evaluation, where a sense of the work environment temperament can be obtained. Once this information is obtained, a summary of the key characteristics for each target user group can be developed, highlighting their implications for the system design. By understanding these characteristics, developers can better anticipate such issues as learning difficulties and specify appropriate levels of interface complexity. System design requirements involve an assessment of the required levels of such factors as ease of learning, ease of use, level of satisfaction, and workload for each target user group (see Table 2). Individual differences within a user population should also be acknowledged (Egan 1988; Hackos and Redish 1998). While users differ along many dimensions, key areas of user differences have been identified that significantly influence their experience with interactive systems. Users may differ
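The four user-profile dimensions described above (Mayhew 1999) can be captured in a simple record per target user group. This is a sketch; the field names and the example group are illustrative assumptions.

```python
# A sketch of Mayhew's (1999) user-profile dimensions as a record.
# The example content is an assumption, not data from the source.
from dataclasses import dataclass

@dataclass
class UserProfile:
    group: str                  # e.g. primary users, technicians, support personnel
    psychological: dict         # attitudes, motivation
    knowledge_experience: dict  # educational background, years on job
    job_task: dict              # frequency of use
    physical: dict              # stature, visual impairments

primary = UserProfile(
    group="primary users",
    psychological={"attitude_to_automation": "positive"},
    knowledge_experience={"years_on_job": 5},
    job_task={"frequency_of_use": "daily"},
    physical={"visual_impairments": "corrected"},
)
```

A separate instance would be built for each target user group, and the set of profiles then identifies the groups for which tasks must be characterized.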
a domain expert is somehow queried about their task knowledge. It may be beneficial to query a range of users, from novice to expert, to identify differences in their task practices. In either case, it is important to select as informants individuals who can readily verbalize how a task is carried out (Eberts 1994). When a very detailed task analysis is required, formal techniques such as TAKD (Diaper 1989; Kirwan and Ainsworth 1992) or GOMS (Card et al. 1983) can be used to delineate task activities (see Chapter 39). TAKD uses knowledge-representation grammars (i.e., sets of statements used to describe system interaction) to represent task knowledge in a task-descriptive hierarchy. This technique is useful for characterizing complex tasks that lack fine-detail cognitive activities (Eberts 1994). GOMS is a predictive modeling technique that has been used to characterize how humans interact with computers. Through a GOMS analysis, task goals are identified, along with the operators (i.e., perceptual, cognitive, or motor acts) and methods (i.e., series of operators) to achieve those goals and the selection rules used to select between alternative methods. The benefit of TAKD and GOMS is that they provide an in-depth understanding of task characteristics, which can be used to quantify the benefits of one design vs. another in terms of consistency (TAKD) or performance time gains (GOMS) (see Gray et al. 1993; McLeod and Sherwood-Jones 1993 for examples of the effective use of GOMS in design). This deep knowledge, however, comes at a great cost in terms of the time needed to conduct the analysis. Thus, it is important to determine the level of task analysis required for informed design. While formal techniques such as GOMS can lead to very detailed analyses (i.e., at the perceive, think, act level), often such detail is not required for effective design. Jeffries (1997) suggests that one can loosely determine the right level of detail by determining when further decomposition of the task would not reveal any interesting new subtasks that would enlighten the design. If detailed task knowledge is not deemed requisite, informal task-analysis techniques should be adopted. Interviews are the most common informal technique for gathering task information (Jeffries 1997; Kirwan and Ainsworth 1992; Meister 1985). In this technique, informants are asked to verbalize the strategies, rationale, and knowledge they use to accomplish task goals and subgoals (Ericsson and Simon 1980). As each informant's mental model of the tasks they verbalize is likely to differ, it is advantageous to interview at least two to three informants to identify the common flow of task activities. Placing the informant in the context of the task domain and having him or her verbalize while conducting tasks affords more complete task descriptions while providing insights on the environment the task is performed within. It can sometimes be difficult for informants to verbalize their task performance because much of it may be automatized (Eberts 1994). When conducting interviews, it is important to use appropriate sampling techniques (i.e., sample at the right time with enough individuals), avoid leading questions, and follow up with appropriate probe questions (Nardi 1997). While the interviewer should generally abstain from interfering with task performance, it is sometimes necessary to probe for more detail when it appears that steps or subgoals are not being communicated. Eberts (1994) suggests that the human information-processing model can be used to structure verbal protocols and determine what information is needed and what is likely being left out. Observation during task activity and shadowing workers throughout their daily work activities are time-consuming task-analysis techniques, but they can prove useful when it is difficult for informants to verbalize their task knowledge (Jeffries 1997). These techniques can also provide information about the environment in which tasks are performed, such as tacit behaviors, social interactions, and physical demands, which are difficult to capture with other techniques (Kirwan and Ainsworth 1992). While observation and shadowing can be used to develop task descriptions, surveys are particularly useful task-analysis tools when there is significant variation in the manner in which tasks are performed or when it is important to determine specific task characteristics, such as frequency (Jeffries 1997; Nielsen 1993). Surveys can also be used a
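The performance-time predictions GOMS supports can be illustrated at the keystroke level: once a method is decomposed into primitive operators, execution time is predicted by summing operator times. The operator durations below are the commonly quoted Keystroke-Level Model averages associated with Card et al. (1983); the example method string is an illustrative assumption.

```python
# Keystroke-level sketch of a GOMS performance-time prediction.
# Operator durations are the commonly quoted KLM averages; the example
# method is an assumption for illustration.

OPERATOR_SECONDS = {
    "K": 0.2,   # keystroke (skilled typist average)
    "P": 1.1,   # point with a mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(method):
    """Predict execution time (seconds) for a method written as an operator string."""
    return round(sum(OPERATOR_SECONDS[op] for op in method), 2)

# E.g. a menu-based method: mentally prepare, home to mouse, point twice.
print(predict_time("MHPP"))  # prints 3.95
```

Comparing the summed times of two alternative methods for the same goal is exactly the kind of design comparison (performance time gains of one design vs. another) the text attributes to GOMS.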
Figure 4 Stages of Goal Accomplishment: Form Goal → Form Intention → Specify Action → Execute Action → Perceive System State → Interpret System State → Evaluate Outcome.
essential to provide feedback (Nielsen 1993) on the executed action. Knowing that users often specify alternative methods to achieve a goal or change methods during the course of goal seeking, designers can aim to support diverse approaches. Further, recognizing that users commit errors emphasizes the need for undo functionality. The results from a task analysis provide insights into the optimal structuring of task activities and the key attributes of the work environment that will directly affect interactive system design (Mayhew 1999). The analysis enumerates the tasks that users may want to accomplish to achieve stated goals through the preparation of a task list or task inventory (Hackos and Redish 1998; Jeffries 1997). A model of task activities, including how users currently think about, discuss, and perform their work, can then be devised based on the task analysis. To develop task-flow models, it is important to consider the timing, frequency, criticality, difficulty, and responsible individual for each task on the list. In seeking to conduct the analysis at the appropriate level of detail, it may be beneficial initially to limit the task list and associated model to the primary 10–20 tasks that users perform. Once developed, task models can be used to determine the functionality the system design must support. Further, once the task models and desired functionality are characterized, use scenarios (i.e., concrete task instances, with related contextual [i.e., situational] elements and stated resolutions) can be developed that can be used to drive both the system design and evaluation (Jeffries 1997).
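The task-list attributes named above (timing, frequency, criticality, difficulty, and responsible individual) can be captured in a simple inventory record. All names and example content here are illustrative assumptions.

```python
# Sketch of a task inventory carrying the attributes the text names for
# task-flow modeling. Example entries are assumptions.
from dataclasses import dataclass

@dataclass
class TaskEntry:
    name: str
    timing: str        # when in the workflow the task occurs
    frequency: str     # e.g. hourly, daily
    criticality: str   # low / medium / high
    difficulty: str
    responsible: str   # the individual or role that performs the task

inventory = [
    TaskEntry("log incoming order", "start of shift", "hourly", "high", "low", "clerk"),
]

# Limiting the initial model to the primary 10-20 tasks keeps the analysis
# at a tractable level of detail, as suggested above.
primary_tasks = inventory[:20]
```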
3.3.4.
Constructing Models of Work Practices
While results from the task analysis provide task-flow models, they also can provide insights on the manner in which individuals model these process flows (i.e., mental models). Mental models synthesize several steps of a process into an organized unit (Allen 1997). An individual may model several aspects of a given process, such as the capabilities of a tool or machine, expectations of coworkers, or understandings of support processes (Fischer 1991). These models allow individuals to predict how a process will respond to a given input, explain a process event, or diagnose the reasons for a malfunction. Mental models are often incomplete and inaccurate, however, so understandings based on these models can be erroneous. As developers design systems, they will develop user models of target user groups (Allen 1997). These models should be relevant (i.e., able to make predictions as users would), accurate, adaptable to changes in user behavior, and generalizable. Proficient user modeling can assist developers in designing systems that interact effectively with users. Developers must recognize that users will both come to the system interaction with preconceived mental models of the process being automated and develop models of the automated system interaction. They must thus seek to identify how users represent their existing knowledge about a process and how this knowledge fits together in learning and performance so that they can design systems that engender the development of an accurate mental model of the system interaction (Carroll and Olson 1988). By understanding how users model processes, developers can determine how users currently think and act, how these behaviors can be supported by the interactive system design when advantageous, and how they can be modified and improved upon via system automation.
3.3.5.
Task Allocation
In moving toward a system design, once tasks have been analyzed and associated mental models characterized, designers can use this knowledge to address the relationship between the human and the interactive system. Task allocation is a process of assigning the various tasks identified via the task analysis to agents (i.e., users), instruments (e.g., interactive systems), or support resources (e.g., training, manuals, cheat sheets). It defines the extent of user involvement vs. computer automation in system interaction (Kirwan and Ainsworth 1992). In some system-development efforts, formal task allocation will be conducted; in others, it is a less explicit yet inherent part of the design process. While there are many systematic techniques for conducting task analysis (see Kirwan and Ainsworth 1992), the same is not true of task allocation (Sheridan 1997a). Further, task allocation is complicated by the fact that tasks or subtasks are seldom truly independent, and thus their interdependence must be effectively designed into the human-system interaction. Rather than a deductive assignment of tasks to human or computer, task allocation thus becomes a consideration of the multitude of design alternatives that can support these interdependencies. Sheridan (1997a,b) delineates a number of task-allocation considerations that can assist in narrowing the design space (see Table 4). In allocating tasks, one must also consider what will be assigned to support resources. If the system is not a walk-up-and-use system but one that will require learning, then designers must identify what knowledge is appropriate to allocate to support resources (e.g., training courses, manuals, online help). Training computer users in their new job requirements and in how the technology works has often been a neglected element in office automation. Many times the extent of operator training is limited to reading the manual and learning by trial and error. In some cases, operators may have classes that go over the material in the manual and give hands-on practice with the new equipment for limited periods of time. The problem with these approaches is that there is usually insufficient time for users to develop the skills and confidence to use the new technology adequately. It is thus essential to determine what online resources will be required to support effective system interaction. Becoming proficient in hardware and software use takes longer than just the training course time. Often several days, weeks, or even months of daily use are needed to become an expert, depending on the difficulty of the application and the skill of the individual. Appropriate support resources should be designed into the system to assist in developing this proficiency. Also, it is important to remember that each individual learns at his or her own pace, and therefore some differences in proficiency will be seen among individuals. When new technology is introduced, training should tie in skills from the former methods of doing tasks to facilitate the transfer of knowledge. Sometimes new skills clash with those formerly learned, and then more time for training and practice is necessary to achieve good results. If increased performance or labor savings are expected with the new technology, it is prudent not to expect results too quickly. Rather, it is wise to develop users' skills completely if the most positive results are to be achieved.
TABLE 4 Considerations in the Task-Allocation Process

Design issues to consider:
- Strict task requirements can complicate or make infeasible appropriate task allocation.
- Highly repetitive tasks are generally appropriate for automation; dealing with the unexpected or cognitively complex tasks is generally appropriate for humans.
- Users' mental models may uncover expected allocation schemes.
- Bound the design space by assessing total computer automation vs. total human manual control solutions.
- Sheridan (1997a,b) offers a 10-point scale of allocation between the extremes that assists in assessing intermediate solutions.
- Strict assignments are ineffectual; a general principle is to leave the big picture to the human and the details to the computer.
- Will the computer and user trade outputs of their processing, or will they concurrently collaborate in task performance?
- While many criteria affect overall system interaction, a small number of criteria are generally important for an individual task.
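One consideration from Table 4 (repetitive work to the computer, unexpected or cognitively complex work to the human) can be turned into a toy allocation rule. The task attributes and the fallback to support resources are this sketch's assumptions, not a procedure from the source.

```python
# Toy allocation rule reflecting one Table 4 consideration: highly
# repetitive tasks go to the computer; unexpected or cognitively complex
# tasks go to the human. The attribute names and the support-resources
# fallback are assumptions.

def allocate(task):
    if task.get("repetitive") and not task.get("cognitively_complex"):
        return "computer"
    if task.get("cognitively_complex") or task.get("unexpected"):
        return "human"
    # Knowledge that must be learned rather than performed is assigned to
    # support resources (training courses, manuals, online help).
    return "support resources"

print(allocate({"repetitive": True}))            # computer
print(allocate({"cognitively_complex": True}))   # human
```

In practice, as the text stresses, tasks are seldom independent, so a rule like this only bounds the design space rather than deciding the allocation.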
3.4.
Competitive Analysis and Usability Goal Setting
Once the users and tasks have been characterized, it is sometimes beneficial to conduct a competitive analysis (Nielsen 1993). Identifying the strengths and weaknesses of competitive products or existing systems provides a means to leverage strengths and resolve identified weaknesses. After users have been characterized, a task analysis performed, and, if necessary, a competitive analysis conducted, the next step in interactive system design is usability goal setting (Hackos and Redish 1998; Mayhew 1999; Nielsen 1993; Wixon and Wilson 1993). Usability objectives generally focus on effectiveness (i.e., the extent to which tasks can be achieved), intuitiveness (i.e., how learnable and memorable the system is), and subjective perception (i.e., how comfortable and satisfied users are with the system) (Eberts 1994; Nielsen 1993; Shneiderman 1992; Wixon and Wilson 1997). Setting such objectives will ensure that the usability attributes evaluated are those that are important for meeting task goals; that these attributes are translated into operational measures; that the attributes are generally holistic, relating to overall system / task performance; and that the attributes relate to specific usability objectives. Because usability is assessed via a multitude of potentially conflicting measures, equal weights often cannot be given to every usability criterion. For example, to gain subjective satisfaction, one might have to sacrifice task efficiency. Developers should specify the usability criteria of interest and provide operational goals for each metric. These metrics can be expressed as absolute goals (i.e., in terms of an absolute quantification) or as relative goals (i.e., in comparison to a benchmark system or process). Such metrics provide system developers with concrete goals to meet and a means to measure usability. This information is generally documented in the form of a usability attribute table and usability specification matrix (see Mayhew 1999).
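A minimal version of such a usability specification can pair each of the three objective families above (effectiveness, intuitiveness, subjective perception) with an operational goal. The metrics and numbers below are illustrative assumptions, not values from the source.

```python
# Minimal usability specification: one operational (absolute) goal per
# objective family named in the text. Metrics and thresholds are assumptions.

usability_spec = {
    "task_completion_rate_min": 0.95,  # effectiveness
    "errors_per_task_max": 1.0,        # intuitiveness proxy
    "satisfaction_min_1_to_7": 5.5,    # subjective perception
}

def evaluate(measured):
    """Compare measured values against the operational goals."""
    return {
        "effective": measured["task_completion_rate"] >= usability_spec["task_completion_rate_min"],
        "intuitive": measured["errors_per_task"] <= usability_spec["errors_per_task_max"],
        "satisfying": measured["satisfaction_1_to_7"] >= usability_spec["satisfaction_min_1_to_7"],
    }
```

Relative goals (e.g., "20% faster than the benchmark system") would be expressed the same way, with the benchmark measurement substituted for the fixed threshold.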
3.5.
User Interface Design
While design ideas evolve throughout the information-gathering stages, formal design of the interactive system commences once the relevant information has been obtained. The checklist in Table 5 can be used to determine whether the critical information items that support the design process have been addressed. Armed with this information, interactive system design generally begins with an initial definition of the design and evolves into a detailed design, from which iterative cycles of evaluation and improvement transpire (Martel 1998).
3.5.1.
Initial Design Definition
Where should one commence the actual design of a new interactive system? Often designers look to existing products within their own product lines or to competitors' products. This is a sound practice because it maintains consistency with existing successful products. This approach may be limiting, however, leading to evolutionary designs that lack design innovations. Where can designers obtain the ideas to fuel truly innovative designs that uniquely meet the needs of their users? Ethnographic evaluations can lead to many innovative design concepts that would never be realized in isolation from the work environment (Mountford 1990). The effort devoted to the early characterization of users and tasks, particularly when conducted in the context of the work environment, is often rewarded in the generation of innovative design ideas. Mountford (1990) has provided a number of techniques to assist in eliciting design ideas based on the objects, artifacts, and other information gathered during the contextual task analysis (see Table 6). To generate a multitude of design ideas, it is beneficial to use a parallel design strategy (Nielsen 1993), where more than one designer sets out in isolation to generate design concepts. A low level
TABLE 5 Checklist of Information Items

- Identified necessary goals, functions, and objectives to be met by system design
- Became familiar with practices, tools, and vernacular of work environment
- Characterized user profiles in terms of psychological characteristics, knowledge and experience, job and task characteristics, and physical attributes
- Acknowledged individual differences within target user population
- Developed user models that are relevant, accurate, adaptable to changes in user behavior, and generalizable
- Developed a user-centered model of current work practices via task analysis
- Defined extent of user involvement vs. computer automation in system interaction, as well as required support resources (e.g., manuals)
- Conducted a competitive analysis
- Set usability goals
isual access (Kim and Hirtle 1995). Parunak (1989) accomplished this in a hypertext environment by providing between-path mechanisms (e.g., backtracking capability and guided tours), annotation capabilities that allow users to designate locations that can be accessed directly (e.g., bookmarks in hypertext), and links and filtering techniques that simplify a given topology. It is important to note that a metaphor does not have to be a literal similarity (Ortony 1979) to be effective. In fact, Gentner (1983) and Gentner and Clement (1988) suggest that people seek to identify relational rather than object-attribute comparisons in comprehending metaphors. Based on
1214
PERFORMANCE IMPROVEMENT MANAGEMENT
Gentner's structure-mapping theory, the aptness of a metaphor should increase with the degree to which its interpretation is relational. Thus, when interpreting a metaphor, people should tend to extend relational rather than object-attribute information from the base to the target. The learning efficacy of a metaphor, however, is based on more than the mapping of relational information between two objects (Carroll and Mack 1985). Indeed, it is imperative to consider the open-endedness of metaphors and leverage the utility of not only correspondence, but also noncorrespondence, in generating appropriate mental models during learning. Nevertheless, the structure-mapping theory can provide a framework for explaining and designing metaphors that enhance the design of interactive systems. Neale and Carroll (1997) have provided a five-stage process from which design metaphors can be conceived (see Table 7). Through the use of this process, developers can generate coherent, well-structured metaphoric designs.

3.5.2.3. Use Scenarios, Use Sequences, Use Flow Diagrams, Use Workflows, and Use Hierarchies

Once a metaphoric design has been defined, its validity and applicability to task goals and subgoals can be identified via use scenarios, use sequences, use flow diagrams, use workflows, and use hierarchies (see Figure 5) (Hackos and Redish 1998). Use scenarios are narrative descriptions of how the goals and subgoals identified via the contextual task analysis will be realized via the interface design. Beyond the main flow of task activities, they should address task exceptions, individual differences, and anticipated user errors. In developing use scenarios, it can be helpful to reference task-allocation schemes (see Section 3.3.5). These schemes can help to define what will be achieved by users via the interface, what will be automated, and what will be assigned to support resources in the use scenarios. If metaphoric designs are robust, they should be able to withstand the interactions demanded by a variety of use scenarios with only modest modifications required. Design concepts to address required modifications should evolve from the types of interactions envisioned by the scenarios. Once the running of use scenarios fails to generate any required design modifications, their use can be terminated.

If parts of a use scenario are difficult for users to achieve or for designers to conceptualize, use sequences can be used. Use sequences delineate the sequence of steps required for the scenario subsection being focused upon. They specify the actions and decisions required of the user and the interactive system, the objects needed to achieve task goals, and the required outputs of the system interaction. Task workarounds and exceptions can be addressed with use sequences to determine whether the design should support these activities. Providing detailed sequence specifications highlights steps that are not appropriately supported by the design and thus require redesign.

When there are several use sequences supported by a design, it can be helpful to develop use flow diagrams for a defined subsection of the task activities. These diagrams delineate the alternative paths and related intersections (i.e., decision points) users encounter during system interaction. The representative entities that users encounter throughout the use flow diagram become the required objects and actions for the interface design.
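The checking described above can be sketched in code. The following is a minimal illustration (all class names, step entries, and the toy design are hypothetical, not from the chapter) of comparing a use sequence against the objects and actions a candidate design currently supports, so that unsupported steps surface as redesign candidates:

```python
# Illustrative sketch: flag use-sequence steps that the current design
# does not support. Names and data are hypothetical examples.
from dataclasses import dataclass

@dataclass
class UseStep:
    actor: str   # "user" or "system"
    action: str  # action required at this step
    obj: str     # object the action is performed on

def unsupported_steps(sequence, design_actions, design_objects):
    """Return steps whose action or object the design does not yet provide."""
    return [s for s in sequence
            if s.action not in design_actions or s.obj not in design_objects]

sequence = [
    UseStep("user", "review", "options screen"),
    UseStep("user", "select", "desired function"),
    UseStep("system", "display", "result"),
]
design_actions = {"review", "display"}
design_objects = {"options screen", "result", "desired function"}

gaps = unsupported_steps(sequence, design_actions, design_objects)
for step in gaps:
    print(f"redesign needed: {step.actor} cannot '{step.action}' {step.obj}")
```

Here the "select" step would be flagged, mirroring how detailed sequence specifications highlight steps the design does not appropriately support.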
TABLE 7 Stages of Metaphoric Design Generation

Design outcomes by stage:
1. Required functions, features, and system capabilities are identified (see Section 3.2).
2. Artifacts and other objects in the environment identified via the contextual task analysis (see Section 3.3) can assist in generating design concepts (see Table 6).
3. Identify what users do (goals and subgoals) and the methods they use to accomplish these objectives (actions and objects), and map these to the physical elements available in the metaphor; use cases can be used for this stage.
4. Identify where the metaphor has no analogous function for desired goals and subgoals.
5. Determine where composite metaphors or support resources are needed (e.g., online help, agent assistance) so problems related to mismatches can be averted.
HUMAN-COMPUTER INTERACTION
Figure 5 Design-Generation Techniques. The figure illustrates a use scenario ("The user picks up the manual and determines if the functionality is available for use..."), a use sequence ("1. The user arrives at the terminal. 2. A graphic of the current options appears on the screen. 3. The user reviews the options. 4. The user cannot readily access the desired functionality. 5. The user picks up the manual. ..."), a use hierarchy diagram, a use workflow diagram, and a use flow diagram.
When interactive systems are intended to yield savings in the required level of information exchange, use workflows can be used. These flows provide a visualization of the movement of users or information objects throughout the work environment. They can clearly denote whether a design concept will improve the information flow. Designers can first develop use workflows for the existing system interaction and then eliminate, combine, resequence, and simplify steps to streamline the flow of information.

Use hierarchies can be used to visualize the allocation of tasks among workers. By using sticky notes to represent each node in the hierarchy, these representations can be used to demonstrate the before- and after-task allocations. The benefits of the new task allocation engendered by the interactive system design should be readily perceived in hierarchical flow changes.

3.5.2.4. Design Support

Developers can look to standards and guidelines to direct their design efforts. Standards focus on advising the look of an interface, while guidelines address the usability of the interface (Nielsen 1993). Following standards and guidelines can lead to systems that are easy to learn and use due to a standardized look and feel (Buie 1999). Developers must be careful, however, not to follow these sources of design support blindly. An interactive system can be designed strictly according to standards and guidelines yet fail to physically fit users, support their goals and tasks, and integrate effectively into their environment (Hackos and Redish 1998).

Guidelines aim at providing sets of practical guidance for developers (Brown 1988; Hackos and Redish 1998; Marcus 1997; Mayhew 1992). They evolve from the results of experiments, theory-based predictions of human performance, cognitive psychology and ergonomic design principles, and experience. Several different levels of guidelines are available to assist system development efforts, including general guidelines applicable to all interactive systems, as well as category-specific (e.g., voice vs. touch screen interfaces) and product-specific guidelines (Nielsen 1993).

Standards are statements (i.e., requirements or recommendations) about interface objects and actions (Buie 1999). They address the physical, cognitive, and affective nature of computer interaction. They are written in general and flexible terms because they must be applicable to a wide variety of applications and target user groups. International (e.g., ISO 9241), national (e.g., ANSI, BSI),
military and government (e.g., MIL-STD 1472D), and commercial (e.g., Common User Access by IBM) entities write them. Standards are the preferred approach in Europe; the European Community promotes voluntary technical harmonization through the use of standards (Rada and Ketchell 2000). Buie (1999) has provided recommendations on how to use standards that could also apply to guidelines. These include selecting relevant standards; tailoring these select standards to apply to a given development effort; referring to and applying the standards as closely as possible in the interactive system design; revising and refining the select standards to accommodate new information and considerations that arise during development; and inspecting the final design to ensure the system design complies with the standards where feasible. Developing with standards and guidelines does not preclude the need for evaluation of the system. Developers will still need to evaluate their systems to ensure they adequately meet users' needs and capabilities.

3.5.2.5. Storyboards and Rough Interface Sketches

The efforts devoted to the selection of a metaphor or composite metaphor and the development of use scenarios, use sequences, use flow diagrams, use workflows, and use hierarchies result in a plethora of design ideas. Designers can brainstorm over design concepts, generating storyboards of potential ideas for the detailed design (Vertelney and Booker 1990). Storyboards should be at the level of detail provided by use scenarios and workflow diagrams (Hackos and Redish 1998). The brainstorming should continue until a set of satisfactory storyboard design ideas has been achieved. The favored set of ideas can then be refined into a design concept via interface sketches. Sketches of screen designs and layouts are generally at the level of detail provided by use sequences. Cardboard mockups and Wizard of Oz techniques (Newell et al. 1990), the latter of which enacts functionality that is not readily available, can be used at this stage to assist in characterizing designs.
3.5.3. Prototyping
Prototypes of favored storyboard designs are developed. These are working models of the preferred designs (Hackos and Redish 1998; Vertelney and Booker 1990). They are generally developed with easy-to-use toolkits (e.g., Macromedia Director, Toolbook, SmallTalk, or Visual Basic) or simpler tools (e.g., hypercard scenarios, drawing programs, even paper or plastic mockups) rather than high-level programming languages. The simpler prototyping tools are easy to generate and modify and cost effective; however, they demonstrate little if anything in the way of functionality, may present concepts that cannot be implemented, and may require a Wizard to enact functionality. The toolkits provide prototypes that look and feel more like the final product and demonstrate the feasibility of desired functionality; however, they are more costly and time consuming to generate. Whether high- or low-end techniques are used, prototypes provide cost-effective, concrete design concepts that can be evaluated with target users (usually three to six users per iteration) and readily modified. They prevent developers from exhausting extensive resources in formal development of products that will not be adopted by users. Prototyping should be iterated until usability goals are met.
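The stopping rule above ("iterate until usability goals are met") can be sketched as a simple check of each iteration's test results against the usability goals set earlier in the process. This is an illustrative sketch only; the goal values, metric names, and per-iteration measurements below are hypothetical, not figures from the chapter:

```python
# Hypothetical usability goals: lower is better for time and errors,
# higher is better for a 1-5 satisfaction rating.
usability_goals = {"task_time_s": 120.0, "errors_per_task": 1.0, "satisfaction": 4.0}

def goals_met(measured, goals):
    """True when every measured value meets its corresponding goal."""
    return (measured["task_time_s"] <= goals["task_time_s"]
            and measured["errors_per_task"] <= goals["errors_per_task"]
            and measured["satisfaction"] >= goals["satisfaction"])

# Illustrative results from successive prototype tests (e.g., 3-6 users each).
iterations = [
    {"task_time_s": 210.0, "errors_per_task": 3.2, "satisfaction": 2.9},
    {"task_time_s": 140.0, "errors_per_task": 1.4, "satisfaction": 3.8},
    {"task_time_s": 115.0, "errors_per_task": 0.8, "satisfaction": 4.2},
]
for i, results in enumerate(iterations, start=1):
    if goals_met(results, usability_goals):
        print(f"usability goals met after iteration {i}")
        break
```

In practice the goal set would come from the usability goals defined during early analysis (see Table 5), and each iteration's measurements from the evaluation methods discussed in Section 3.6.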
3.6. Usability Evaluation of Human-Computer Interaction
Usability evaluation focuses on gathering information about the usability of an interactive system so that this information can be used to focus redesign efforts via iterative design. While the ideal approach is to consider usability from the inception of the system development process, often it is considered in later
Psychophysiological measures of subjective perception (e.g., EEGs, heart rate, blood pressure)
Experimental evaluation (e.g., quantitative data; compare design alternatives)
There are advantages and disadvantages to each of these evaluative techniques (see Table 8) (Preece 1993; Karat 1997). Thus, a combination of methods is often used in practice. Typically, one would first perform an expert evaluation (e.g., heuristic evaluation) of a system to identify the most obvious usability problems. Then user testing could be conducted to identify remaining problems that were missed in the first stages of evaluation. In general, a number of factors need to be considered when selecting a usability-evaluation technique or a combination thereof (see Table 9) (Dix et al. 1993; Nielsen 1993; Preece 1993).

As technology has evolved, there has been a shift in the technoeconomic paradigm, allowing for more universal access to computer technology (Stephanidis and Salvendy 1998). Thus, individuals with diverse abilities, requirements, and preferences now regularly use interactive products. When designing for universal access, participation of diverse user groups in usability evaluation is essential. Vanderheiden (1997) has suggested a set of principles for universal design that focuses on the following: simple and intuitive use; equitable use; perceptible information; tolerance for error; accommodation of preferences and abilities; low physical effort; and space for approach and use. Following these principles should ensure effective design of interactive products for all user groups.

While consideration of ergonomic and cognitive factors can generate effective interactive system designs, if the design has not taken into consideration the environment in which the system will be used, it may still fail to be adopted. This is addressed in the next section.
4. SOCIAL, ORGANIZATIONAL, AND MANAGEMENT FACTORS
Social, organizational, and management factors related to human-computer interaction may influence a range of outcomes at both the individual and organizational levels: stress, physical and mental health, safety, job satisfaction, motivation, and performance. Campion and Thayer (1985) showed that some of these outcomes may be conflicting. In a study of 121 blue-collar jobs, they found that enriched jobs led to higher job satisfaction but lower efficiency and reliability. The correlations between efficiency on the one hand and job satisfaction and comfort on the other were negative. Another way of looking at all the outcomes has been proposed by Smith and Sainfort (1989). The objective of the proposed balance theory is to achieve an optimal balance among positive and negative aspects of the work system, including the person, task and organizational factors, technology, and physical environment (see Figure 1 for a model of the work system). The focus is not on a limited range of variables or aspects of the work system but on a holistic approach to the study and design of work systems. In this section we will focus on how social, organizational, and management factors related to human-computer interaction influence both individual and organizational outcomes.
4.1.
Social Environment
The introduction of computer technology into workplaces may change the social en
vironment and social relationships. Interactive systems become a new element of
the social environment, a new communication medium, and a new source of informat
ion. With new computer technologies there may be a shift from face-to-face inter
TABLE 8 Advantages and Disadvantages of Existing Usability Evaluation Techniques

Analytic / theory-based (example tools/techniques: cognitive task analysis, GOMS)
General use: Used early in the usability design life cycle for prediction of expert user performance.
Advantages: Useful in making accurate design decisions early in the usability life cycle without the need for a prototype or costly user testing.
Disadvantages: Narrow in focus; lack of specific diagnostic output to guide design; broad assumptions about users' experience (expert) and cognitive processes; results may differ based on the evaluator's interpretation of the task.

Expert evaluation (example tools/techniques: design walk-throughs, heuristic evaluations of process/system, free play, group evaluations, checklists)
General use: Used early in the design life cycle to identify theoretical problems that may pose actual practical usability problems.
Advantages: Strongly diagnostic; can focus on entire system; high potential return in terms of number of usability issues identified; can assist in focusing observational evaluations.
Disadvantages: Even the best evaluators can miss significant usability issues; results are subject to evaluator bias; does not capture real user behavior.

Observational evaluation (example tools/techniques: direct observation, video, verbal protocols, computer logging, think-aloud techniques, field evaluations, ethnographic studies, facilitated free play)
General use: Used in the iterative design stage for problem identification.
Advantages: Quickly highlights usability issues; verbal protocols provide significant insights; provides rich qualitative data.
Disadvantages: Observation can affect user performance with the system; analysis of data can be time and resource consuming.

Survey evaluation (example tools/techniques: questionnaires, structured interviews, ergonomics checklists, focus groups)
General use: Used any time in the design life cycle to obtain information on users' preferences and perception of a system.
Advantages: Provides insights into users' opinions and understanding of the system; can be diagnostic; rating scales can provide quantitative data; can gather data from large subject pools.
Disadvantages: User experience important; possible user response bias (e.g., only dissatisfied users respond); response rates can be low; possible interviewer bias; analysis of data can be time and resource consuming; evaluator may not be using an appropriate checklist to suit the situation.

Psychophysiological measures of satisfaction or workload (example tools/techniques: EEGs, heart rate, blood pressure, pupil dilation, skin conductivity, level of adrenaline in blood)
General use: Used any time in the design life cycle to obtain information on user satisfaction or workload.
Advantages: Eliminate user bias by employing objective measures of user satisfaction and workload.
Disadvantages: Invasive techniques are involved that are often intimidating and expensive for usability practitioners.

Experimental evaluation (example tools/techniques: quantitative measures, alternative design comparisons, free play, facilitated free play)
General use: Used for competitive analysis in final testing of the system.
Advantages: Powerful and prescriptive method; provides quantitative data; can provide a comparison of alternatives; reliability and validity generally good.
Disadvantages: Experiment is generally time and resource consuming; focus can be narrow; tasks and evaluative environment can be contrived; results difficult to generalize.
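As an illustration of the analytic/theory-based row of Table 8, a Keystroke-Level Model (a simplified GOMS variant) predicts expert task time by summing standard operator times. The operator times below are commonly cited published values (e.g., roughly 0.28 s per keystroke for an average typist); the task encoding is a hypothetical example:

```python
# Keystroke-Level Model sketch: predict expert task time by summing
# operator times. Values are commonly cited estimates; the task is illustrative.
KLM = {
    "K": 0.28,  # press a key (average typist)
    "P": 1.10,  # point at a target with the mouse
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Sum operator times for an operator string such as 'MHPKK'."""
    return sum(KLM[op] for op in operators)

# e.g., think, move hand to mouse, point at a field, then type five characters
task = "MHP" + "K" * 5
print(f"predicted expert time: {predict_time(task):.2f} s")
```

Such predictions can be made early, without a prototype or user testing, which is exactly the advantage (and the narrow focus) Table 8 attributes to analytic techniques.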
TABLE 9 Factors to Consider in Selecting Usability-Evaluation Techniques
Purpose of the evaluation
Stage in the development life cycle in which the evaluation technique will be carried out
Required level of subjectivity or objectivity
Necessity or availability of test participants
Type of data that need to be collected
Information that will be provided
Required immediacy of the response
Level of interference implied
Resources that may be required or available
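One way to apply the factors above when choosing among candidate techniques is a simple weighted decision matrix; this is a generic illustration, not a method from the chapter, and all weights and ratings below are hypothetical:

```python
# Hypothetical weighted decision matrix over a subset of Table 9's factors.
# Each technique is rated 1-5 per factor; weights reflect project priorities.
weights = {"stage_fit": 3, "data_type": 2, "participants_available": 2, "resources": 1}

scores = {  # illustrative ratings only
    "heuristic evaluation": {"stage_fit": 5, "data_type": 3,
                             "participants_available": 5, "resources": 5},
    "user testing":         {"stage_fit": 3, "data_type": 5,
                             "participants_available": 2, "resources": 2},
}

def weighted_score(ratings):
    """Weighted sum of a technique's factor ratings."""
    return sum(weights[f] * r for f, r in ratings.items())

best = max(scores, key=lambda t: weighted_score(scores[t]))
print(best, {t: weighted_score(s) for t, s in scores.items()})
```

In practice, as the chapter notes, the outcome is usually a combination of methods rather than a single winner.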
and promotion (OTA 1985). On the other hand, home-based work allows workers to spend more time with their family and friends, thus increasing social support from family and friends. Telework allows for increased control over work pace and variability of workload. It has been found, however, that electronic communication and telework have led to feelings of not being able to get away from work and to the augmentation (rather than substitution) of regular office hours (Sproull and Kiesler 1991; Phizacklea and Wolkowitz 1995). In addition, increased distractions and interruptions may disrupt work rhythm (OTA 1985). From a social point of view, home-based work has both negative and positive effects.

Another important social factor is intragroup relationships and relationships with coworkers. The role of computer technologies in influencing intragroup functioning is multidimensional. If workers spend a lot of time working in isolation at a computer workstation, they may have less opportunity for socialization. This may affect group performance, especially if tasks are interdependent. On the other hand, intragroup relationships may be improved if workers gain access to better information and have adequate resources to use computers. The positive or negative effects may also vary across jobs and organizations, and they may depend on the characteristics of the interactive system (e.g., single- vs. multiple-user computer workstation). Aronsson (1989) found that work group cohesion and possibilities for contacts with coworkers and supervisors had become worse in low-level jobs (e.g., secretary, data-entry operator, planner, office service) but had not been affected in medium- to high-level jobs. Changes in job design were related to changes in social relationships: the higher the change in intensity demands, the lower the work group cohesion and the possibilities for contacts with coworkers and supervisors. That is, an increase in workload demands came along with a worsening of the social environment. The negative effect was more pronounced in low-level jobs, presumably because higher-level job holders have more resources, such as knowledge, power, and access to information, and can have a say in the implementation/design process as well as more control over their job.

Access to organizational resources and expertise is another important facet of the social environment for computer users. Technology can break down or malfunction, and users may need help to perform certain tasks or to learn new software. In these situations, access to organizational resources and expertise is critical for end users, especially when they are highly dependent on the computer technology to perform their job or when they use the technology in their contact with customers. Danziger et al. (1993) studied the factors that determine the quality of end-user computing services in a survey of 1869 employees in local governments. Three categories of factors were identified that might influence the quality of computing services: (1) the structure of service provision (e.g., centralization vs. decentralization), (2) the level of technological problems, and (3) the service orientation of computing service specialists. The results do not provide support for the argument that structural factors are most important; whether computing services are centralized or decentralized within an organization does not explain the perceived quality of computing services. On the other hand, the results demonstrate the importance of the attitudes of the service providers. Computer specialists who are clearly user oriented, that is, who are communicative and responsive to user needs and are committed to improving existing applications and proposing appropriate new ones, seem best able to satisfy end users' criteria for higher-quality computing services. Researchers emphasize the importance of a positive socio
restrained from communicating with their supervisors. However, expectations of rapid service and faster work completion may impose more supervisory pressure on workers (Johansson and Aronsson 1984). This is a negative effect of computer technologies on worker-supervisor relationships. A study by Yang and Carayon (1995) showed that supervisor support was an important buffer against worker stress in both low and high job-demand conditions. Two hundred sixty-two computer users from three organizations participated in the study. Supervisor social support was an important buffer against worker stress; however, coworker social support did not affect worker stress.

The social environment can be influenced by computer technologies in various ways: the quality, quantity, and means of communications; social isolation; extended networks of colleagues; work group cohesion; quality of social interaction among workers, coworkers, and supervisors; and social support. Table 10 summarizes the potential effects of computer technologies on the social environment. There are several strategies or interventions that can be applied to counteract the negative influences of computer technology on the social environment and foster positive effects.

Computerized monitoring systems have an important impact on how supervisors interact with their employees. It is natural that when supervisors are suddenly provided with instantaneous, detailed information about individual employee performance, they feel a commitment, in fact an obligation, to use this information to improve the performance of the employees. This use of hard facts in interacting with employees often changes the style of supervision. It puts inordinate emphasis on hourly performance and creates a coercive interaction. This is a critical mistake in a high-technology environment where employee cooperation is essential. Supervision has to be helpful and supportive if employee motivation is to be maintained and stress is to be avoided. This means that supervisors should not use individual performance data as a basis for interaction. The supervisor should be knowledgeable about the technology and serve as a resource when employees are having problems. If management wants employees to ask for help, the relationship with the supervisor has to be positive (not coercive) so that the employee feels confident enough to ask for help. If employees are constantly criticized, they will shun the supervisor, and problem situations that can harm productivity will go unheeded.

Employees are a good source of information about productive ways to work. Their daily contact with the job gives them insight into methods, procedures, bottlenecks, and problems. Many times they modify their individual work methods or behavior to improve their products and rate of output. Often these are unique to the individual job or employee and could not be adopted as a standardized approach or method. If work systems are set up in a rigid way, this compensatory behavior cannot occur. Further, if adverse relationships exist between supervisors and employees, the employees are unlikely to offer their innovative ideas when developers are conducting a contextual task analysis (see Section 3.3). It is in the interest of the employer to allow employees to exercise at least a nominal level of control and decision making over their own task activity. Here again, the computer hardware and software have to be flexible so that individual approaches and input can be accommodated as long as set standards of productivity are met.

One approach for providing employee control is through employee involvement and participation in making decisions about interactive system design, for instance, by helping management select ergonomic furniture through comparative testing of various products and providing preference data, or being involved in the determination of task allocations for a new job, or voicing opinions about ways to improve the efficiency of their work unit. Participation is a strong motivator to action and a good way to gain employee commitment to a work standard or new technology. Thus, participation can be used as a means of improving the social environment and fostering the efficient use of interactive

TABLE 10 Potential Effects of Computer Technologies on the Social Environment
Less face-to-face interaction
More computer-mediated communication
Change in the q
systems. But participation will only be effective as long as employees see tangible evidence that their input is being considered and used in a way that benefits them.

Employees who make positive contributions to the success of the organization should be rewarded for their efforts. Rewards can be administrative, social, or monetary. Administrative rewards can be such things as extra rest breaks, extended lunch periods, and special parking spaces. They identify the person as someone special and deserving. Another type of reward is social in that it provides special status to the individual. This is best exemplified by the receipt of praise from the supervisor for a job well done. This enhances personal self-esteem. If the praise is given in a group setting, it can enhance peer-group esteem toward the individual. Monetary rewards can also be used, but these can be a double-edged sword because they may have to be removed during low-profit periods, and this can lead to employee resentment, thus negating the entire purpose of the reward system.

Some organizations use incentive pay systems based on performance data provided by the computers. Computers can be used to keep track of worker performance continuously (Carayon 1993). That quantitative performance data can then be used to set up incentive pay systems that reward good performers. In general, incentive pay systems can lead to increases in output but at the expense of worker health (Levi 1972). Schleifer and Amick (1989) have shown how the use of a computer-based incentive system can lead to an increase in worker stress. Different ways of improving the social environment in computerized workplaces thus include helpful and supportive managers and supervisors, increased control over one's job, employee involvement and participation, and rewards.
4.2. Organizational Factors
The way work is organized changes with the introduction of computer technologies, for example through changes in workflow. Computer technologies obviously provide opportunities for reorganizing how work flows and have the potential of increasing efficiency. However, increased worker dependence on the computer is a potential problem, especially when the computer breaks down or slows down; it may affect not only performance but also stress. Organizational redesign may be one way of alleviating problems linked to dependence on the computer. Aronsson and Johansson (1987) showed that organizational rearrangement was necessary to decrease workers' dependence on the computer system by expanding their jobs with new tasks and allowing them to rotate between various tasks.

Given their technical capabilities, computers can be used to bring people closer and make them work in groups. The concept of computer-supported cooperative work is based on the expectation that the computer favors group work. Researchers in this area focus on all aspects of how large and small groups can work together using computer technology (Greif 1988). They develop interactive systems that facilitate group work and study the social, organizational, and management impacts of computer-supported work groups. For instance, Greif and Sarin (1988) identified data-management requirements of computer group work.

New computer technologies allow work to be performed at a distance. This new work organization has some potential negative and positive effects for workers and management. Benefits for workers include increased control over work schedule and elimination of the commute to work (OTA 1985; Bailyn 1989). Constraints for workers include social isolation, increased direct and indirect costs (e.g., increased heating bill, no health insurance), lack of control over the physical environment, and fewer opportunities for promotion (OTA 1985; Bailyn 1989). Benefits for employers include lowered costs (e.g., floor space, direct labor costs, and workers' benefits), more intensive use of computers (e.g., outside peak hours), increased flexibility (workers can be used when needed), and increased productivity, while problems include changes in traditional management and supervision techniques and los
have negative effects on an organization. Asakura and Fujigaki (1993) examined the direct and indirect effects of computerization on worker well-being and health in a sample of 4400 office workers. The results of their study paralleled Carayon-Sainfort (1992).

A major complaint of office employees who have undergone computerization is that their workload has increased substantially. This is most true for clerical employees, who typically have an increased number of transactions to process when computers are introduced into the work routine. This increase in transactions means more keystrokes and more time at the workstation. These can lead to greater physical effort than before and possibly more visual and muscular discomfort. This discomfort reinforces the feeling of increased workload and adds to employee dissatisfaction with the workload.

Quite often the workload of computer users is established by the data-processing department in concert with other staff departments (such as human resources) and line managers. An important consideration is the cost of the computer equipment and related upkeep, such as software and maintenance. The processing capability of the computer(s) is a second critical element in establishing the total capacity that can be achieved. The technology cost, the capability to process work, and the desired time frame to pay for the technology are factored together to establish a staffing pattern and the required workload for each employee. This approach is based on the capacity of the computer(s) coupled with investment recovery needs and does not necessarily meet the objective of good human resource utilization. Workload should not be based solely on technological capabilities or investment recovery needs but must include important considerations of human capabilities and needs. Factors such as attentional requirements, fatigue, and stress should be taken into account in establishing the workload. A workload that is too great will cause fatigue and stress that can diminish work quality without achieving the desired quantity. A workload that is too low will produce boredom and stress and also reduce the quality and economic benefits of computerization.

Workload problems are not concerned solely with the immediate level of effort necessary but also deal with the issue of work pressure. This is defined as an unrelenting backlog of work, or a workload that will never be completed. This situation is much more stressful than a temporary increase in workload to meet a specific crisis. It produces the feeling that things will never get better, only worse. Supervisors have an important role in dealing with work pressure by acting as a buffer between the demands of the employer and the daily activities of the employees. Work pressure is a perceptual problem. If the supervisor deals with daily workload in an orderly way and does not put pressure on the employee about a pile-up of work, then the employee's perception of pressure will be reduced and the employee will not suffer from work pressure stress.

Work pressure is also related to the rate of work, or work pace. A very fast work pace that requires all of the employee's resources and skills to keep up will produce work pressure and stress. This is exacerbated when the condition occurs often. An important job-design consideration is to allow the employee to control the pace of the work rather than having it controlled automatically by the computer. This provides a pressure valve to deal with perceived work pressure.

A primary reason for acquiring new technology is to increase individual employee productivity and provide a competitive edge. Getting more work out of employees means that fewer are needed to do the same amount of work. Often employees feel that this increased output means that they are working harder, even though the technology may actually make their work easier. Using scientific methods helps establish the fairness of new work standards. Once work standards have been established, they can serve as one element in an employee performance-evaluation scheme. An advantage of computer technology is the ability to have instantaneous information on individual employee performance in terms of the rate of output. This serves as one objective measure of how hard employees are working. But managers have to understand that this is just one element of employee performance, and emphasis on quantity can have an adverse effect on the quality of work. Therefore, a balanc
1224
PERFORMANCE IMPROVEMENT MANAGEMENT
effect on the cognitive control process. Overall, cognitive demands are associated with job characteristics such as the intensity of computer work, the type of communication, and the high speed/functions of computers. The implementation of computer technology in work organizations can lead to greater demands on cognitive resources in terms of memory, attention, and decision making that may have a negative impact on worker health and work performance. If, however, computer systems have been designed with the cognitive capabilities and limitations of the user in mind (see Section 3), these issues should not occur.

There has been interest in the role of occupational psychosocial stress in the causation and aggravation of musculoskeletal disorders for computer users (Smith et al. 1981; Smith 1984; Bammer and Blignault 1988; Smith and Carayon 1995; Hagberg et al. 1995). It has been proposed that work organization factors define ergonomic risks to upper-extremity musculoskeletal problems by specifying the nature of the work activities (variety or repetition), the extent of loads, the exposure to loads, the number and duration of actions, ergonomic considerations such as workstation design, tool and equipment design, and environmental features (Smith and Carayon 1995; Carayon et al. 1999). These factors interact as a system to produce an overall load on the person (Smith and Sainfort 1989; Smith and Carayon 1995; Carayon et al. 1999), and this load may lead to an increased risk for upper-extremity musculoskeletal problems (Smith and Carayon 1995; Carayon et al. 1999). There are psychobiological mechanisms that make a connection between psychological stress and musculoskeletal disorders plausible and likely (Smith and Carayon 1995; Carayon et al. 1999). At the organizational level, the policies and procedures of a company can affect the risk of musculoskeletal disorders through the design of jobs, the length of exposures to stressors, establishing work-rest cycles, defining the extent of work pressures, and establishing the psychological climate regarding socialization, career, and job security (Smith et al. 1992; NIOSH 1992, 1993). Smith et al. (1992), Theorell et al. (1991), and Faucett and Rempel (1994) have demonstrated that some of these organizational features can influence the level of self-reported upper-extremity musculoskeletal health complaints. In addition, the organization defines the nature of the task activities (work methods), employee training, availability of assistance, supervisory relations, and workstation design. All of these factors have been shown to influence the risk of upper-extremity musculoskeletal symptoms, in particular among computer users and office workers (Linton and Kamwendo 1989; Smith et al. 1992; Lim et al. 1989; Lim and Carayon 1995; NIOSH 1990, 1992, 1993; Smith and Carayon 1995).

The amount of esteem and satisfaction an employee gets from work is tied directly to the content of the job. For many jobs, computerization brings about fragmentation and simplification that act to reduce the content of the job. Jobs need to provide an opportunity for skill use, mental stimulation, and adequate physical activity to keep muscles in tone. In addition, work has to be meaningful for the individual. It has to provide for identification with the product and the company. This provides the basis for pride in the job that is accomplished. Computerization can provide an opportunity for employees to individualize their work. This lets them use their unique skills and abilities to achieve the required standards of output. It provides cognitive stimulation because each employee can develop a strategy to meet his or her goals. This requires that software be flexible enough to accept different types and orders of input. It is then the job of the software to transform the diverse input into the desired product. Usually computer programmers will resist such an approach because it is easier for them to program using standardized input strategies. However, such strategies build repetition and inflexibility into jobs, which reduce job content and meaning. Being able to carry out a complete work activity that has an identifiable end product is an important way to add meaningfulness to a job. When employees understand the fruits of their labor, it provides an element of identification and pride in achievement. This is in contrast to simplifying jobs into elemental tasks that are repeated over and over again. Such simplification removes meaning and job content and creates boredom, job stress, and product-quality problems. New computer systems should emphasize software that allows employees to use existing skills and knowledge to start out. These can then serve as the base for acquiring new skills and knowledge. Job activities should exercise employee mental skills and should also require a sufficient level of physical activity to keep the employee alert and in good muscle tone.

Table 11 summarizes the potential impacts of computer technologies on organizational factors. Overall, the decision about the use or design of interactive systems should include considerations of workload, work pressure, determination of work standards, job content (variety and skill use), and skill development. Computerization holds the promise of providing significant improvements in the quality of jobs, but it can also bring about organizational changes that reduce employee satisfaction and performance and increase stress. Designing interactive systems that meet both the aims of the organization and the needs of employees can be difficult. It requires attention to important aspects of work that contribute to employee self-esteem, satisfaction, motivation, and health and safety.
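As a rough illustration of the workload arithmetic discussed above (a staffing pattern derived from machine capacity and investment-recovery needs versus one derived from a human-capability cap), consider the following sketch. All figures are hypothetical and chosen only to make the contrast concrete.

```python
import math

# All figures below are invented for illustration.
SYSTEM_CAPACITY = 12_000    # transactions/day the computer(s) can process
EQUIPMENT_COST = 180_000.0  # technology cost: purchase plus upkeep ($)
PAYBACK_DAYS = 250          # desired investment-recovery period (working days)
MARGIN_PER_TXN = 0.30       # net value of one processed transaction ($)

# Capacity / investment-recovery view: how many transactions per day must be
# processed to pay the technology off in time, and what workload falls on each
# employee if staffing is chosen on cost grounds alone.
txns_needed = (EQUIPMENT_COST / PAYBACK_DAYS) / MARGIN_PER_TXN  # 2400.0/day
staff_by_cost = 3                                   # staffing chosen to cut cost
workload_by_cost = SYSTEM_CAPACITY / staff_by_cost  # 4000 txns/employee/day

# Human-centered view: cap the daily workload per employee (attention, fatigue,
# stress) and derive the staffing pattern from that cap instead.
HUMAN_WORKLOAD_CAP = 2_500  # hypothetical sustainable txns/employee/day
staff_by_people = math.ceil(SYSTEM_CAPACITY / HUMAN_WORKLOAD_CAP)

print(workload_by_cost > HUMAN_WORKLOAD_CAP)  # True: cost-driven plan overloads
print(staff_by_people)                        # 5
```

The sketch is the section's argument in miniature: a staffing pattern derived only from capacity and payback can imply a per-employee workload well above what attentional requirements and fatigue allow.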
effects on job design. Carayon (1994) also reported on a second study to specifically examine whether electronic performance monitoring had a direct or indirect effect on worker stress. The results revealed that monitored
employees reported more supervisor feedback and control over work pace, and less job content, than nonmonitored employees. There were no differences between the monitored and nonmonitored groups with regard to stress or health.

The process by which computer technologies are implemented is only one of the management factors that affect the effectiveness and acceptance of computer use. Management attitudes toward the implementation of computer technologies are very important insofar as they can affect overall job and organizational design and worker perceptions, attitudes, and beliefs regarding the new technologies (Crozier 1964; Smith 1984; Blackler 1988; Kahn and Cooper 1986). Several models that link the organizational effects of computer systems to the process used to implement those systems have been proposed (Blackler and Brown 1987; Robey 1987; Flynn 1989). They all emphasize the need to identify potential technical and social impacts, advocate general planning, and emphasize the support of workers and managers for successful implementation of new computer technology. Carayon and Karsh (2000) examined the implementation of one type of computer technology, imaging technology, in two organizations in the Midwest. Results showed that imaging users in the organization that utilized end-user participation in the implementation of its imaging system rated their imaging system better and reported higher job satisfaction than imaging users in the organization that did not incorporate end-user participation in the implementation of the system. Studies by Korunka and his colleagues (Korunka et al. 1993, 1995, 1996) have also demonstrated the benefits of end-user participation in technological change for quality of working life, stress, and health.

Management also needs to consider retraining issues when introducing new computer technology. Kearsley (1989) defined three general effects of computer technology: skill twist (change in required skills), deskilling (reduction in the level of skills required to do a job), and upskilling (increase in the level of required skills). Each of these effects has different implications for retraining. For instance, skill twist requires that workers be able and willing to learn new skills. Training and retraining are critical issues for the successful implementation and use of new computer technology. Even more critical is the need for continuous retraining because of rapid changes in the hardware and software capabilities of computer technologies (Smith et al. 1981; Smith 1984; Kearsley 1989). Training can serve to enhance employee performance and add new skills. Such growth in skills and knowledge is an important aspect of good job design. No one can remain satisfied with the same job activities over years and years. Training is a way to assist employees in using new technology to its fullest extent and to reduce the boredom of the same old job tasks. New technology by its nature will require changes in jobs, and training is an important approach not only for keeping employees current but also for building meaning and content into their jobs.

Computer technologies have the potential to affect, both positively and negatively, the following management factors: organizational structure (e.g., decentralization vs. centralization), power distribution, information flow, and management control over the production process. Management's strategies for implementing new computer technologies are another important management factor to take into account to achieve optimal use of these technologies. Table 12 summarizes the potential impacts of computer technologies on management factors. Some of the negative effects of computers on management factors can be counteracted. The rest of this section proposes various means of ensuring that computerization leads to higher performance and satisfaction and lower stress.

Monitoring employee performance is a vital concern of labor unions and employees. Computers provide a greatly enhanced capability to track employee performance, and close monitoring of employees can follow from this capability. Monitoring of employee performance is an important process for management. It helps to know how productive your workforce is and where bottlenecks are occurring. It is vital management information that can be used by top management to realign resources and to make important management decisions. However, it is not a good practice to provide individual employee performance information to
enhance individual performance, it is helpful to give periodic feedback directly to employees about their own performance. This can be done in a noncoercive way directly by the computer on a daily basis. This will help employees judge their performance and also assist in establishing a supervisory climate that is conducive to satisfied and productive employees. While computerized monitoring systems can be particularly effective in providing employees with feedback, the misuse of such systems can be particularly counterproductive and cause stress. The following principles contribute to the effective use of computerized monitoring for performance enhancement and reduced stress:

- Supervisors should not be involved directly in the performance feedback system. Information on performance that is given by the computerized monitoring system should be fed back directly to the operator.
- Computerized monitoring systems should give a comprehensive picture of the operator's performance (quantity and quality).
- Performance information should not be used for disciplinary purposes.
- Electronic monitoring should not be used for payment purposes such as payment by keystrokes (piece rate) or bonuses for exceeding goals.

Any kind of change in the workplace produces fears in employees. New technology brings with it changes in staff and the way work is done. The fear of the unknown, in this case the new technology, can be a potent stressor. This suggests that a good strategy in introducing new technology is to keep employees well informed of expected changes and how they will affect the workplace. There are many ways to achieve this. One is to provide informational memorandums and bulletins to employees at various stages of the process of decision making about the selection of technology and, during its implementation, on how things are going. These informational outputs have to come at frequent intervals (at least monthly) and need to be straightforward and forthright about the technology and its expected effects. A popular approach being proposed by many organizational design experts is to involve employees in the selection, design, and implementation of the new technology. The benefits of this participation are that employees are kept abreast of current information, employees may have some good ideas that can be beneficial to the design process, and participation in the process builds employee commitment to the use of the technology.

A large employee fear and potent stressor is concern over job loss due to the improved efficiency produced by new technology. Many research studies have demonstrated that the anticipation of job loss, and not really knowing whether you will be one of the losers, is much more stressful and more detrimental to employee health than knowing right away about future job loss. Telling those employees who will lose their jobs early provides them with an opportunity to search for a new job while they still have a job. This gives them a better chance to get a new job and more bargaining power regarding salary and other issues. Some employers do not want to let employees know too soon for fear of losing them at an inopportune time. By not being fair and honest to employees who are laid off, employers can adversely influence the attitudes and behavior of those employees who remain.

For those employees who are retained when new technology is acquired, there is the concern that the new technology will deskill their jobs and provide less opportunity to be promoted to a better job. Often the technology flattens the organizational structure, producing similar jobs with equivalent levels of skill use. Thus, there is little chance to be promoted except into a limited number of supervisory positions, which will be less plentiful with the new technology. If this scenario comes true, then employees will suffer from the "blue-collar blues" that have been prevalent in factory jobs. This impacts negatively on performance and stress. Averting this situation requires a commitment from management to enhancing job design so that skill use is built into jobs, as well as to developing career paths so that employees have something to look forward to besides 30 years at the same job. Career opportunities have to be tailored to the needs of the organization to meet production requirements. Personnel specialists, production managers, and employees have to work together to design work systems that give advancement opportunity while utilizing technology effectively and meeting production goals. One effective technique is to develop a number of specialist jobs that require unique skills and training. Workers in these jobs can be paid a premium wage reflecting their unique skills and training. Employees can be promoted from general jobs into specialty jobs. Those already in specialty jobs can be promoted to other, more difficult specialty jobs. Finally, those with enough specialty experience can be promoted into troubleshooting jobs that allow them to rotate among specialties as needed to help make the work process operate smoothly and more productively.

Organizations should take an active role in managing new computer technologies. Knowing more about the positive and negative potential effects or influences of computerization on management factors is an important first step in improving the management of computer technologies.
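The monitoring-feedback principles listed earlier in this section can be sketched as a small routine: a comprehensive (quantity and quality) daily summary delivered directly to the operator. The record fields and message format are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ShiftRecord:
    """One operator's day, as a computerized monitoring system might log it."""
    transactions: int   # quantity
    error_rate: float   # quality: fraction of transactions containing errors

def operator_feedback(record: ShiftRecord) -> str:
    """Build a comprehensive (quantity AND quality) summary that goes directly
    to the operator, not to the supervisor, and is never fed into discipline
    or piece-rate pay calculations."""
    return (f"Today you completed {record.transactions} transactions "
            f"with an error rate of {record.error_rate:.1%}.")

# Daily, noncoercive feedback delivered straight to the employee:
print(operator_feedback(ShiftRecord(transactions=412, error_rate=0.018)))
# Today you completed 412 transactions with an error rate of 1.8%.
```

The design choice mirrors the principles: the summary object carries both quantity and quality, and nothing in the routine routes the data to supervisors, discipline, or pay.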
4.4. An International Perspective
In order to increase the market for their products and services, and thus gain increasing profitability and, where appropriate, shareholder value, corporations are penetrating the international market. This requires a number of adjustments and considerations by corporations, including consideration of the standard of living, prevailing economies, government incentives and public policies, and practices in the country where products and services will be marketed. In addition, it is important to consider the characteristics of the individuals in the country where products and services will be utilized, such as differences in anthropometric (body size), social, and psychological considerations. Table 13 illustrates this with regard to computer products designed in the United States and the changes that need to be made for Chinese users in Mainland China (Choong and Salvendy 1998, 1999; Dong and Salvendy 1999a,b). If both versions of the product were produced, both U.S. and Chinese users would be expected to achieve the fastest possible performance time with the lowest error rates. Identifying a local expert and following international standards (Cakir and Dzida 1997) can assist in identifying the modifications required to ensure a product is well suited to each international target market.
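One way to act on such international differences (anticipating the contrasts summarized in Table 13, e.g., vertical, thematic menus for Chinese users versus horizontal, functional menus for U.S. users) is to keep locale-specific interface settings in a lookup table so that one product can ship both versions. The locale codes, keys, and values below are a hypothetical sketch, not a prescription from the chapter.

```python
# Hypothetical locale profiles; all keys and values are invented for
# illustration, loosely following the Table 13 contrasts.
LOCALE_UI = {
    "zh-CN": {"menu_orientation": "vertical",
              "base_menu_layout": "thematic",
              "dynamic_translation": True},
    "en-US": {"menu_orientation": "horizontal",
              "base_menu_layout": "functional",
              "dynamic_translation": False},
}

def ui_settings(locale: str) -> dict:
    # Fall back to the design's home locale when no tailored profile exists;
    # the text's advice would be to add a profile with a local expert's help.
    return LOCALE_UI.get(locale, LOCALE_UI["en-US"])

print(ui_settings("zh-CN")["menu_orientation"])  # vertical
```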
5. ITERATIVE DESIGN
Interactive systems are meant to make work more effective and efficient so that employees can be productive, satisfied, and healthy. Good design improves the motivation of employees to work toward the betterment of the employer. The consideration of ergonomic, cognitive, social, organizational, and management factors in interactive system design must be recognized as an iterative design process. By considering these factors in an iterative manner, system designs can evolve until the desired levels of performance and safety are achieved. Additional modifications and resource expenditures will then be unnecessary. This allows valuable resources to be saved or applied to other endeavors. Table 14 provides a list of general guidelines as to how these factors can be designed to create an effective, productive, healthy, and satisfying work environment.

The concept of balance is very important in managing the design, introduction, and use of computer technologies (Smith and Sainfort 1989). Negative effects or influences can be counteracted by positive aspects. For instance, if the physical design of the technology cannot be changed and is known to be flawed, decision makers and users could counteract the negative influences of such a design by, for instance, providing more control over the work-rest schedule. By having control over their work-rest schedule, workers could relieve some of the physical stresses imposed by the technology by moving around. If management expects layoffs due to the introduction of computers, actions should be taken to ensure that workers are aware of these changes. Sharing information and getting valuable training or skills could be positive ways to counteract the negative effects linked to layoffs. Carayon (1994) has shown that office and computer jobs can be characterized by positive and negative aspects and that different combinations of positive and negative aspects are related to different strain outcomes. A job with high job demands but positive aspects such as skill utilization, task clarity, job control, and social support led to low boredom and a medium level of daily life stress. A job with many negative aspects of work led to high boredom, workload dissatisfaction, and daily life stress.

Another important aspect of the iterative design process is time. Changes in the workplace occur at an increasing pace, in particular with regard to computer technology. The term "continuous change" has been used to characterize the fast and frequent changes in computer technology and its impact on people and organizations (Korunka et al. 1997; Korunka and Carayon 1999). The idea behind this is that technological change is rarely, if ever, a one-time shot. Typically, technology changes are closer to continuous than to discrete events. This is because rapid upgrades and reconfigurations to make the systems work more effectively are usually ongoing. In addition to technological changes, time has other important effects on the entire work system (Carayon 1997). In particular, the aspects of the computerized work system that affect people may change over time. In a longitudinal study
TABLE 13  Differences in Design Requirements between Chinese and American Users

Attribute                              Chinese                  American
Knowledge regeneration                 Abstract                 Concrete
Base menu layout                       Thematic                 Functional
Menu layout                            Vertical                 Horizontal
Cognitive style                        Relational-conceptual    Inferential-categorical
Thinking                               Relational               Analytical
Field                                  Dependent                Independent
Translation from English to Chinese    Dynamic                  N/A
increasing the mental workload on the job would result in increased job satisfaction. The increased job satisfaction would, in turn, be expected to yield decreased labor turnover and decreased absenteeism, frequently resulting in increased productivity. These interactions illustrate the type of dilemmas system developers can encounter during interactive system design. Involving a multidisciplinary team in the development process allows such opposing requirements to be addressed better. The team must be supported by ergonomists who understand physical requirements, human factors engineers who understand cognitive requirements, and management that believes in the competitive edge that can be gained by developing user-centered interactive systems. Close collaboration among these team members can lead to the development of remarkably effective and highly usable systems that are readily adopted by users.
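The iterative process described in this section (evaluate a candidate design, compare it against performance and safety targets, revise, and stop once both targets are met so that no further resources are spent) can be sketched as a simple loop. The evaluation and revision functions below are toy stand-ins for real user testing and redesign work; all scores and targets are invented.

```python
def iterate_design(design, evaluate, revise, perf_target, safety_target,
                   max_rounds=10):
    """Evaluate -> compare to targets -> revise, until the design is
    acceptable or the iteration budget runs out."""
    for round_no in range(1, max_rounds + 1):
        perf, safety = evaluate(design)
        if perf >= perf_target and safety >= safety_target:
            return design, round_no  # acceptable: further modification unneeded
        design = revise(design, perf, safety)
    return design, max_rounds

# Toy stand-ins: each design revision nudges both scores up by one point.
evaluate = lambda d: (d["perf"], d["safety"])
revise = lambda d, p, s: {"perf": p + 1, "safety": s + 1}

final, rounds = iterate_design({"perf": 5, "safety": 7}, evaluate, revise,
                               perf_target=8, safety_target=9)
print(rounds)  # 4 evaluation rounds before both targets are met
```

The loop captures the section's point that iteration should stop as soon as the desired levels of performance and safety are reached, freeing resources for other endeavors.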
Acknowledgment
This material is based, in part, upon work supported by the Naval Air Warfare Center Training Systems Division (NAWC TSD) under contract No. N61339-99-C-0098. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views or the endorsement of NAWC TSD.
REFERENCES
Allen, R. B. (1997), "Mental Models and User Models," in Handbook of Human-Computer Interaction, 2nd Ed., M. Helander, T. K. Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterdam, pp. 49-63.
Armstrong, T. J., Foulke, J. A., Martin, B. J., and Rempel, D. (1991), "An Investigation of Finger Forces in Alphanumeric Keyboard Work," in Design for Everyone Volume, Y. Queinnec and F. Daniellou, Eds., Taylor & Francis, New York, pp. 75-76.
Armstrong, T. J., Foulke, J. A., Martin, B. J., Gerson, J., and Rempel, D. M. (1994), "Investigation of Applied Forces in Alphanumeric Keyboard Work," American Industrial Hygiene Association Journal, Vol. 55, pp. 30-35.
Armstrong, T. J., Martin, B. J., Franzblau, A., Rempel, D. M., and Johnson, P. W. (1995), "Mouse Input Devices and Work-Related Upper Limb Disorders," in Work with Display Units 94, A. Grieco, G. Molteni, E. Occhipinti, and B. Piccoli, Eds., Elsevier, Amsterdam, pp. 375-380.
Aronsson, G. (1989), "Changed Qualification Demands in Computer-Mediated Work," Applied Psychology, Vol. 38, pp. 57-71.
Aronsson, G., and Johanson, G. (1987), "Work Content, Stress and Health in Computer-Mediated Work," in Work with Display Units 86, B. Knave and P.-G. Wideback, Eds., Elsevier, Amsterdam, pp. 732-738.
Asakura, T., and Fujigaki, Y. (1993), "The Impact of Computer Technology on Job Characteristics and Workers' Health," in Human-Computer Interaction: Applications and Case Studies, M. J. Smith and G. Salvendy, Eds., Elsevier, Amsterdam, pp. 982-987.
Aydin, C. E. (1989), "Occupational Adaptation to Computerized Medical Information Systems," Journal of Health and Social Behavior, Vol. 30, pp. 163-179.
American National Standards Institute (ANSI) (1973), American National Standard Practice for Office Lighting (A132.1-1973), ANSI, New York.
American National Standards Institute (ANSI) (1988), American National Standard for Human Factors Engineering of Visual Display Terminal Workstations (ANSI/HFS Standard No. 100-1988), Human Factors and Ergonomics Society, Santa Monica, CA.
Attewell, P., and Rule, J. (1984), "Computing and Organizations: What We Know and What We Don't Know," Communications of the ACM, Vol. 27, pp. 1184-1192.
Bailyn, L. (1989), "Toward the Perfect Workplace?" Communications of the ACM, Vol. 32, pp. 460-471.
Bammer, G., and Blignault, I. (1988), "More Than a Pain in the Arms: A Review of the Consequences of Developing Occupational Overuse Syndromes (OOSs)," Journal of Occupational Health and Safety - Australia and New Zealand, Vol. 4, No. 5, pp. 389-397.
Blackler, F., and Brown, C. (1987), "Management, Organizations and the New Technologies," in Information Technology and People: Designing for the Future, F. Blackler and D. Oborne, Eds., British Psychological Society, London, pp. 23-43.
Blackler, F. (1988), "Information Technologies and Organizations: Lessons from the 1980s and Issues for the 1990s," Journal of Occupational
Buchanan, D. A., and Boddy, D. (1982), "Advanced Technology and the Quality of Working Life: The Effects of Word Processing on Video Typists," Journal of Occupational Psychology, Vol. 55, pp. 1-11.
Buie, E. (1999), "HCI Standards: A Mixed Blessing," Interactions, Vol. VI.2, pp. 36-42.
Bullinger, H. J., Kern, P., and Braun, M. (1997), "Controls," in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York.
Cakir, A., and Dzida, W. (1997), "International Ergonomic HCI Standards," in Handbook of Human-Computer Interaction, 2nd Ed., M. Helander, T. K. Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterdam, pp. 407-420.
Campion, M. A., and Thayer, P. W. (1985), "Development and Field Evaluation of an Interdisciplinary Measure of Job Design," Journal of Applied Psychology, Vol. 70, pp. 29-43.
Carayon, P. (1993), "Effect of Electronic Performance Monitoring on Job Design and Worker Stress: Review of the Literature and Conceptual Model," Human Factors, Vol. 35, No. 3, pp. 385-395.
Carayon, P. (1994a), "Effects of Electronic Performance Monitoring on Job Design and Worker Stress: Results of Two Studies," International Journal of Human-Computer Interaction, Vol. 6, No. 2, pp. 177-190.
Carayon, P. (1994b), "Stressful Jobs and Non-stressful Jobs: A Cluster Analysis of Office Jobs," Ergonomics, Vol. 37, pp. 311-323.
Carayon, P. (1997), "Temporal Issues of Quality of Working Life and Stress in Human-Computer Interaction," International Journal of Human-Computer Interaction, Vol. 9, No. 4, pp. 325-342.
Carayon, P., and Karsh, B. (2000), "Sociotechnical Issues in the Implementation of Imaging Technology," Behaviour and Information Technology, Vol. 19, No. 4, pp. 247-262.
Carayon, P., Yang, C. L., and Lim, S. Y. (1995), "Examining the Relationship Between Job Design and Worker Strain over Time in a Sample of Office Workers," Ergonomics, Vol. 38, No. 6, pp. 1199-1211.
Carayon, P., Smith, M. J., and Haims, M. C. (1999), "Work Organization, Job Stress, and Work-Related Musculoskeletal Disorders," Human Factors, Vol. 41, No. 4, pp. 644-663.
Carayon-Sainfort, P. (1992), "The Use of Computers in Offices: Impact on Task Characteristics and Worker Stress," International Journal of Human-Computer Interaction, Vol. 4, No. 3, pp. 245-261.
Card, S. K., Moran, T. P., and Newell, A. (1983), The Psychology of Human-Computer Interaction, Erlbaum, Hillsdale, NJ.
Carroll, J. M., and Mack, R. L. (1985), "Metaphor, Computing Systems, and Active Learning," International Journal of Man-Machine Studies, Vol. 22, pp. 39-57.
Carroll, J. M., and Olson, J. R. (1988), "Mental Models in Human-Computer Interaction," in Handbook of Human-Computer Interaction, M. Helander, Ed., North-Holland, Amsterdam, pp. 45-65.
Carroll, J. M., and Thomas, J. C. (1982), "Metaphor and the Cognitive Representation of Computing Systems," IEEE Transactions on Systems, Man, and Cybernetics, Vol. 12, pp. 107-116.
Centers for Disease Control (CDC) (1980), "Working with Video Display Terminals: A Preliminary Health-Risk Evaluation," Morbidity and Mortality Weekly Report, Vol. 29, pp. 307-308.
Choong, Y.-Y., and Salvendy, G. (1998), "Design of Icons for Use by Chinese in Mainland China," Interacting with Computers, Vol. 9, No. 4, pp. 417-430.
Choong, Y.-Y., and Salvendy, G. (1999), "Implications for Design of Computer Interfaces for Chinese Users in Mainland China," International Journal of Human-Computer Interaction, Vol. 11, No. 1, pp. 29-46.
Clement, A., and Gotlieb, C. C. (1988), "Evaluation of an Organizational Interface: The New Business Department at a Large Insurance Firm," in Computer-Supported Cooperative Work: A Book of Readings, I. Greif, Ed., Morgan Kaufmann, San Mateo, CA, pp. 609-621.
Cook, J., and Salvendy, G. (1999), "Job Enrichment and Mental Workload in Computer-Based Work: Implications for Adaptive Job Design," International Journal of Industrial Ergonomics, Vol. 24, pp. 13-23.
Crozier, M. (1964), The Bureaucratic Phenomenon, University of Chicago Press, Chicago.
Czaja, S. J., and Sharit, J. (1993), "Stress Reactions to Computer-Interactive Tasks as a Function of Task Structure and Individual Differences," International Journal of Human-Computer Interaction, Vol. 5, No. 1, pp. 1-22.
Danziger, J. N., Kraemer, K. L., Dunkle, D. E., and King, J. L. (1993), "Enhancing the Quality of Computing Service: Technology, Structure and People," Public Administration Review, Vol. 53, pp. 161-169.
1232
PERFORMANCE IMPROVEMENT MANAGEMENT
Diaper, D. (1989), Task Analysis for HumanComputer Interaction, Ellis Horwood, Ch
ichester. Dong, J., and Salvendy, G. (1999a), Improving Software Interface Usabili
ties for the Chinese Population through Dynamic Translation, Interacting with Comp
uters (forthcoming). Dong, J., and Salvendy, G. (1999b), Design Menus for the Chin
ese Population: Horizontal or Vertical? Behaviour and Information Technology, Vol.
18, No. 6, pp. 467471. Duffy, V., and Salvendy, G. (1999), Problem Solving in an A
MT Environment: Differences in the Knowledge Requirements for an Inter-disciplin
e Team, International Journal of Cognitive Ergonomics, Vol. 3. No. 1, pp. 2335. Ebe
rts, R. E. (1994), User Interface Design, Prentice Hall, Englewood Cliffs, NJ. E
berts, R., Majchrzak, A., Payne, P., and Salvendy, G. (1990), Integrating Social a
nd Cognitive Factors in the Design of HumanComputer Interactive Communication, Inte
rnational Journal of HumanComputer Interaction, Vol. 2, No.1. pp. 127. Egan, D. E.
(1988), Individual Differences in HumanComputer Interaction, in Handbook of HumanComp
uter Interaction, M. Helander, Ed., North-Holland, Amsterdam, pp. 231254. Erickso
n, T. D. (1990), Working with Interface Metaphors, in The Art of HumanComputer Intera
ction, B. Laurel, Ed., Addison-Wesley, Reading, MA, pp. 6573. Ericsson, K. A., an
d Simon, H. A. (1980), Verbal Reports as Data, Psychological Review, Vol. 87, pp. 21
5251. Eveland, J. D., and Bikson, T. K. (1988), Work Group Structure and Computer S
upport: A Field Experiment, AMC Transactions on Of ce Information Systems, Vol. 6, p
p. 354379. Faucett, J., and Rempel, D. (1994), VDT-Related Musculoskeletal Symptoms
: Interactions between Work Posture and Psychosocial Work Factors, American Journa
l of Industrial Medicine, Vol. 26, pp. 597612. Fischer, G. (1991), The Importance o
f Models in Making Complex Systems Comprehensible, in Mental Models and Human Comp
uter Interaction, Vol. 2, M. J. Tauber and D. Ackermann, Eds., North-Holland, Am
sterdam, pp. 336. Flynn, F. M. (1989), Introducing New Technology into the Workplac
e: The Dynamics of Technological and Organizational Change, in Investing in PeopleA
Strategy to Address Americas Workforce Crisis, U.S. Department of Labor, Commiss
ion on Workforce Quality and Labor Market Ef ciency, Washington, DC, pp. 411456. Fo
gelman, M., and Brogmus, G. (1995), Computer Mouse Use and Cumulative Trauma Disor
ders of the Upper Extremities, Ergonomics, Vol. 38, No. 12, pp. 24652475. Gentner,
D. (1983), Structure-Mapping: A Theoretical Framework for Analogy, Cognitive Science
, Vol. 7, pp. 155170. Gentner, D., and Clement, C. (1983),Evidence for Relational S
electivity in the Interpretation of Analogy and Metaphor, Psychology of Learning a
nd Motivation, Vol. 22, pp. 307358. Gerard, M. J., Armstrong, T. J., Foulke, J. A
., and Martin, B. J. (1996), Effects of Key Stiffness on Force and the Development
of Fatigue while Typing, American Industrial Hygiene Association Journal, Vol. 57
, pp. 849854. Gould, J. D., Boies, S. J., and Ukelson, J. (1997), How to Design Usa
ble Systems, in Handbook of HumanComputer Interaction, 2nd Ed., M. Helander, T. K.
Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterdam, pp. 231254. Grandjean
, E. (1979), Ergonomical and Medical Aspects of Cathode Ray Tube Displays, Swiss
Federal Institute of Technology, Zurich. Grandjean, E. (1984), Postural Problems
at Of ce Machine Workstations, in Ergonomics and Health in Modern Of ces, E. Grandjean
, Ed., Taylor & Francis, London. Grandjean, E. (1988), Fitting the Task to the M
an, Taylor & Francis, London. Gray, W. D., John, B. E., and Atwood, M. E. (1993)
, Project Ernestine: Validating a GOMS Analysis for Predicting Real-World Task Per
formance, HumanComputer Interaction, Vol. 6, pp. 287 309. Greif, I., Ed. (1988), Com
puter-Supported Cooperative Work: A Book of Readings, Morgan Kaufmann, San Mateo
, CA. Greif, I., and Sarin, S. (1988), Data Sharing in Group Work, in Computer-Suppo
rted Cooperative Work, I. Greif, Ed., Morgan Kaufmann, San Mateo, CA, pp. 477508.
Guggenbuhl, U., and Krueger, H. (1990), Musculoskeletal Strain Resulting from Key
board Use, in Work with Display Units 89, L. Berlinguet and D. Berthelette, Eds.,
Elsevier, Amsterdam. Guggenbuhl, U., and Krueger, H. (1991), Ergonomic Characteris
tics of Flat Keyboards, in Design for Everyone Volume, Y. Queinnec and F. Daniello
u, Eds., Taylor & Francis, London, pp. 730 732.
HUMANCOMPUTER INTERACTION
1233
Hackos, J. T., and Redish, J. C. (1998), User and Task Analysis for Interface De
sign, John Wiley & Sons, New York. Hagberg, M. (1995), The MouseArm Syndrome Concurre
nce of Musculoskeletal Symptoms and Possible Pathogenesis among VDT Operators, in
Work with Display Units 94, A. Grieco, G. Molteni, E. Occhipinti, and B. Piccoli
, Eds., Elsevier, Amsterdam, pp. 381385. Hagberg, M., Silverstein, B., Wells, R.,
Smith, M. J., Hendrick, H. W., Carayon, P., and Perusse, M. (1995), Work Relate
d Musculoskeletal Disorders (WMDSs): A Reference Book for Prevention, Taylor & F
rancis, London. Haims, M. C., and Carayon, P. (1998), Theory and Practice for the
Implementation of In-house, Continuous Improvement Participatory Ergonomic Program
s, Applied Ergonomics, Vol. 29, No. 6, pp. 461472. Helander, M. G. (1982), Ergonomi
c Design of Of ce Environments for Visual Display Terminals, National Institute fo
r Occupational Safety and Health (DTMD), Cincinnati, OH. Ilg, R. (1987), Ergonomic
Keyboard Design, Behaviour and Information Technology, Vol. 6, No. 3, pp. 303309.
Jeffries, R. (1997), The Role of Task Analysis in Design of Software, in Handbook of
Human Computer Interaction, 2nd Ed., M. Helander, T. K. Landauer, and P. V. Prab
hu, Eds., NorthHolland, Amsterdam, pp. 347359. Johansson, G., and Aronsson, G. (1
984), Stress Reactions in Computerized Administrative Work, Journal of Occupational
Behaviour, Vol. 5, pp. 159181. Kahn, H., and Cooper, C. L. (1986), Computing Stress
, Current Psychological Research and Reviews, Summer, pp. 148162. Karat, J. (1988),
Software Evaluation Methodologies, in Handbook of HumanComputer Interaction, M. Hela
nder, Ed., North-Holland, Amsterdam, pp. 891903. Karat, J., and Dayton, T. (1995)
, Practical Education for Improving Software Usability, in CHI 95 Proceedings, pp.
162169. Karlqvist, L., Hagberg, M., and Selin, K. (1994), Variation in Upper Limb P
osture and Movement During Word Processing with and without Mouse Use, Ergonomics,
Vol. 37, No. 7, pp. 1261 1267. Karwowski, W., Eberts, R., Salvendy, G., and Norl
an, S. (1994), The Effects of Computer Interface Design on Human Postural Dynamics
, Ergonomics, Vol. 37, No. 4, pp. 703724. Kay, A. (1990), User Interface: A Personal
View, in The Art of HumanComputer Interaction, B. Laurel, Ed., Addison-Wesley, Read
ing, MA, pp. 191207. Kearsley, G. (1989), Introducing New Technology into the Workp
lace: Retraining Issues and Strategies, in Investing in PeopleA Strategy to Address
Americas Workforce Crisis, U.S. Department of Labor, Commission on Workforce Qua
lity and Labor Market Ef ciency, Washington, DC, pp. 457491. Kiesler, S., Siegel, J
., and McGwire, T. W. (1984), Social Psychological Aspects of ComputerMediated Com
munication, American Psychologist, Vol. 39, pp. 11231134. Kim, H., and Hirtle, S. C
. (1995), Spatial Metaphors and Disorientation in Hypertext Browsing, Behaviour and
Information Technology, Vol. 14, No. 4, pp. 239250. Kirwan, B., and Ainsworth, L.
K., Eds. (1992), A Guide to Task Analysis, Taylor & Francis, London. Korunka, C
., Weiss, A., and Zauchner, S. (1997), An Interview Study of Continuous Implementati
ons of Information Technology, Behaviour and Information Technology, Vol. 16, No.
1, pp. 316. Korunka, C., and Carayon, P. (1999), Continuous Implementation of Infor
mation Technology: The Development of an Interview Guide and a Cross-Sectional C
omparison of Austrian and American Organizations, International Journal of Human F
actors in Manufacturing, Vol. 9, No. 2, pp. 165183. Korunka, C., Huemer, K. H., L
itschauer, B., Karetta, B., and Kafka-Lutzow, A. (1996), Working with New Techno
logiesHormone Excretion as Indicator for Sustained Arousal, Biological Psychology,
Vol. 42, pp. 439452. Korunka, C., Weiss, A., Huemer, K. H., and Karetta, B. (1995
), The Effects of New Technologies on Job Satisfaction and Psychosomatic Complaint
s, Applied Psychology: An International Review, Vol. 44, No. 2, pp. 123142. Korunka
, C., Weiss, A., and Karetta, B. (1993), Effects of New Technologies with Special
Regard for the Implementation Process per Se, Journal of Organizational Behaviour,
Vol. 14, pp. 331 348.
1234
PERFORMANCE IMPROVEMENT MANAGEMENT
Kroemer, K. H. E. (1972), Human Engineering the Keyboard, Human Factors, Vol. 14, pp
. 51 63. Kroemer, K. H. E., and Grandjean, E. (1997), Fitting the Task to the Hum
an, Taylor & Francis, London. Levi, L. (1972), Conditions of Work and Sympathoadre
nomedullary Activity: Experimental Manipulations in a Real Life Setting, in Stre
ss and Distress in Response to Psychosocial Stimuli, L. Levi, Ed., Acta Medica S
candinavica, Vol. 191, Suppl. 528, pp. 106118. Lim, S. Y., and Carayon, P. (1995)
, Psychosocial and Work Stress Perspectives on Musculoskeletal Discomfort, in Procee
dings of PREMUS 95, Institute for Research on Safety and Security (IRSST), Montr
eal. Lim, S. Y., Rogers, K. J. S., Smith, M. J., and Sainfort, P. C. (1989), A Stu
dy of the Direct and Indirect Effects of Of ce Ergonomics on Psychological Stress
Outcomes, in Work with Computers: Organizational, Management, Stress and Health As
pects, M. J. Smith and G. Salvendy, Eds., Elsevier, Amsterdam, pp. 248255. Lindst
rom, K., and Leino, T. (1989), Assessment of Mental Load and Stress Related to Inf
ormation Technology Change in Baking and Insurance, in ManComputer Interaction Rese
arch, MACINTER-II, F. Klix, N. A. Streitz, Y. Waern, and H. Wandke, Eds., Elsevi
er, Amsterdam, pp. 523 533. Linton, S. J., and Kamwendo, K. (1989), Risk Factors in
the Psychosocial Work Environment for Neck and Shoulder Pain in Secretaries, Jour
nal of Occupational Medicine, Vol. 31, No. 7, pp. 609613. Marcus, A. (1997), Graphi
cal User Interfaces, in Handbook of HumanComputer Interaction, 2nd Ed., M. Helander
, T. K. Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterdam, pp. 423 440.
Martel, A. (1998), Application of Ergonomics and Consumer Feedback to Product Desi
gn at Whirlpool, in Human Factors in Consumer Products, N. Stanton, Ed., Taylor &
Francis, London, pp. 107126. Mayhew, D. J. (1992), Principles and Guidelines in S
oftware User Interface Design, Prentice Hall, Englewood Cliffs, NJ. Mayhew, D. J
. (1999), The Usability Engineering Lifecycle: A Practitioners Handbook for User
Interface Design, Morgan Kaufmann, San Francisco. McLeod, R. W., and Sherwood-Jo
nes, B. M. (1993), Simulation to Predict Operator Workload in a Command System, in A
Guide to Task Analysis, B. Kirwan, and L. K. Ainsworth, Eds., Taylor & Francis,
London, pp. 301310. Mountford, S. J. (1990), Tools and Techniques for Creative Des
ign, in The Art of HumanComputer Interaction, B. Laurel, Ed., Addison-Wesley, Readi
ng, MA, pp. 1730. Nardi, B. A. (1997), The Use of Ethnographic Methods in Design an
d Evaluation, in Handbook of HumanComputer Interaction, 2nd Ed., M. Helander, T. K.
Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterdam, pp. 361366. National
Academy of Sciences (NAS) (1983), Visual Displays, Work and Vision, National Ac
ademy Press, Washington, DC. Neale, D. C., and Carroll, J. M. (1997), The Role of
Metaphors in User Interface Design, in Handbook of HumanComputer Interaction, 2nd E
d., M. Helander, T. K. Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterda
m, pp. 441462. Newell, A. F., Arnott, J. L., Carter, K., and Cruikshank, G. (1990
), Listening Typewriter Simulation Studies, International Journal of ManMachine Studi
es, Vol. 33, pp. 119. Nielsen, J. (1993), Usability Engineering, Academic Press P
rofessional, Boston. National Institute for Occupational Safety and Health (NIOS
H) (1981), Potential Health Hazards of Video Display Terminals, Cincinnati. Nati
onal Institute for Occupational Safety and Health (NIOSH) (1990), Health Hazard
Evaluation ReportHETA 89-250-2046Newsday, Inc., U.S. Department of Health and Huma
n Services, Washington, DC. National Institute for Occupational Safety and Healt
h (NIOSH) (1992), Health Hazard Evaluation ReportHETA 89-299-2230US West Communica
tions, U.S. Department of Health and Human Services, Washington, DC. National In
stitute for Occupational Safety and Health (NIOSH) (1993), Health Hazard Evaluat
ion ReportHETA 90-013-2277Los Angeles Times, U.S. Department of Health and Human S
ervices, Washington, DC.
HUMANCOMPUTER INTERACTION
1235
National Institute for Occupational Safety and Health (NIOSH) (1997), Alternativ
e Keyboards, DHHS / NIOSH Publication No. 97-148, National Institute for Occupat
ional Safety and Health, Cincinnati, OH. Norman, D. A. (1988), The Design of Eve
ryday Things, Doubleday, New York. Norman, D. A. (1990), Why Interfaces Dont Work, in
The Art of HumanComputer Interaction, B. Laurel, Ed., Addison-Wesley, Reading, M
A, pp. 209219. Of ce of Technology Assessment (OTA) (1985), Automation of Americas O
f ces, OTA-CIT-287, U.S. Government Printing Of ce, Washington, DC. Ortony, A. (1979
), Beyond Literal Similarity, Psychological Review, Vol. 86, No. 3, pp. 161180. Parun
ak, H. V. (1989), Hypermedia Topologies and User Navigation, in Hypertext 89 Proceedi
ngs, ACM Press, New York, pp. 4350. Phizacklea, A., and Wolkowitz, C. (1995), Hom
eworking Women: Gender, Racism and Class at Work, Sage, London. Punnett, L., and
Bergqvist, U. (1997), Visual Display Unit Work and Upper Extremity Musculoskele
tal Disorders, National Institute for Working Life, Stockholm. Rada, R., and Ket
chel, J. (2000), Standardizing the European Information Society, Communications of t
he ACM, Vol. 43, No. 3, pp. 2125. Rempel, D., and Gerson, J. (1991), Fingertip Forc
es While Using Three Different Keyboards, in Proceedings of the 35th Annual Human
Factors Society Meeting, Human Factors and Ergonomics Society, San Diego. Rempel
, D., Dennerlein, J. T., Mote, C. D., and Armstrong, T. (1992), Fingertip Impact L
oading During Keyboard Use, in Proceedings of NACOB II: Second North American Cong
ress on Biomechanics (Chicago). Rempel, D., Dennerlein, J., Mote, C. D., and Arm
strong, T. (1994), A Method of Measuring Fingertip Loading During Keyboard Use, Jour
nal of Biomechanics, Vol. 27, No. 8, pp. 11011104. Rice, R. E., and Case, D. (198
3), Electronic Message Systems in the University: A Description of Use and Utility
, Journal of Communication, Vol. 33, No. 1, pp. 131152. Robey, D. (1987), Implementat
ion and the Organizational Impacts of Information Systems, Interfaces, Vol. 17, pp
. 7284. Rouse, W. B. (1991), Design for Success: A Human-Centered Approach to Des
igning Successful Products and Systems, John Wiley & Sons, New York. Sauter, S.
L., Gottlieb, H. S., Rohrer, K. N., and Dodson, V. N. (1983), The Well-Being of
Video Display Terminal Users, Department of Preventive Medicine, University of W
isconsin-Madison, Madison, WI. Schleifer, L. M., and Amick, B. C., III (1989), Sys
tem Response Time and Method of Pay: Stress Effects in Computer-Based Tasks, Inter
national Journal of HumanComputer Interaction, Vol. 1, pp. 2339. Sheridan, T. B. (
1997a), Task Analysis, Task Allocation and Supervisory Control, in Handbook of HumanC
omputer Interaction 2nd Ed., M. Helander, T. K. Landauer, and P. V. Prabhu, Eds.
, North-Holland, Amsterdam, pp. 87105. Sheridan, T. B. (1997b), Supervisory Control
, in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wil
ey & Sons, New York, pp. 12951327. Shneiderman, B. (1992), Designing the User Int
erface, 2nd Ed., Addison-Wesley, Reading, MA. Smith, M. J. (1984a), Health Issues
in VDT Work, in Visual Display Terminals: Usability Issues and Health Concerns, J.
Bennett, D. Case, and M. J. Smith, Eds., Prentice Hall, Englewood Cliffs, NJ, p
p. 193228. Smith, M. J. (1984b), The Physical, Mental and Emotional Stress Effects
of VDT Work, Computer Graphics and Applications, Vol. 4, pp. 2327. Smith, M. J. (19
95), Behavioral Cybernetics, Quality of Working Life and Work Organization in Comp
uter Automated Of ces, in Work With Display Units 94, A. Grieco, G. Molteni, E. Occh
ipinti, and B. Piccoli, Eds., Elsevier, Amsterdam, pp. 197202. Smith, M. J., and
Carayon, P. (1995), Work Organization, Stress and Cumulative Trauma Disorders, in Be
yond Biomechanics: Psychosocial Aspects of Cumulative Trauma Disorders, S. Moon
and S. Sauter, Eds., Taylor & Francis, London. Smith, M. J., and Sainfort, P. C.
(1989), A Balance Theory of Job Design for Stress Reduction, International Journal
of Industrial Ergonomics, Vol. 4, pp. 6779. Smith, M. J., Cohen, B. G. F., Stamme
rjohn, L. W., and Happ, A. (1981), An Investigation of Health Complaints and Job S
tress in Video Display Operations, Human Factors, Vol. 23, pp. 387400.
1236
PERFORMANCE IMPROVEMENT MANAGEMENT
Smith, M. J., Carayon, P., Sanders, K. J., Lim, S. Y., and LeGrande, D. (1992a),
Employee Stress and Health Complaints in Jobs with and without Electronic Perform
ance Monitoring, Applied Ergonomics, Vol. 23, No. 1, pp. 1727. Smith, M. J., Salven
dy, G., Carayon-Sainfort, P., and Eberts, R. (1992b), HumanComputer Interaction, in H
andbook of Industrial Engineering, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons,
New York. Smith, M. J., Karsh, B., Conway, F., Cohen, W., James, C., Morgan, J.
, Sanders, K., and Zehel, D. (1998), Effects of a Split Keyboard Design and Wrist
Rests on Performance, Posture and Comfort, Human Factors, Vol. 40, No. 2, pp. 32433
6. Sproull, L., and Kiesler, S. (1988), Reducing Social Context Cues: Electronic M
ail in Organizational Communication, in Computer-Supported Cooperative Work: A Boo
k of Readings, I. Greif, Ed., Morgan Kaufmann, San Mateo, CA, pp. 683712. Sproull
, L. and Kiesler, S. (1991), Connections, MIT Press, Cambridge, MA. Stanney, K.
M., Maxey, J., and Salvendy, G. (1997), Socially-Centered design, in Handbook of Hum
an Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New Yor
k, pp. 637656. Stanton, N. (1988), Product Design with People in Mind, in Human Facto
rs in Consumer Products, N. Stanton, Ed., Taylor & Francis, London, pp. 117. Step
hanidis, C., and Salvendy, G. (1998), Toward an Information Society for All: An In
ternational Research and Development Agenda, International Journal of HumanComputer
Interaction, Vol. 10, No. 2, pp. 107134. Swanson, N. G., Galinsky, T. L., Cole,
L. L., Pan, C. S., and Sauter, S. L. (1997), The Impact of Keyboard Design on Comf
ort and Productivity in a Text-Entry Task, Applied Ergonomics, Vol. 28, No. 1, pp.
916. Takahashi, D. (1998), Technology Companies Turn to Ethnographers, Psychiatris
ts, The Wall Street Journal, October 27. Theorell, T., Ringdahl-Harms, K., Ahlberg
-Hulten, G., and Westin, B. (1991), Psychosocial Job Factors and Symptoms from the
Locomotor SystemA Multicausal Analysis, Scandinavian Journal of Rehabilitation Med
icine, Vol. 23, pp. 165173. Trankle, U., and Deutschmann, D. (1991), Factors In uenci
ng Speed and Precision of Cursor Positioning Using a Mouse, Ergonomics, Vol. 34, N
o. 2, pp. 161174. Turk, M., and Robertson, G. (2000), Perceptual User Interfaces, Com
munications of the ACM, Vol. 43, No. 3, pp. 3334. Vanderheiden, G. C. (1997), Desig
n for People with Functional Limitations Resulting from Disability, Aging, or Ci
rcumstance, in Handbook of Human Factors and Ergonomics, 2nd Ed., G. Salvendy, Ed.
, John Wiley & Sons, New York, pp. 20102052. Verhoef, C. W. M. (1988), Decision Mak
ing of Vending Machine Users, Applied Ergonomics, Vol. 19, No. 2, pp. 103109. Verme
eren, A. P. O. S. (1999), Designing Scenarios and Tasks for User Trials of Home El
ectronics Devices, in Human Factors in Product Design: Current Practice and Future
Trends, W. S. Green and P. W. Jordan, Eds., Taylor & Francis, London, pp. 4755.
Vertelney, L., and Booker, S. (1990), Designing the Whole Product User Interface, in
The Art of HumanComputer Interaction, B. Laurel, Ed., Addison-Wesley, Reading, M
A, pp. 5763. Vicente, K. J. (1999), Cognitive Work Analysis: Towards Safe, Produc
tive, and Healthy ComputerBased Work, Erlbaum, Mahwah, NJ. Wells, R., Lee, I. H.
and Bao, S. (1997), Investigations of Upper Limb Support Conditions for Mouse Use
, in Proceedings of the 29th Annual Human Factors Association of Canada. Wilson, J
. R., and Haines, H. M. (1997), Participatory Ergonomics, in Handbook of Human Facto
rs and Ergonomics 2nd Ed., G. Salvendy, Ed., John Wiley & Sons, New York, pp. 49
0513. Wixon, D., and Wilson, C. (1997), The Usability Engineering Framework for Pro
duct Design and Evaluation, in Handbook of HumanComputer Interaction, 2nd Ed., M.
Helander, T. K. Landauer, and P. V. Prabhu, Eds., North-Holland, Amsterdam, pp.
653688. Yang, C.-L. (1994), Test of a Model of Cognitive Demands and Worker Stress
in Computerized Of ces, Ph.D. Dissertation, Department of Industrial Engineering, U
niversity of WisconsinMadison. Yang, C. L., and Carayon, P. (1995), Effect of Job
Demands and Social Support on Worker Stress: A Study of VDT Users, Behaviour and I
nformation Technology, Vol. 14, No. 1, pp. 3240.
IV.A
Project Management
Handbook of Industrial Engineering: Technology and Operations Management, Third Edition. Edited by Gavriel Salvendy. Copyright 2001 John Wiley & Sons, Inc.
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
1. INTRODUCTION
1.1. Projects and Processes
A project is an organized endeavor aimed at accomplishing a specific nonroutine or low-volume task (Shtub et al. 1994). Because of sheer size (the number of man-hours required to perform the project) and specialization, most projects are performed by teams. In some projects the team members belong to the same organization, while in many others the work content of the project is divided among individuals from different organizations. Coordination among the individuals and organizations involved in a project is a complex task. To ensure success, deliverables produced at different geographical locations, by different individuals from different organizations, at different times must be integrated. Projects are typically performed under time pressure, limited budgets, tight cash-flow constraints, and uncertainty. Thus, a methodology is required to support the management of projects.
Early efforts to develop such a methodology focused on tools. Tools for project scheduling, such as the Gantt chart and the critical path method (CPM), were developed along with tools for resource allocation, project budgeting, and project control (Shtub et al. 1994). The integration of different tools into a complete framework that supports project-management efforts throughout the entire project life cycle (see Section 1.2 below) was achieved by the introduction of project-management processes. A project-management process is a collection of tools and techniques that are applied to a predefined set of inputs to produce a predefined set of outputs. Processes are connected to each other because the input to some project-management processes is created as the output of other processes. The collection of interrelated project-management processes forms a methodology that supports the management of projects throughout their life cycle, from the initiation of a new project to its (successful) end.
This chapter presents a collection of such interrelated processes. The proposed framework is based on the Project Management Body of Knowledge (PMBOK), developed by the Project Management Institute (PMI) (PMI 1996). The purpose of this chapter is to present the processes and the relationships between them; a detailed description of the processes is available in the PMBOK. PMI conducts a certification program based on the PMBOK: a Project Management Professional (PMP) certificate is earned by passing an exam and accumulating relevant experience in the project-management discipline.
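The critical path method mentioned above can be illustrated with a short sketch. This is a minimal, illustrative implementation, not code from the handbook; the activity names, durations, and precedence relations are invented for the example.

```python
# Critical path method (CPM) sketch over an activity-on-node network.
# Assumes the network is acyclic; all data below are illustrative.
from collections import defaultdict

def critical_path(durations, predecessors):
    """Forward/backward pass; returns (project duration, critical activities)."""
    # Build a topological order by repeatedly selecting ready activities.
    order, placed = [], set()
    while len(order) < len(durations):
        for a in durations:
            if a not in placed and all(p in placed for p in predecessors.get(a, [])):
                order.append(a)
                placed.add(a)
    # Forward pass: earliest start (es) and earliest finish (ef).
    es, ef = {}, {}
    for a in order:
        es[a] = max((ef[p] for p in predecessors.get(a, [])), default=0)
        ef[a] = es[a] + durations[a]
    project_end = max(ef.values())
    # Backward pass: latest finish (lf) and latest start (ls).
    successors = defaultdict(list)
    for a, preds in predecessors.items():
        for p in preds:
            successors[p].append(a)
    lf, ls = {}, {}
    for a in reversed(order):
        lf[a] = min((ls[s] for s in successors[a]), default=project_end)
        ls[a] = lf[a] - durations[a]
    # Critical activities have zero slack (es == ls).
    critical = [a for a in order if es[a] == ls[a]]
    return project_end, critical

durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
end, crit = critical_path(durations, predecessors)
print(end, crit)  # 9 ['A', 'C', 'D']
```

Delaying any activity on the critical path (here A, C, D) delays the whole project, which is why the schedule-control processes discussed later focus on it.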
1.2. The Project Life Cycle
Since a project is a temporary effort designed to achieve a specific set of goals, it is convenient to define phases that the project goes through. The collection of these phases is defined as the project life cycle. Analogous to a living creature, a project is born (initiated), performed (lives), and terminated (dies), always following the same sequence. This simple three-phase life-cycle model is conceptually helpful in understanding the project-management processes because each process can be defined with respect to each phase. However, this simple model is not detailed enough for implementation (in some projects each phase may span several months or years). Thus, more specific life-cycle models for families of similar projects were developed. A specific life-cycle model is a set of stages or phases that a family of projects goes through. The project's phases are performed in sequence or concurrently. The project life cycle defines the steps required to achieve the project goals as well as the content of each step. Thus, the literature on software projects is based on specific life-cycle models, such as the spiral model.
Although the PMBOK processes may need modifications in certain projects, the PMBOK is a well-accepted source of information, and therefore the following definition of processes is based on it. The PMBOK classifies processes in two ways:
1. By sequence: initiating processes, planning processes, executing processes, controlling processes, and closing processes
2. By management function: processes in integration, scope, time, cost, quality, human resource, communication, risk, and procurement management
The application of these processes in a specific organization requires customization, development of supporting tools, and training.
3. PROJECT INTEGRATION MANAGEMENT
3.1. Processes
Project integration management involves three processes:
1. Project plan development
2. Project plan execution
3. Overall change control
The purpose of these three processes is to ensure coordination among the various elements of the project. Coordination is achieved by taking inputs from other processes, integrating the information contained in those inputs, and providing integrated outputs to the decision makers and to other processes. The project life-cycle model plays an important role in project integration management because project plan development is performed during the early phases of the project, while project plan execution and overall change control are performed during the other phases. With a well-defined life-cycle model, it is possible to define the data, decision making, and activities required at each phase of the project life cycle and, consequently, to adequately train those responsible for performing the processes.
3.2. Description
The project plan and its execution are the major outputs of these processes. The plan is based on inputs from other planning processes (discussed later), such as scope planning, schedule development, resource planning, and cost estimating, along with historical information and organizational policies. The project plan is continuously updated based on corrective actions triggered by approved change requests and by analysis of performance measures. Execution of the project plan produces work results: the deliverables of the project.
4. PROJECT SCOPE MANAGEMENT
4.1. Processes
Project scope management involves five processes:
1. Initiation
2. Scope planning
3. Scope definition
4. Scope verification
5. Scope change control
The purpose of these processes is to ensure that the project includes all the work (and only the work) required for its successful completion. In the following discussion, scope refers both to the product scope (defined as the features and functions to be included in the product or service) and to the project scope (defined as the work that must be done in order to deliver the product scope).
4.2. Description
The scope is defined based on a description of the needed product or service. Alternative products or services may exist; based on appropriate selection criteria and a selection methodology, the best alternative is selected, and a project charter is issued along with the nomination of a project manager. The project manager should evaluate the different ways to produce the selected product or service and apply a methodology such as cost-benefit analysis to select the best one. Once an alternative is selected, a work breakdown structure (WBS) is developed. The WBS is a hierarchical representation of the project scope in which the upper level is the whole project and the lowest level consists of work packages. Each work package is assigned to a manager (organizational unit) responsible for its scope.
5. PROJECT TIME MANAGEMENT
5.1. Processes
Project time management involves five processes:
1. Activity definition
2. Activity sequencing
3. Activity duration estimating
4. Schedule development
5. Schedule control
The purpose of time management is to ensure timely completion of the project. The main output of time management is a schedule that defines what is to be done, when, and by whom. This schedule is used throughout project execution to synchronize the people and organizations involved in the project and as a basis for control.
5.2. Description
Each work package in the WBS is decomposed into the activities required to complete its predefined scope. A list of activities is constructed and the time to complete each activity is estimated. Estimates can be deterministic (point estimates) or stochastic (distributions). Precedence relations among activities are defined, and a model such as a Gantt chart, activity-on-arc (AOA) network, or activity-on-nodes (AON) network is constructed (Shtub et al. 1994). An initial schedule is developed based on the model. This unconstrained schedule is a basis for estimating required resources and cash. Based on the constraints imposed by due dates, cash and resource availability, and the resource requirements of other projects, a constrained schedule is developed. Further tuning of the schedule may be possible by changing the resource combinations assigned to activities (these resource combinations are known as modes). The schedule is implemented by the execution of activities. Due to uncertainty, schedule control is required to detect deviations and decide how to react to such deviations and to change requests. The schedule control system is based on performance measures such as actual completion of deliverables (milestones), actual starting times of activities, and actual finishing times of activities. Changes to the baseline schedule are required whenever a change in the project scope is implemented.
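The schedule-development logic described above can be sketched in a few lines. The activity names, durations, and precedence relations below are invented for illustration; a forward pass over the AON network yields early start and finish times, a backward pass yields late times, and the zero-slack activities form the critical path.

```python
# A sketch of schedule development on an activity-on-nodes (AON) network.
# Durations and precedence relations are illustrative, not from the text.

activities = {  # name: (duration, list of predecessors)
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: schedule an activity once all its predecessors are scheduled.
early = {}  # name -> (early_start, early_finish)
while len(early) < len(activities):
    for name, (dur, preds) in activities.items():
        if name not in early and all(p in early for p in preds):
            es = max((early[p][1] for p in preds), default=0)
            early[name] = (es, es + dur)

project_duration = max(ef for _, ef in early.values())

# Backward pass: latest times that do not delay the project. Processing in
# descending early-finish order guarantees successors are handled first.
late = {}  # name -> (late_start, late_finish)
for name in sorted(activities, key=lambda n: -early[n][1]):
    succs = [s for s, (_, ps) in activities.items() if name in ps]
    lf = min((late[s][0] for s in succs), default=project_duration)
    late[name] = (lf - activities[name][0], lf)

# Zero slack (early start equals late start) marks the critical path.
critical_path = [n for n in activities if early[n][0] == late[n][0]]
```

The unconstrained schedule this produces is the starting point for the resource- and cash-constrained tuning discussed above.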
6. PROJECT COST MANAGEMENT
6.1. Processes
Project cost management involves four processes:
1. Resource planning
2. Cost estimating
3. Cost budgeting
4. Cost control
use one can get from a product before it has to be replaced due to technical or economic considerations. 6. Serviceability: This measure reflects the speed, courtesy, competence, and ease of repair should the product fail. The reliability of a product and its serviceability complement each other. A
are required to ensure that the total work content of the project is assigned to and performed by the work packages and that integration of the deliverables produced by the work packages into the final product is possible according to the project plans. The organizational plan defines roles and responsibilities as well as staffing requirements and the OBS of the project. Based on the organizational plan, staff assignment is performed. The availability of staff is compared to the requirements and gaps are identified. These gaps are filled by staff-acquisition activities. The assignment of available staff to the project and the acquisition of new staff result in the creation of a project team that may be a combination of employees assigned full-time to the project, full-time employees assigned part-time to the project, and part-timers. The assignment of staff to the project is the first step in the team-development process. To succeed in achieving the project goals, a team spirit is needed. The transformation of a group of people assigned to a project into a high-performance team requires leadership, communication skills, and negotiation skills as well as the ability to motivate people, coach and mentor them, and deal with conflicts in a professional yet effective way.
1248
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
9. PROJECT COMMUNICATIONS MANAGEMENT
9.1. Processes
Project communications management involves four processes:
1. Communications planning
2. Information distribution
3. Performance reporting
4. Administrative closure
These processes are required "to ensure timely and appropriate generation, collection, dissemination, storage, and ultimate disposition of project information" (PMI 1996). The processes are tightly linked with organizational planning. The communication within the project team, with stakeholders, and with the external environment can take many forms, including formal and informal communication, written or verbal communication, and planned or ad hoc communication. The decisions regarding communication channels, the information that should be distributed, and the best form of communication for each type of information are crucial in supporting teamwork and coordination.
9.2. Description
Communications planning is the process of selecting the communication channels, the modes of communication, and the contents of the communication among project participants, stakeholders, and the environment. Taking into account the information needs, the available technology, and constraints on the availability and distribution of information, the communications-management plan specifies the frequency and the methods by which information is collected, stored, retrieved, transmitted, and presented to the parties involved in the project. Based on the plan, data-collection as well as data-storage and retrieval systems can be implemented and used throughout the project life cycle. The project communication system, which supports the transmission and presentation of information, should be designed and established early on to facilitate the transfer of information. Information distribution is based on the communications-management plan. It is the process of implementing the plan throughout the project life cycle to ensure proper and timely information to the parties involved. In addition to the timely distribution of information, historical records are kept to enable analysis of the project records to support organizational and individual learning. Information related to the performance of the project is especially important. Performance reporting provides stakeholders with information on the actual status of the project, current accomplishments, and forecasts of future project status and progress. Performance reporting is essential for project control because deviations between plans and actual progress trigger control actions. To facilitate an orderly closure of each phase and of the entire project, information on the actual performance levels of the project phases and product is collected and compared to the project plan and product specifications. This verification process ensures an orderly, formal acceptance of the project products by the stakeholders and provides a means for record keeping that supports organizational learning.
10. PROJECT RISK MANAGEMENT
10.1. Processes
Project risk management involves four processes:
1. Risk identification
2. Risk quantification
3. Risk response development
4. Risk response control
These processes are designed to evaluate the possible risk events that might influence the project and to integrate proper measures for handling uncertainty into the project-planning, monitoring, and control activities.
10.2. Description
A risk event is a discrete random occurrence that (if it occurs) affects the project. Risk events are identified based on the difficulty of achieving the required project outcome (the characteristics of the product or service), constraints on schedules and budgets, and the availability of resources. The environment in which the project is performed is also a potential source of risk. Historical information
cal level, outsourcing can alleviate resource shortages, help in closing knowledge gaps, and increase the probability of project success. Management of the outsourcing process, from supplier selection to contract closeout, is an important part of the management of many projects.
11.2. Description
The decision on what parts in the project scope and product scope to purchase fr
om outside the performing organization, how to do it, and when to do it is criti
cal to the success of most projects. This is not only because significant parts of
many project budgets are candidates for outsourcing, but because the level of un
certainty and consequently the risk involved in outsourcing are quite different
from the levels of uncertainty and risk of activities performed in-house. In ord
er to gain a
competitive advantage from outsourcing, well-defined planning, execution, and control of outsourcing processes, supported by data and models (tools), are needed. The first step in the process is to consider what parts of the project scope and product scope to outsource. These are decisions regarding sources of capacity and know-how that can help the project achieve its goal. A conflict may exist between the goals of the project and other goals of the stakeholders. For example, subcontracting may help a potential future competitor develop know-how and capabilities. This was the case with IBM when it outsourced the development of the Disk Operating System (DOS) for the IBM PC to Microsoft and the development of the CPU to Intel. The analysis should take into account the cost, quality, speed, risk, and flexibility of in-house work vs. outsourcing. Outsourcing decisions should also take into account the long-term or strategic factors discussed earlier. Once a decision is made to outsource, a solicitation process is required. Solicitation planning deals with the exact definition of the goods or services required, estimates of the cost and time required, and preparation of a list of potential sources. During solicitation planning, a decision model can be developed, such as a list of required attributes with a relative weight for each attribute and a scale for measuring the attributes of the alternatives. A simple scoring model, as well as more sophisticated decision support models prepared in the solicitation-planning phase, can help in reaching consensus among stakeholders and in making the process more objective. Solicitation can take many forms: a request for proposal (RFP) advertised and open to all potential sources is one extreme, while a direct approach to a single preferred (or only) source is the other extreme, with many variations in between, such as the use of electronic commerce. The main output of the solicitation process is one or more proposals for the goods or services required. A well-planned solicitation-planning process followed by a well-managed solicitation process is required to make the next step, source selection, efficient and effective. Source selection is required whenever more than one adequate source is available. If a proper selection model is developed during the solicitation-planning process and all the data required for the model are collected from the potential suppliers during the solicitation process, source selection is easy, efficient, and effective. Based on the evaluation criteria and organizational policies, proposals are evaluated and ranked. Negotiations with the highest-ranked suppliers can take place to get the best and final offer, and the process is terminated when a contract is signed. If, however, solicitation planning and the solicitation process do not yield a clear set of criteria and a selection model, source selection may become a difficult and time-consuming process; it may not end with the best supplier selected or the best possible contract signed. It is difficult to compare proposals that are not structured according to clear RFP requirements, and in many cases important information is missing from the proposals. Throughout the life cycle of a project, contracts are managed as part of the execution and change control efforts. Work results are submitted and evaluated, payments are made, and, when necessary, change requests are issued. When these are approved, changes are made to the contracts. Contract management is equivalent to the management of a work package performed in-house, and therefore similar tools are required during the contract-administration process. Contract closeout is the closing process that signifies formal acceptance and closure. Based on the original contract and all the approved changes, the goods or services provided are evaluated and, if accepted, payment is made and the contract is closed. Information collected during this process is important for future projects and supplier selection because effective management is based on such information.
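A simple scoring model of the kind described above can be sketched as follows. The attribute names, weights, and ratings are hypothetical; each proposal is rated on a common scale, the ratings are multiplied by the attribute weights, and suppliers are ranked by total weighted score.

```python
# A sketch of a weighted scoring model for source selection. The attributes,
# weights, suppliers, and ratings below are all hypothetical.

weights = {"price": 0.40, "quality": 0.30, "delivery": 0.20, "support": 0.10}

proposals = {  # supplier -> rating per attribute on a 1-10 scale
    "Supplier A": {"price": 7, "quality": 9, "delivery": 6, "support": 8},
    "Supplier B": {"price": 9, "quality": 6, "delivery": 8, "support": 6},
    "Supplier C": {"price": 6, "quality": 8, "delivery": 8, "support": 9},
}

def weighted_score(ratings):
    """Sum of attribute ratings weighted by their relative importance."""
    return sum(weights[attr] * ratings[attr] for attr in weights)

# Rank suppliers from highest to lowest weighted score.
ranking = sorted(proposals, key=lambda s: weighted_score(proposals[s]), reverse=True)
```

Agreeing on the weights and the rating scale during solicitation planning, before proposals arrive, is what makes the subsequent ranking defensible to stakeholders.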
12. THE LEARNING ORGANIZATION: CONTINUOUS IMPROVEMENT IN PROJECT MANAGEMENT
12.1. Individual Learning and Organizational Learning
Excellence in project management is based on the ability of individuals to initi
ate, plan, execute, control, and terminate the project scope and product scope s
ce, additional applications became commercially available. These range from tools that automate other project management processes (such as risk-management tools like @Risk) to tools that help manage areas that are ancillary to, but have a direct impact upon, project management (such as multiproject resource-management tools like ResSolution). With the availability of all of these different types of tools, deciding which tools, if any, are appropriate for a specific organization is often a difficult proposition. In the next sections, we will discuss the processes that are involved in, or have an impact upon, project management and see how the use of computer tools can facilitate these processes.
3. THE PROJECT CONCENTRIC CIRCLE MODEL
In any given organization, generally two types of work activities take place. The first type, the one with which most people are familiar, is operations work. The activities in operations work have the following characteristics:
They are repetitive (they occur over and over from fiscal quarter to fiscal quarter and from fiscal year to fiscal year).
Their end result is essentially the same (production of operations reports, etc.).
The second type, which we are addressing in this chapter, is project work. As on
e might expect, project work is characterized by work that is (1) not repetitive
, but rather time limited (it has a specific beginning and end), and (2) produces a
unique product or service. Project management is the set of activities involved
in managing project work. When looking at the set of processes involved in or s
urrounding project management, it is useful to use a framework that the author h
as called the Project Concentric Circle Model.* This model is depicted in Figure
1. The model consists of three concentric circles. Each circle represents a lev
el at which project management processes, or processes affecting the project man
agement processes, take place. Each will be briefly discussed below.
3.1. Project Management Core Processes
The center circle represents the project management core processes, or processes that function within individual projects. The reader can find a detailed description of these processes in the PMBOK Guide (PMI 1996). In brief, these are areas that address the following project management activities at the individual project level:
Time management
Scope management
Cost management
Risk management
Quality management
Human resources management
Procurement management
Communications management
Integration management
The PMBOK Guide also describes the five project management processes throughout which activities in each above-noted area of management need to be performed. These processes are portrayed in Figure 2. It is these management areas and processes that most organizations associate with project management. And some assume that attending to these alone will ensure successful organizational project work. That, however, is a fatal error for many organizations. In order to achieve a high level of competence in project management, two other levels must also be addressed.
3.2. Project Management Support Processes
The second circle represents the project management support processes level and
includes processes that occur outside of the day-to-day activities of the indivi
dual project teams. The activities within these processes generally comprise ope
rational activities, not project activities, that support project
Figure 1 Project Concentric Circle Model.
* The reader is advised that the Project Concentric Circle Model is a copyright
of Oak Associates, Inc. Any reproduction or use of the model without the express
consent of Oak Associates, Inc. and the publisher is strictly prohibited.
An example of one type of tool frequently used in project management is a list of items that, when completed, would signify the completion of a project deliverable; a punch list is one such list that is regularly used to this day in construction projects. More and more, these tools are being incorporated into computer applications. In this section, we will take a look at tools that are available, or are being constructed, to automate the concentric circle processes.
4.1. Automating the Project Management Core Processes
Before proceeding, a brief word of caution is in order. It is the mistaken belief of many that in order to manage projects effectively, one merely needs to purchase a project management tool and become trained in use of the tool (the "buy 'em a tool and send 'em to school" approach). This is possibly the worst approach that could be taken to improve the effectiveness of project management in an organization. We have already noted that project management predated the commercially available tools that aid that endeavor, so we know that it is possible to manage projects effectively without the use of automation. The single most important thing to remember about these tools is that it is not the tool, but rather the people using the tool, who manage the projects. In order for people to use the tools properly, they must first master the techniques upon which these tools are based. As an example, to develop useful data for scope, time, and cost management, the successful tool user must have a working knowledge of scope statement development; work definition (through work breakdown structures or other such techniques); activity estimating (three-point estimating of both effort and duration); the precedence diagramming method (also known as project network diagramming); and progress-evaluation techniques (such as earned value). Expecting success through the use of a tool without a thorough prior grounding in these techniques is like expecting someone who has no grounding in the basics of writing (grammar, syntax, writing technique) to use a word-processing application to produce a novel. Some novels on the market notwithstanding, it just does not happen that way. With this firmly in mind, let's look at the types of tools one might use in modern project management.
4.1.1. Scope, Time, Cost, and Resource Management
The preponderance of tools on the market today are those that aid project managers in time and cost management (commonly called schedule and budget management). In addition, many of these tools include resource management. These tools can be helpful in:
Developing activity lists (project scope) and displaying work breakdown structures
Noting activity estimates (in some cases, calculating most likely estimates for three-point estimating techniques)
Assigning dependencies (precedence structure) among activities
Calculating and displaying precedence diagrams (PERT charts)
Calculating and displaying project schedules (Gantt charts)
Assigning individual or group resources
Setting and displaying calendars (both for the project and for individual resources)
Calculating project costs (for various types of resources)
Entering time card and resource usage data
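The three-point estimating technique noted in the list above is commonly computed as a weighted average of optimistic, most likely, and pessimistic values, with a standard deviation derived from the range. The sketch below uses the classical PERT weighting; the sample figures are illustrative.

```python
# A sketch of a three-point (PERT-style) activity estimate: the expected
# duration weights the most likely value most heavily, and the standard
# deviation reflects the optimistic-to-pessimistic spread.

def three_point(optimistic, most_likely, pessimistic):
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# e.g., an activity estimated at 2 days best case, 4 days likely, 8 days worst case
expected, std_dev = three_point(2, 4, 8)
```

Tools that "calculate most likely estimates for three-point estimating techniques" are automating exactly this kind of arithmetic across the activity list.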
Figure 3 Work Breakdown Structure. (The figure shows a sample WBS for a product, with elements including Project Management, Hardware, Media, Issue RFP, Select Vendors, Requirements, and Design.)
Figure 6 Earned Value S Curves. (The figure plots planned progress, actual costs, and earned value, in units, against the reporting period. The underlying data:)

Reporting Period:    1   2    3    4    5    6    7    8     9     10    11    12
Planned Progress:   20  50  110  225  425  695  875  1005  1105  1230  1260  1280
Actual Costs:       30  60  151  337  567  827
Earned Value:       20  50  110  225  345  475
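Curves like those in Figure 6 support the basic earned-value variance and performance-index calculations used in progress evaluation. The sketch below computes them for reporting period 5 of the data shown.

```python
# Basic earned-value metrics at one reporting period (period 5 of the
# Figure 6 data, in units). Negative variances signal trouble.

planned_value = 425   # planned progress (budgeted cost of work scheduled)
actual_cost = 567     # actual costs (actual cost of work performed)
earned_value = 345    # earned value (budgeted cost of work performed)

schedule_variance = earned_value - planned_value   # negative: behind schedule
cost_variance = earned_value - actual_cost         # negative: over budget
spi = earned_value / planned_value                 # schedule performance index
cpi = earned_value / actual_cost                   # cost performance index
```

Here both variances are negative, which is visible in the figure as the earned-value curve falling below planned progress while actual costs run above it.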
One type of risk that all project managers face is that associated with project schedules. A typical method for handling this risk is to run Monte Carlo simulations on the project precedence diagram (PERT chart). This is done by (1) assigning random durations (within predefined three-point activity estimates) to individual activities, (2) calculating project duration over and over for hundreds (sometimes thousands) of repetitions, and (3) analyzing the distribution of probable outcomes of project duration. There are a number of tools on the market that perform these tasks. Two of the most popular are @Risk and Risk+. Other fine tools are also available that perform similarly. These tools can perform simulations on practically any project calculations that lend themselves to numerical analysis. The output of these tools is the analysis of probable project durations in both numerical and graphical formats (see Figure 7). Why is it important to use tools like these to help us manage risk? Quite simply, single-point project estimates are rarely, if ever, met. Project managers need to understand the probable range of outcomes of both project cost and duration so they can make informed decisions around a host of project issues (e.g., setting project team goals, deciding when to hire project personnel). They also need this information to set proper expectations and conduct intelligent discussions with the project team members, senior managers, and customers. The correct use of such tools can help project managers do just that. In addition to these tools, other tools are available to help track the status of potential risk events over the course of a project. One such tool, Risk Radar, was designed to help managers of software-intensive development programs. Regardless of the intended target audience, this tool can be quite helpful for any type of project risk-tracking effort. With the proper input of risk data, it displays a
Figure 7 Project Outcome Probability Curve. (The figure plots probability of occurrence against the range of possible outcomes, contrasting a higher-risk distribution with a lower-risk one.)
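The three simulation steps described above can be sketched directly. The two-path network and the (optimistic, most likely, pessimistic) estimates below are invented for illustration; each run samples a triangular duration for every activity within its three-point range and takes the longer path as the project duration.

```python
# A minimal sketch of Monte Carlo schedule simulation: sample activity
# durations from three-point ranges, recompute project duration many times,
# and inspect the distribution of outcomes. The network is illustrative.
import random

random.seed(1)  # fixed seed so the run is repeatable

# Two parallel paths of activities; project duration is the longer path.
path_1 = [(2, 4, 8), (3, 5, 9)]   # (optimistic, most likely, pessimistic)
path_2 = [(1, 6, 7), (2, 3, 10)]

def sample_duration(path):
    """One random realization of a path's total duration."""
    return sum(random.triangular(o, p, m) for o, m, p in path)

durations = sorted(
    max(sample_duration(path_1), sample_duration(path_2))
    for _ in range(10_000)
)
p50 = durations[len(durations) // 2]        # median outcome
p90 = durations[int(len(durations) * 0.9)]  # 90th-percentile outcome
```

Reading off percentiles like these, rather than a single-point estimate, is exactly the "probable range of outcomes" the text argues project managers need.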
With the availability of Internet communications and the advent of tools similar
to those noted above, tool sets such as these should be readily available for u
se by project teams.
4.2. Automating the Organizational Support Processes
Like the communications tools discussed above, tools that are useful in the orga
nizational support processes are those that have been used for some time in oper
ations management. Operations management processes are, after all, operations wo
rk (line or functional management) as opposed to project work (project managemen
t). Since operations management has been taught in business schools for decades,
there are tools on the market that can aid in various aspects of these endeavor
s. While these tools are too numerous to discuss here, some have been designed w
ith the express purpose of supporting project management activity. Among them ar
e:
Multiproject resource-management tools: These tools help line managers manage scarce resources across the many projects in which their organizations are involved. They include tools such as ResSolution* and Business Engine.
Project portfolio management tools: These are tools that help senior managers balance the accomplishment of their organizational goals across the range of projects, both ongoing and potential, in their organizations. They address issues such as budget, benefits, market, product line, probability of success, technical objectives, and ROI to help them prioritize and undertake projects. One such tool that does this is the project portfolio module of Portfolio Plus.
Activity and project historical databases: These are tools that help a project team and all managers more accurately estimate the outcomes of their projects. Among the many problems that arise in projects, unrealistic expectations about project outcomes are among the most flagrant. One reason for these unrealistic expectations is the poor quality of activity-level estimates. One way to increase the accuracy of these estimates is to employ the three-point estimating techniques referred to above. An even better way of increasing the accuracy of estimates is to base them on historical data. Were one to examine the activities that are performed over time in an organization's projects, it would become apparent that many of the same types of activities are performed over and over from one project to another. In some cases, nearly 80% of these activities are repeated from one project to another. Unfortunately, in many organizations, such historical data is rarely available for project team members to use for estimating. Consequently, three-point estimating techniques need to be universally employed, project after project. Once organizations develop, maintain, and properly employ accurate activity historical databases, the need for the relatively less accurate three-point estimates (remember that single-point estimates are much less accurate than three-point estimates) will be reduced, thereby resulting in more accurate estimates at both the activity and project levels.
Finally, we should mention that while integration of all the types of tools discussed is probably technologically possible, it is not always either necessary or desirable. In fact, it is the author's belief that in some instances, particularly in the case of multiple-project resource management, it is better to do detailed management of project resources within the context of the center circle and less detailed management at a higher level within the context of the outer circles, without daily integration of the two activities.
5. IMPLEMENTING CAPM
The selection, implementation, and use of these tools are not tasks to be taken lightly. And while the process may at first seem daunting, there are ways to make it easier. A number of sources can aid in the selection process. At least two publications presently do annual software surveys in which they compare the capabilities of various project management tools. Many of these tools perform the functions discussed above. These publications are IIE Solutions, a monthly publication of the Institute of Industrial Engineers, and PMnet, a monthly publication of the Project Management Institute. The National Software Testing Laboratories (NSTL) also tests and compares software programs. It makes these comparisons in over 50 categories of tool capabilities for project management software. The major areas of comparison include:
Performance
Versatility
* In the interest of fairness and full disclosure, the author must acknowledge t
hat the organization in which he is a principal is a reseller of both ResSolutio
n and Project 98 Plus software, both of which are cited in this chapter.
surely not vanish, however, is the ever-increasing need of project managers and organizations for tools to help them accomplish their complex and difficult jobs. While once just "nice to have," these tools are now a necessity. So what does the future hold for computer-aided project management? More and more tools are expanding from those aimed at individual use to those available for workgroups. Projects are, after all, team endeavors. MS Project 2000 includes an Internet browser module called Microsoft Project Central that is aimed at allowing the collaborative involvement of project stakeholders in planning and tracking projects, as well as access to important project information. With the ever-increasing demand for accurate project information, coupled with the cross-geographical nature of many project efforts, Web-based project communications tools will surely also become a requirement and not just a convenience. The author has worked with a few companies that have already developed these tools for their internal use. It is also inevitable that at some point in the not-too-distant future, complete tool s
ets that incorporate and integrate many of the varied capabilities described in the paragraphs above will also become available for comm
ercial use. It is only a matter of time before such software applications will be developed and appear on the shelves of your electronic shopping sites. However, despite the advances in technology that will inevitably lead to this availability, the age-old problems of successful selection, introduction, and implementation of these tools will remain. If organizations take the time to accomplish these tasks in a cogent and supportive way, the tools will continue to be a significant benefit in the successful implementation of the project management processes.
REFERENCES
Project Management Institute (PMI) (1996), A Guide to the Project Management Bod
y of Knowledge, PMI, Upper Darby, PA.
ADDITIONAL READING
Graham, R. J., and Englund, R. L., Creating an Environment for Successful Projects, Jossey-Bass, San Francisco, 1997.
Kerzner, H., Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 5th Ed., Van Nostrand Reinhold, New York, 1995.
Martin, J., An Information Systems Manifesto, Prentice Hall, Englewood Cliffs, NJ, 1984.
National Software Testing Laboratories (NSTL), Project Management Programs, Software Digest, Vol. 7, No. 16, 1990.
Oak Associates, Inc., Winning Project Management, Course Notebook, Maynard, MA, 1999.
APPENDIX Trademark Notices
Artemis is a registered trademark of Artemis Management Systems. Business Engine
is a registered trademark of Business Engine Software Corp. GRANEDA Dynamic is
a registered trademark of Netronic Software GmbH. Harvard Project Manager is a r
egistered trademark of Harvard Software, Inc. Macintosh and MacProject are regis
tered trademarks of Apple Computer Corp. Microsoft Project, Microsoft Project 98
, and Microsoft Project 2000 are registered trademarks of Microsoft Corp. Portfo
lio Plus is a registered trademark of Strategic Dynamics, Ltd. ResSolution is a
registered trademark of, and Project 98 Plus is a product of, Scheuring Projektm
anagement. Primavera and SureTrak are registered trademarks of Primavera Systems
, Inc. PS7 is a registered trademark of Scitor Corp. Project Workbench is a regi
stered trademark of Applied Business Technology Corp. @Risk is a registered trad
emark of Palisade Corp. Risk+ is a registered trademark of ProjectGear, Inc. Risk
Radar is a product of Software Program Managers Network. SuperProject is a regis
tered trademark of Computer Associates, Inc.
1. INTRODUCTION: DIVISION OF LABOR AND ORGANIZATIONAL STRUCTURES
1.1. Introduction
Division of labor is a management approach based on the breaking up of a process into a series of small tasks so that each task can be assigned to a different worker. Division of labor narrows the scope of work each worker has to learn, enabling workers to learn new jobs quickly and providing an environment where each worker can be equipped with the special tools and techniques required to do his or her job. Some advantages of division of labor and specialization are:
The fast development of a high degree of skill (specialization)
The saving of set-up time required to change from one type of work to another
The use of special-purpose, usually very efficient, machines, tools, and techniques developed for specific tasks
These benefits do not come for free. Division of labor requires integration of the outputs produced by the different workers into the final product. Thus, some of the efficiency gained by specialization is lost to the additional effort of integration management required. A common way to achieve integration is by a proper organizational structure, a structure that defines the roles and responsibilities of each person as well as the inputs required and the tools and techniques used to produce that person's outputs. This chapter discusses division of labor in projects. Section 1 deals with different organizational structures. Section 2 focuses on the work breakdown structure (WBS) as a tool that supports division of labor in projects. Section 3 discusses the relationship between the project organizational structure and the WBS, and Section 4 presents types of work breakdown structures along with a discussion of the design of a WBS. Section 5 discusses the building blocks of a WBS, known as work packages, and Section 6 discusses how the WBS should be used and managed in projects. Finally, Sections 7 and 8 present the issues of individual learning and organizational learning in the context of the support a WBS can provide to the learning process.
1.2. Organizational Structures
Organizations are as old as mankind. Survival forced people to organize into fam
ilies, tribes, and communities to provide for basic needs (security, food, shelt
er, etc.) that a single person had dif culty providing. Kingdoms and empires of th
e ancient world emerged as more formal organizations. While these organizations
had long-term goals, other organizations were created to achieve speci c unique go
als within a limited time frame. Some ambitious undertakings that required the c
oordinated work of many thousands of people, like the construction of the Pyrami
ds, Great Wall of China, or the Jewish Temple, motivated the development of ad h
oc organizations. As organizations grew larger and spread geographically, commun
ication lines and clear de nitions of roles became crucial. Formal organizations b
ased on a hierarchical structure were established. The hierarchical structure em
erged due to the limited ability of a supervisor to manage too many subordinates
. This phenomenon, known as the limited span of control, limits the number of su
bordinates one can supervise effectively. The role, responsibility, and authorit
y of each person in the organization were de ned. A typical example is documented
in the Bible where Moses divided the people of Israel into groups of 10 and clus
tered every 10 of these basic groups into larger groups of 100, and so on. The u
nderlying assumption in this case is that the span of control is 10. Clustering
was based on family relationships. Formal authority was de ned and lines of commun
ication were established to form a hierarchical organizational structure. The id
ea of a formal organization, where roles are de ned and communication lines are es
tablished, is a cornerstone in the modern business world. Division of labor and
specialization are basic building blocks in modern organizations. There is a lar
ge variety of organizational designs; some are designed to support repetitive (o
ngoing) operations, while others are designed to support unique one-time efforts
. A model known as the organizational structure is frequently used to represent
lines of communication, authority, and responsibility in business organizations.
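The span-of-control arithmetic behind such hierarchies can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original text; the function name and figures are assumptions:

```python
import math

def hierarchy_levels(people: int, span: int = 10) -> int:
    """Levels of supervision needed when each supervisor can manage
    at most `span` direct subordinates (limited span of control)."""
    levels = 0
    while people > 1:
        # Group every `span` people (or groups) under one supervisor.
        people = math.ceil(people / span)
        levels += 1
    return levels
```

With a span of control of 10, as in the biblical example, 100 workers need two levels of supervision and 1,000 need three; each additional level multiplies the manageable workforce by the span.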
1266
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
While the tasks assigned to each functional unit are repetitive and can be learned by repetition, the work content of each project must be defined and properly allocated to the individuals and organizations participating in the project. The work breakdown structure (WBS) is the tool commonly used to ensure proper division of labor and integration of the project deliverables.
2. HIERARCHIES IN THE PROJECT ENVIRONMENT: THE NEED FOR A WORK BREAKDOWN STRUCTURE
2.1. Introduction
As discussed in Section 1, the natural division of labor that exists in a functional organization is missing in projects. It is important to divide the total scope of the project (all the work that has to be done in the project) among the individuals and organizations that participate in it in a proper way: a way that ensures that all the work in the project scope is allocated to participants while no other work (i.e., work that is not in the project scope) is being done. A framework composed of two hierarchical structures, known as the work breakdown structure (WBS) and the organizational breakdown structure (OBS), is used for dividing the project scope amongst the participating individuals and organizations in an efficient and effective way, as discussed next.
2.2. The Scope

In a project context, the term scope refers to:

- The product or service scope, defined as the features and functions to be included in the product or service
- The project scope, defined as the work that must be done in order to deliver a product or service with the specified features and functions

The project's total scope is the sum of the products and services it should provide. The work required to complete this total scope is defined in a document known as the statement of work, or scope of work (SOW). All the work that is required to complete the project should be listed in the SOW, along with explanations detailing why the work is needed and how it relates to the total project effort. An example of a table of contents of a SOW document is given in the Appendix. This example may be too detailed for some (small) projects, while for other (large) projects it may not cover all the necessary details. In any case, a clearly written SOW establishes the foundation for division of labor and integration.
2.3. Implementing Division of Labor in Projects

The SOW is translated into a hierarchical structure called the work breakdown structure (WBS). There are many definitions of a WBS:

1. The Project Management Institute (PMI) defines the WBS as follows: "A deliverable-oriented grouping of project elements which organizes and defines the total scope of the project. Each descending level represents an increasingly detailed definition of a project component. Project components may be products or services" (PMI 1996).
2. MIL-STD-881A defines the WBS as "a product-oriented family tree composed of hardware, services and data which result from project engineering efforts during the development and production of a defense material item, and which completely defines the project/program. A WBS displays and defines the product(s) to be developed or produced and relates the elements of work to be accomplished to each other and to the end product" (U.S. Department of Defense 1975).

Whatever definition is used, the WBS is a hierarchical structure in which the top level represents the total work content of the project, while at the lowest level there are work elements or components. By allocating the lower-level elements to the participating individuals and organizations, a clear definition of responsibility is created. The WBS is the tool with which division of labor is defined. It should be comprehensive (that is, cover all the work content of the project) and logical (to allow clear allocation of work to the participating individuals and organizations as well as integration of the deliverables produced by the participants into the project-required deliverables).
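A WBS of this kind is naturally represented as a tree whose leaves are the work elements. The sketch below is illustrative only; the element names and effort figures are assumptions, not taken from the text:

```python
from dataclasses import dataclass, field

@dataclass
class WBSElement:
    """A node in the work breakdown structure; leaves are work packages."""
    name: str
    hours: float = 0.0                      # estimated effort, for leaf elements
    children: list["WBSElement"] = field(default_factory=list)

    def total_hours(self) -> float:
        """Roll effort up the hierarchy: a parent's scope is the sum of its children."""
        if not self.children:
            return self.hours
        return sum(child.total_hours() for child in self.children)

    def work_packages(self) -> list["WBSElement"]:
        """The lowest-level elements, to be allocated to individuals and organizations."""
        if not self.children:
            return [self]
        return [wp for child in self.children for wp in child.work_packages()]

# Hypothetical two-level WBS for illustration.
project = WBSElement("Expansion Project", children=[
    WBSElement("Facilities", children=[
        WBSElement("Land Acquisition", hours=120),
        WBSElement("Building", hours=800),
    ]),
    WBSElement("Equipment", children=[
        WBSElement("Purchasing", hours=60),
        WBSElement("Installation", hours=200),
    ]),
])
```

The top level of the tree carries the total work content, while `work_packages()` returns exactly the elements that must each be assigned to a participant.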
2.4. Coordination and Integration

Division of labor is required whenever the work content of the project exceeds what a single person can complete within the required time frame or when there is no single person who can master all
data, output from other work packages, etc.)
- Required resources to perform the work package
- Cost of performing the work package
- Tests and design reviews
When a work package is subcontracted, the definition is part of the contract. However, when the work package is performed internally, it is important to define the content of the work package as well as all the other points listed above to avoid misunderstanding and a gap in expectations between the performing organization and the project manager. A special tool called the responsibility assignment matrix (RAM) relates the project organization structure to the WBS to help ensure that each element in the project scope of work is assigned to an individual. As an example, consider a project in which six work packages, A, B, C, D, E, and F, are performed by an organization with three
departments, I, II, and III. Assuming that in addition to the project manager the three department heads, the CEO, and the controller are involved in the project, an example RAM follows:

                           Work Package
  Person                 A    B    C    D    E    F
  CEO                    S    R    R    R    R    R
  Controller             R    S    S    S    S    S
  Project manager        A    P    I    P    I    A
  Head Department I      P    A    A    P    A    I
  Head Department II     I         I    A    P
  Head Department III    P

Legend: P: Participant; A: Accountable; R: Review required; I: Input required; S: Sign-off required
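A RAM is easy to audit mechanically, for example to enforce the rule that every work package has exactly one accountable person. The sketch below is illustrative; the assignments and role names are assumptions, not prescribed by the text:

```python
# Responsibility assignment matrix: work package -> {role: code}.
# Codes: P participant, A accountable, R review required,
# I input required, S sign-off required. Assignments are illustrative.
ram = {
    "A": {"CEO": "S", "Controller": "R", "Project manager": "A",
          "Head Dept I": "P", "Head Dept II": "I", "Head Dept III": "P"},
    "B": {"CEO": "R", "Controller": "S", "Project manager": "P", "Head Dept I": "A"},
    "C": {"CEO": "R", "Controller": "S", "Project manager": "I",
          "Head Dept I": "A", "Head Dept II": "I"},
    "D": {"CEO": "R", "Controller": "S", "Project manager": "P",
          "Head Dept I": "P", "Head Dept II": "A"},
    "E": {"CEO": "R", "Controller": "S", "Project manager": "I",
          "Head Dept I": "A", "Head Dept II": "P"},
    "F": {"CEO": "R", "Controller": "S", "Project manager": "A", "Head Dept I": "I"},
}

def accountability_gaps(ram: dict, scope: list) -> list:
    """Work packages in the scope that do not have exactly one accountable (A) person."""
    return [wp for wp in scope
            if list(ram.get(wp, {}).values()).count("A") != 1]
```

Running the check over the whole project scope flags any work package that was never assigned (or was assigned to two accountable owners), which is precisely the gap the RAM exists to prevent.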
3.3. Authority

Along with the responsibility for on-time completion of the content of the work package and performance according to specifications, the work package managers must have proper authority. Authority may be defined as a legal or rightful power to command or act. A clear definition of authority is important for both project scope and product scope. For example, the work package managers may have the authority to delay noncritical activities within their slack but have no authority to delay critical activities; this is the authority of the project manager only. In a similar way, the work package manager may have the authority to approve changes that do not affect the product form, fit, or function, while approval of all other changes is by the project manager. The authority of work package managers may differ according to their seniority in the organization, the size of the work package they are responsible for, their geographical location, and whether they are from the same organization as the project manager. A clear definition of authority must accompany the allocation of work packages in the WBS to individuals and organizations.
3.4. Accountability

Accountability means assuming liability for something, either through a contract or by one's position of responsibility. The project manager is accountable for his own performance as well as the performance of the other individuals to whom he delegates responsibility and authority over specific work content: the managers of work packages. The integration of the WBS with the OBS through the responsibility assignment matrix (RAM) is a major tool that supports the mapping of responsibility, authority, and accountability in the project. To make the WBS an effective tool for project management, it should be properly designed and maintained throughout the project life cycle. The project's work content may be represented by different WBS models, and the decision of which one to select is an important factor affecting the probability of project success.
4. THE DESIGN OF A WORK BREAKDOWN STRUCTURE AND TYPES OF WORK BREAKDOWN STRUCTURES
4.1. Introduction

The WBS serves as the taxonomy of the project. It enables all the project stakeholders (customers, suppliers, the project team itself, and others) to communicate effectively throughout the life cycle of the project. For each project, one can design the WBS in several different ways, each emphasizing a particular point of view. However, different WBS patterns call for different organizational structures or management practices during the implementation of the project. Thus, the design of the WBS at the early stage of the project life cycle may have a significant impact on the project's success. Often the individuals who prepare the WBS are not aware of the crucial role they play in determining future coordination and understanding among the operational units who eventually execute the work
Breaking the work down by geography lends itself quite easily to the assignment of five plant managers, each responsible for the entire work required for establishing his plant. In a way, this amounts to breaking the project into five identical subprojects, each duplicating the activities undertaken by the others. Obviously, this will be the preferred mode when the circumstances (culture, language, type of government, legal system, etc.) are dramatically different in the different countries. This type of WBS will fit decentralized management practices in which local plant managers are empowered with full authority (and responsibility) for the activities relating to their respective plants.
Figure 1 WBS by Technology.
The WBS, in conjunction with the OBS, defines the way the project is to be managed. It relates each work activity to the corresponding organizational unit that is responsible for delivering the work. The WBS enables smooth communication among the project team members and between them and customers, suppliers, regulators, etc. The WBS serves as an archive that can later facilitate knowledge transfer to other projects or learning by new members of the workforce. The WBS is an effective tool for resource management.

Disadvantages:

- The WBS requires a significant amount of effort to build and maintain.
- The WBS encourages a rigid structure for the project. Thus, it reduces managerial flexibility to initiate and lead changes during the project life cycle.
Figure 4 WBS by Logistics (top level: Expansion Project; second level: Facilities, Equipment, Raw Materials; lower-level elements include Land Acquisition, Building, Infrastructure, Purchasing, Installation, Testing, Requirement Planning, and Supply Chain Design).
There are many legitimate ways to view a project, and occasionally, depending on
circumstances, one approach may be preferred to others. Yet the WBS forces the project
designer to choose one approach and remain with it throughout the project life
cycle.
5. WORK PACKAGES: DEFINITION AND USE

5.1. Introduction
At the lowest levels of the WBS and OBS, integration between the two hierarchical structures takes place. The assignment of specific work content (project scope) to a specific individual or organization creates the building blocks of the project-management framework: the work packages. In the following section, a detailed discussion of the definition and meaning of work packages is presented, along with a discussion of the cost accounts that accompany each work package, translating its work content into monetary values for the purposes of budgeting and cost control.
5.2. Definition of Work Packages

The PERT Coordinating Group (1962) defined a work package (WP) as "the work required to complete a specific job or process, such as a report, a design, a documentation requirement or portion thereof, a piece of hardware, or a service." PMI (1996) states: "[A] work package is a deliverable at the lowest level of the WBS." Unfortunately, there is no accepted definition of the WPs, nor an accepted approach to link them with other related structures (foremost among them the OBS). In most cases, the WPs are defined in an informal, intuitive manner and without proper feedback loops to verify their definition. One of the difficulties in defining the WPs is the trade-off between the level of detail used to describe the WPs and the managerial workload that is involved. On the one hand, one wishes to provide the project team with an unambiguous description of each work element to avoid unnecessary confusion, overlap, and so on. On the other hand, each WP requires a certain amount of planning, reporting, and control. Hence, as we increase the level of detail, we also increase the overhead in managing the project. To overcome this problem, some organizations set guidelines in terms of person-hours, dollar value, or elapsed time to assist WBS designers in sizing the WPs. These guidelines are typically set to cover a broad range of activities, and therefore they ignore the specific content of each WP. Hence, they should be applied with care and with appropriate adjustments in places where the work content requires them.

Planning the work by the WPs and structuring it through the WBS is closely related to another important planning activity: costing the project. By dividing the project into small, clearly defined activities (the WPs), we provide a better information basis for estimating the costs involved. For example, consider the activity of designing the FAB processes. It is much easier to estimate its components when they are considered separately (designing the silicon melting and cooling process, silicon cutting, grinding and smoothing, etc.). Furthermore, the separate components may require different cost-estimation procedures or expertise. Another consideration is related to the statistical nature of the cost-estimation errors. The estimation of the cost for each WP involves a random error that, assuming no particular bias, can be either positive or negative. As the cost estimates are aggregated up the WBS hierarchy, some of these errors cancel each other out and the relative size of the aggregated error decreases. This observation holds as long as there is no systematic bias in the estimation procedure. If such a bias exists (e.g., if all the time and cost estimates were inflated to protect against uncertainties), then further decomposition of the WPs may eventually have a negative effect on the overall cost estimate. In practice, in many scenarios there are limits to the precision that can be achieved in time and cost estimations. Beyond these limits, the errors remain constant (or may even grow). Thus, from the precision perspective, division into smaller WPs should be carried out as long as it improves the estimation accuracy, and not beyond that point.
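The cancellation effect described above can be illustrated numerically. In the sketch below, each WP estimate carries an independent, unbiased error, and the mean relative error of the project total falls roughly as one over the square root of the number of WPs; all figures are invented for the illustration:

```python
import random

def mean_relative_error(n_packages: int, mu: float = 100.0, sigma: float = 20.0,
                        trials: int = 20_000, seed: int = 1) -> float:
    """Monte Carlo estimate of the mean relative error of the summed cost of
    n work packages, each estimated with an independent N(mu, sigma) error."""
    rng = random.Random(seed)
    true_total = n_packages * mu
    total_err = 0.0
    for _ in range(trials):
        # Sum the noisy per-WP estimates up the hierarchy.
        estimate = sum(rng.gauss(mu, sigma) for _ in range(n_packages))
        total_err += abs(estimate - true_total) / true_total
    return total_err / trials

single = mean_relative_error(1)    # one large, undivided estimate
split = mean_relative_error(16)    # the same scope divided into 16 WPs
```

Dividing the scope into 16 equally sized WPs cuts the relative error of the total by a factor of about four (the square root of 16). The benefit disappears, as the text notes, if the individual errors share a systematic bias, since biases add rather than cancel.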
5.3. Definition of Cost Accounts

Cost accounts are a core instrument used in planning and managing the financial aspects of a project. Three fundamental processes depend on the cost accounts: costing individual activities and aggregating them to the project level for the purpose of preparing project cost estimates; budgeting the project; and controlling the expenses during project execution. The first issue, costing the project and its activities, requires the project planner to choose certain costing procedures as well as cost-classification techniques. Costing procedures range from traditional methods to state-of-the-art techniques. For example, absorption cost accounting, a traditional method that is still quite popular, relates all costs to a specific measure (e.g., absorbing all material, equipment, energy, and management cost into the cost of a person-hour) and costs new products or services by that measure. An example of a more advanced cost-accounting technique is activity-based costing (ABC), which separately analyzes each activity and measures its contribution to particular products or services.
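The difference between the two procedures can be made concrete with a small numeric sketch. All figures, work package names, and activity names below are invented for the illustration:

```python
# Two hypothetical work packages and one pool of indirect cost.
overhead = 9000.0
direct_hours = {"design WP": 100.0, "testing WP": 200.0}

# Absorption costing: absorb the whole overhead into one person-hour rate.
rate = overhead / sum(direct_hours.values())          # 30.0 per person-hour
absorption = {wp: h * rate for wp, h in direct_hours.items()}

# Activity-based costing: trace each overhead activity to the WPs that drive it.
# Each activity: (cost, share of the activity consumed by each WP).
activities = {
    "machine setup": (6000.0, {"design WP": 0.8, "testing WP": 0.2}),
    "inspection":    (3000.0, {"design WP": 0.1, "testing WP": 0.9}),
}
abc = {wp: sum(cost * shares[wp] for cost, shares in activities.values())
       for wp in direct_hours}
```

Both methods allocate the same 9,000 of overhead, but absorption costing charges the design WP 3,000 (its share of person-hours) while ABC charges it 5,100 (its actual consumption of setup work); a per-hour rate alone would let setup-heavy work be subsidized by the rest of the project.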
6.2. Equipment-Replacement Projects

Every technology-intensive firm is challenged from time to time with equipment-replacement projects. This type of project is especially common in the high-tech sector, where the frequency of such projects is now measured in months rather than years. The WBS presented in Figure 6 focuses at its second level on the division among activities related to the facility and its infrastructure (requiring civil engineering expertise), activities related to the equipment itself (requiring mechanical engineering expertise), and activities related to manpower (requiring human resource expertise). Unlike the previous example, the third level is not identical across the three entities of the second level. A greater level of detail is needed to describe the equipment-related activities, and so the corresponding WBS branch is more developed.
6.3. Military Projects

To demonstrate the wide-ranging applicability of the WBS concept, we close this section with an example of a military operation. An army unit (say, a brigade) is faced with a mission to capture a riverbank, construct a bridge, and secure an area (bridgehead) across the river, thus enabling the movement of a larger force in that direction. Figure 7 illustrates how this mission can be planned through WBS principles. The second level of the WBS is arranged by the major military functions
Figure 5 WBS for an R&D Project.
that are involved, and the third level is arranged by the life cycle (illustrated here with a basic distinction between all precombat activities and during-combat activities).
7. CHANGE CONTROL OF THE WORK BREAKDOWN STRUCTURE

7.1. Introduction
Projects are often done in a dynamic environment in which technology is constantly updated and advanced. In particular, projects in high-tech organizations go through several major changes and many minor changes during their life cycle. For example, the development of a new fighter plane may take over a decade. During this time, the aircraft goes through many changes as the technology that supports it changes rapidly. It is quite common to see tens of thousands of change proposals submitted during such projects, with thousands of them being implemented. Without effective control over this process, all such projects are doomed to chaos. Changing elements in the WBS (deleting or adding work packages or changing the contents of work packages) belongs to an area known as configuration management (CM). CM defines a set of procedures that help organizations maintain information on the functional and physical design characteristics of a system or project and support the control of its related activities. CM procedures are designed to enable keeping track of what has been done in the project until a certain time, what is being done at that time, and what is planned for the future. CM is designed to support management in evaluating proposed technological
Figure 6 WBS for an Equipment-Replacement Project (second level: Facility, Equipment, Personnel; lower-level activities include designing changes in layout and infrastructure, removing existing equipment and preparing the site, designing new equipment and changes in manufacturing processes, purchasing and assembling equipment components, installing and testing the new equipment, and selecting and training personnel).
Figure 7 WBS of a Military Operation.
changes. It relies on quality assurance techniques to ensure the integrity of the project (or product) and lends itself easily to concurrent engineering, where certain activities are done in parallel to shorten time-to-market and secure smooth transitions among the design, production, and distribution stages in the project life cycle. Change control involves three major procedures, which are outlined below.
7.2. Change Initiation

Changes can be initiated either within the project team or from outside sources. Either way, to keep track of such initiatives and enable organizational learning, a formal procedure for preparing and submitting a change request is necessary. A complete change request should include the following information:

- Pointers and identifiers: These will enable the information system to store and retrieve the data in the request. Typical data are request i.d.; date; name of originator; project i.d.; configuration item affected (e.g., work package i.d., part number i.d.).
- Description of change: A technical description (textual, diagrammatic, etc.) that provides a full explanation of the content of the proposed change, the motivation for the proposal, the type of change (temporary or permanent), and suggested priority.
- Effects: A detailed list of all possible areas that might be affected (cost, schedule, quality, etc.), along with the estimated extent of the effect.
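As a sketch, the fields listed above map naturally onto a small record type. All field names and sample values below are illustrative assumptions, not prescribed by the text:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ChangeRequest:
    """A change request mirroring the contents listed above (names illustrative)."""
    request_id: str
    originator: str
    submitted: date
    project_id: str
    affected_items: list[str]            # e.g., work package / part number i.d.s
    description: str                     # content of and motivation for the change
    change_type: str                     # "temporary" or "permanent"
    priority: str
    effects: dict[str, str] = field(default_factory=dict)  # area -> estimated effect

cr = ChangeRequest(
    request_id="CR-042", originator="J. Doe", submitted=date(2001, 3, 14),
    project_id="EXP-1", affected_items=["WP-B"],
    description="Replace coating process to meet new supplier spec.",
    change_type="permanent", priority="high",
    effects={"cost": "+2%", "schedule": "+1 week"},
)
```

Storing requests in this structured form is what makes the later steps possible: the change control board can query, prioritize, and archive them rather than re-reading free-form memos.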
7.3. Change Approval

Change requests are forwarded to a team of experts who are capable of evaluating and prioritizing them. This team, often known as the change control board (CCB), evaluates each proposed change, taking into account its estimated effects on the various dimensions that are involved. Foremost among these criteria are the cost, schedule, and quality (or performance) issues. However, other criteria, such as contractual agreements and environmental (or collateral) effects, are also considered. The review is done in both absolute and relative terms. Thus, a proposed change may be rejected, even if it leads to overall improvement in all relevant areas, if there are other changes that promise even better effects. If the board approves a change, it needs to specify whether the change is temporary or permanent. Examples of temporary changes are construction of a partial pilot product for the purpose of running some tests that were not planned when the project was launched, releasing an early version of software to a beta site to gain more insights on its performance, and so on. It can be expected that approval of temporary changes will be obtained more easily and in shorter time spans than approval of permanent changes. The CCB is responsible for accumulating and storing all the information on the change requests and the outcomes of the approval process. Maintaining a database that contains all the relevant information on the changes usually facilitates this process. The database should enable easy access for future queries, thus facilitating continuous organizational learning.
7.4. Change Implementation
Changes that were approved, on either a temporary or a permanent basis, are to b
tion models enhances the ability of organizations to compete in cost, time, and
performance.
8.4. The Learning Organization

The transfer of information within or between projects, or the retrieval of information from past projects, provides an environment that supports the learning organization. However, in addition to these mechanisms, a system that seeks continuous improvement from project to project is required. This can be done if the life cycle of each project is examined at its final stage and conclusions are drawn regarding the pros and cons of the management tools and practices used. Based on a thorough investigation of each project at its completion, current practices can be modified and improved, new practices can be added, and, most importantly, a new body of knowledge can be created. This body of knowledge can, in turn, serve as a basis for teaching and training new project managers in how to manage a project right.
REFERENCES

PERT Coordinating Group (1962), DoD and NASA Guide: PERT Cost Systems Design.
Project Management Institute (PMI) (1996), A Guide to the Project Management Body of Knowledge, PMI, Upper Darby, PA.
U.S. Department of Defense (1975), Work Breakdown Structures for Defense Materiel Items, MIL-STD-881A, U.S. Department of Defense, Washington, DC.
ADDITIONAL READING

Globerson, S., "Impact of Various Work Breakdown Structures on Project Conceptualization," International Journal of Project Management, Vol. 12, No. 3, 1994, pp. 165-179.
Raz, T., and Globerson, S., "Effective Sizing and Content Definition of Work Packages," Project Management Journal, Vol. 29, No. 4, 1998, pp. 17-23.
Shtub, A., Bard, J., and Globerson, S., Project Management: Engineering, Technology and Implementation, Prentice Hall, Englewood Cliffs, NJ, 1994.
APPENDIX Table of Contents for a SOW document
Quality assurance and reliability
  Quality assurance plan
  Quality procedures
  Quality and reliability analysis
  Quality reports
  Quality control and reviews
Documentation
  Operational requirements
  Technical specifications
  Engineering reports
  Testing procedures
  Test results
  Product documentation
  Production documentation
  Drawings
  Definition of interfaces
  Operation, maintenance, and installation instructions
  Software documentation (in the software itself and in its accompanying documents)
Warranty
Customer-furnished equipment
IV.B
Product Planning
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
Figure 1 Product Life Cycles and Development Times.
New, powerful CAD technologies make it possible to check design varieties in real time employing virtual reality tools. The use of virtual prototypes, especially in the early phases of product development, enables time- and cost-efficient decision making.
1.3. Communication Technologies

ATM (asynchronous transfer mode) networks (Ginsburg 1996) and gigabit ethernet networking (Quinn and Russell 1999) enable a quick and safe exchange of relevant data and thus support the development process tremendously. The Internet provides access to relevant information from all over the world in no time, such as via the World Wide Web or e-mail messages. Communication and cooperation are further supported by CSCW tools (Bullinger et al. 1996) like videoconferencing and e-mail. The distributed teams need technical support for the development of a product to enable synchronous and asynchronous interactions. Furthermore, the Internet provides a platform for the interaction of distributed experts within the framework of virtual enterprises. The technologies used support the consciousness of working temporarily in a network, which includes, for example, the possibility of accessing the same files. All these new technologies have been the focus of scientific and industrial interest for quite a while now. However, the understanding of how these new technologies can be integrated into one continuous process chain has been neglected. Combining these technologies effectively enables the product-development process to be shortened decisively. Rapid product development (RPD) is a holistic concept that describes a rapid development process achieved mainly by combining and integrating innovative prototyping technologies as well as modern CSCW (computer-supported cooperative work) tools. The objectives of the new concept of RPD are to:

- Shorten the time-to-market (from the first sketch to market launch)
- Develop innovative products by optimizing the factors of time, cost, and quality
- Increase quality from the point of view of the principle of completeness
2. CHARACTERISTICS OF RAPID PRODUCT DEVELOPMENT

2.1. The Life Cycle
Simultaneous engineering (SE) considers the complete development process and thus carries out the planning as a whole. RPD, on the other hand, considers single tasks and the respective expert team responsible for each task. SE sets up the framework within which RPD organizes the rapid, result-oriented performance of functional activities. The mere application of SE organization on the functional level leads to a disproportionate coordination expenditure. The overall RPD approach is based on the idea of an evolutionary design cycle (Bullinger et al. 1996). In contrast to traditional approaches with defined design phases and respective documents, such as specification lists or concept matrices, the different design phases are result oriented.
[Table: characteristics of SE vs. RPD compared along data integration, communication and coordination media, documents and their definition, data management, and learning/experiences, contrasting, e.g., unique approval by a responsible source with continuous testing and redefinition of concepts according to project progress, and standardized product and process data (STEP).]
1286
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
In the early phases of product development, the decisions that are relevant for the creation of value are made. The organization needs to be flexible enough to provide appropriate competencies for decisions and responsibilities.
2.3. The Process

RPD concentrates on the early phases of product development. SE has already achieved the reduction of production times. RPD now aims to increase the variety of prototypes through an evolutionary, iterative process in order to enable comprehensive statements to be made about the product. The interdisciplinary teams work together from the start. The key factors here are good communication and coordination among everyone involved in the process. Thus, the time for finding a solution to the following problems is reduced:

- There are no customer demands. Therefore, without any concrete forms, many solutions are possible, but not the solution that is sought.
- The potential of new or alternative technologies results from the integration of experts, whose knowledge can influence the whole process.
- The changing basic conditions of the development in the course of the process make changes necessary in already finished task areas (e.g., risk estimation of market and technology).

These possible basic conditions have to be integrated into the RPD process to reduce the time and costs of the whole process. The application of processes determines the product development and its effectiveness and efficiency. A product data generation process and a management process can be distinguished. Hence, it is important for the SE as well as the RPD approach to achieve a process orientation in which both the product data generation and management processes are aligned along the value chain. In a traditional SE approach, innovation results from an initial product concept and product specification, whereas the RPD concept is checked and redefined according to the project progress. RPD therefore makes it possible to integrate new technologies, market trends, and other factors for a much longer period. Thus, it leads to highly innovative products. Design iterations are a desirable and therefore promoted element of RPD. The change of design concepts and specifications is supported by a fitting framework, including the testing and, most importantly, the evaluation of the design for further improvement.
2.4.
The Human and Technical Resources
Common SE approaches are based on standardized and static product data integration, whereas RPD requires dynamic data management in semantic networks in order to enable short control cycles. Short paths and multidisciplinary teams for quick decisions are essential for both approaches. Moreover, RPD requires team-oriented communication systems, which open up new ways of cooperation. They need to offer support not only for management decisions, but also for decision making during the generation of product data. In RPD, the people and machines involved are of great importance. The people involved need free space for development within the framework of the evolutionary concept, as well as the will to use the possibilities for cooperation with other colleagues. This means a break with the Taylorized development process. The employees need to be aware that they are taking part in a continually changing process. The technical resources, especially machines with hardware and software for the production of digital and physical prototypes, have to meet requirements on the usability of data with unclear features regarding parameters. They have to be able to build first prototypes without detailed construction data. The quality of the statements that can be made by means of the prototypes depends on how concrete or detailed they are. For optimal cooperation of the single technologies, it is important to use data that can be read by all of them.
2.5.
The Product
The results of the product-development process are the documents of the generated product, such as product models, calculations, certificates, plans, and bills of materials, as well as the respective documents of the process, such as drawings of machine tools, process plans, and work plans. The aim of all documentation is to support information management. Documentation focusing on product and process data guarantees project transparency for all the persons involved. The standardization of the whole product data is a basic prerequisite for evolutionary and phase-oriented approaches. STEP (Standard for the Exchange of Product Model Data), probably the most promising attempt to standardize product data and application interfaces, offers applicable solutions for quite a few application fields, such as automotive and electronic design, rapid prototyping, and shipbuilding.
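As an illustration of the kind of neutral exchange format referred to here, the sketch below shows the skeleton of a STEP physical file (ISO 10303-21). The entity names and values are illustrative placeholders, not taken from this chapter; a real file references a specific application protocol schema (e.g., AP214 for automotive design).

```text
ISO-10303-21;
HEADER;
FILE_DESCRIPTION(('illustrative part model'), '2;1');
FILE_NAME('bracket.stp', '2000-01-01T12:00:00', ('author'), ('organization'), '', '', '');
FILE_SCHEMA(('AUTOMOTIVE_DESIGN'));
ENDSEC;
DATA;
#1 = CARTESIAN_POINT('origin', (0.0, 0.0, 0.0));
#2 = DIRECTION('z-axis', (0.0, 0.0, 1.0));
#3 = AXIS2_PLACEMENT_3D('placement', #1, #2, $);
ENDSEC;
END-ISO-10303-21;
```

Because every entity is numbered (#1, #2, ...) and typed against a published schema, CAD, rapid prototyping, and simulation tools from different vendors can read the same product data, which is the interoperability the chapter requires.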
Documents reflecting parts of the complete product data generated, such as specifications, bills of materials, and process data, represent an important difference between SE and RPD. Whereas in SE, documents are synchronized at certain times (e.g., milestones), the RPD process documents are subject to […] finding advantages and disadvantages of each plan. Additionally, the planner is not limited to one plan.
- Failures within obscure and complex structures typical of RPD are detected rather than anticipated.
- Since sensitivity for the different figures increases, there is a support mechanism with regard to the knowledge about the situation.
- The evaluation of plans based on quantifiable […] of plans.
[…] with several physical prototypes before the end product can be produced. This means that employing the DMU considerably reduces the time-to-market. The DMU platform also offers the possibility for a technical integration of product conception, design, construction, and packaging. Digital prototyping offers enormous advantages to many different applications, such as aircraft construction, shipbuilding, and the motor industry. Fields of application for digital prototyping in car manufacturing are, for example:

- Evaluation of components by visualization
- Evaluation of design variations
- Estimation of the surface quality of the car body
- Evaluation of the car's interior
- Ergonomic evaluation with the aid of virtual reality

To sum up, creating physical or virtual prototypes of the entire system is of utmost importance, especially in the early phases of the product-development process. The extensive use of prototypes provides a structure, a discipline, and an approach that increases the rate of learning and integration within the development process.
3.4.
The Engineering Solution Center
The use of recent information and communication technology, interdisciplinary teamwork, and an effective network is essential for the shortening of development times, as we have demonstrated. The prerequisites for effective cooperative work are continuous, computer-supported process chains and new visualization techniques. In the engineering solution center (ESC), recent methods and technologies are integrated into a continuous process chain, which includes all phases of product development, from the first CAD draft to the selection and fabrication of suitable prototypes to the test phase. The ESC is equipped with all the necessary technology for fast and cost-efficient development of innovative products. Tools like the Internet, CAD, and FEM simulations are integrated into the continuous flow of data. Computer-based information and communication technologies that effectively support cooperative engineering are integrated into already existing engineering systems (CAD, knowledge management, databases, etc.). Thus, the engineering solution center offers, for example, the complete set of tools necessary for producing a DMU. A particular advantage here is that these tools are already combined into a continuous process chain. All respective systems are installed, and the required interfaces already exist. An important part of the ESC is the power wall, a recent, very effective, and cost-efficient visualization technology. It offers the possibility to project 3D CAD models and virtual prototypes onto a huge canvas. An unlimited number of persons can view the 3D scene simultaneously. The power wall is a cost-efficient entrance into large 3D presentations because it consists of only one canvas. Another essential component of the ESC is the engineering/product-data management (EDM/PDM) system. The EDM encompasses the holistic, structured, and consistent management of all processes and the whole data involved in the development of innovative products, or the modification […]
Figure 3 Digital Prototyping in the Product-Development Process.
Another distinction to be made regarding communication is that between the synchronous exchange of knowledge, where partners communicate at the same time, and the asynchronous exchange, where the transmission and the reception of the message do not happen simultaneously, such as the sending of documents via e-mail. In most cases of cooperation, exactly this relation is found: implicit knowledge is exchanged synchronously via phone or face to face, and explicit knowledge is exchanged asynchronously via documents. As a consequence, implicit knowledge remains undocumented and explicit knowledge is not annotated. This is a great obstacle to the rapid reception of knowledge. Support here is offered by telecooperation systems, which allow communication from computer to computer. Besides documents, pictures, sketches, CAD models, and videos, speech can also be exchanged. This way, documents, pictures, and so on can be explained and annotated. By using special input devices, it is possible to incorporate handwritten notes or sketches. This facilitates the documentation of implicit knowledge. These systems have a further advantage for the integration of knowledge: learning theory tells us that the reception of new knowledge is easier if several input channels of the learner are occupied simultaneously. Knowledge-intensive cooperation processes need coordination that is fundamentally different from conventional regulations and control mechanisms. A particular feature of knowledge-intensive processes is that they can hardly be planned. It is impossible to know anything about future knowledge; it is not available today. The more knowledge advances in a project, the higher the probability that previous knowledge will be replaced by new knowledge. Gaining new knowledge may make a former agreement obsolete for one partner. As a consequence, it will sometimes be necessary to give up the previous procedure, with its fixed milestones, work packages, and report cycles, after a short time. The five modules of RPD can be of great help here:

1. Plan and conceive
2. Design
3. Prototyping
4. Check
5. Evaluate
However, they do not proceed sequentially, as in traditional models. A complete product is not in a certain, exactly defined development phase. These development projects have an interlocked, networked structure of activities. Single states are determined by occurrences, like test results, which are often caused by external sources, sometimes even in different companies. Instead of a sequential procedure, an iterative, evolutionary process is initiated. But this can function only if the framework for communication is established as described above. Traditional product-development processes aspire to a decrease in knowledge growth with proceeding development time. According to the motto "Do it right from the start," one aim is usually to minimize supplementary modifications. New knowledge is not appreciated; it might provoke modifications of the original concept. RPD, on the other hand, is very much knowledge oriented. Here, the process is kept open for new ideas and changes, such as customer demands and technical improvements. This necessitates a different way of thinking and alternative processes. Knowledge-integrated processes are designed according to standards different from those usually applied to business process reengineering. The aim is not slimming at all costs, but rather the robustness of the process. The motto here is: "If the knowledge currently available to me is not enough to reach my target, I have enough time for the necessary qualification and I have the appropriate information technology at my disposal to fill the gap in my knowledge." The knowledge-management process has to be considered a direct component of the value-added process. According to Probst et al. (1997), the knowledge-management process consists of the following steps: setting of knowledge targets, identification of knowledge, knowledge acquisition, knowledge development, distribution of knowledge, use of knowledge, and preservation and evaluation of knowledge. For each of these knowledge process modules, the three fields of organization, human resource management, and information technology are considered. After recording and evaluating the actual state, the modules are optimized with regard to these fields. The number of iterations is influenced by the mutual dependencies of the modules (Prieto et al. 1999). From the experience already gained in R&D cooperation projects, the following conclusion can be drawn: if the knowledge-management modules are integrated into the development process and supported by a suitable IT infrastructure, the exchange of knowledge between team members becomes a customer-supplier relationship, as in the value-added process itself. This enables effective and efficient coordination of the project.
For distributed, interdisciplinary teams it is of great significance that the different persons involved in a project base their work on a common knowledge basis. The cooperating experts have to be able to acquire a basic knowledge of their partners' work contents, processes, and way of thinking in only a short time. Function models and design decisions cannot be communicated without a common basis. Knowledge integration is therefore the basis of communication and coordination in a cooperation. Without a common understanding of the most important terms and their context, it is not possible to transport knowledge to the partners. As a consequence, a coordination of common activities becomes impossible. Growing knowledge integration takes time and generates costs. On the other hand, it improves the cooperation because fewer mistakes are made and results become increasingly optimal in a holistic sense. A particular feature of knowledge integration in R&D projects is its dynamic and variable character, due to turbulent markets and highly dynamic technological developments. Experience gained in the field of teamwork has made clear that the first phase of a freshly formed (project) team has to be devoted to knowledge integration, for the reasons mentioned above. The task of knowledge integration is to systematize knowledge about artifacts, development processes, and cooperation partners, as well as the respective communication and coordination tools, and to make them available to the cooperation partners. The significance of knowledge integration will probably increase if the artifact is new, as in the development of a product with a new functionality or, more generally, a highly innovative product. If the project partners have only a small intersection of project-specific knowledge, the integration of knowledge is also very important because it necessitates a dynamic process of building knowledge. To find creative solutions, it is not enough to know the technical vocabulary of the other experts involved. Misunderstandings are often considered to be communication problems, but they can mostly be explained by the difficult process of knowledge integration.

Which knowledge has to be integrated? First, the knowledge of the different subject areas. Between cooperating subject areas, it is often not enough simply to mediate facts. In this context, four levels of knowledge have to be taken into consideration. In ascending order, they describe an increasing comprehension of coherence within a subject area:

1. Knowledge of facts ("know-what") forms the basis for the ability to master a subject area. This knowledge primarily reflects the level of acquiring book knowledge.
2. Process knowledge ("know-how") is gained by the expert through the daily application of his knowledge. Here, the transfer of his theoretical knowledge takes place. To enable an exchange on this level, it is necessary to establish space for experiences, which allows the experts to learn together or from one another.
3. The level of system understanding ("know-why") deals with the recognition of causal and systemic cohesion between activities and their cause-and-effect chains. This knowledge enables the expert to solve more complex problems that extend into other subject areas.
4. Ultimately, the level of creative action on one's own initiative, the "care-why," has to be regarded (e.g., motivation). The linkage of the care-why demands a high intersection of personal interests and targets.

Many approaches to knowledge integration concentrate mainly on the second level. Transferred to knowledge integration in an R&D cooperation, this means that it is not enough to match the know-what knowledge. The additional partial integration of the know-how and the know-why is in most cases enough for the successful execution of single operative activities. The success of the whole project, however, demands the integration of the topmost level, the care-why knowledge. A precondition here is common interests between the project partners. This is also a major difference between internal cooperation and cooperation with external partners. In transferring internal procedures to projects with external partners, the importance of the care-why is often neglected because within a company the interests of the cooperation partners do not differ very much; it is one company, after all (Warschat and Ganz 2000). The integration process of the four levels of knowledge described above is based on the exchange of knowledge between project partners. Some knowledge is documented as CAD models or sketches, reports, and calculations; this is explicit knowledge. A great share of knowledge, however, is not documented and is based on experience; it is implicit. This model, developed by Nonaka and Takeuchi (1997), describes the common exchange of knowledge as it occurs in knowledge management.
5.
SUMMARY AND PERSPECTIVE
The RPD concept is based fundamentally on the early and intensive cooperation of
experts from different disciplines. This concept therefore makes it possible to
bring together the various sources
of expert knowledge in the early phases of product development. Thus, all available sources of information can be used right from the beginning. The initial incomplete knowledge is incrementally completed by diverse experts. Cooperation within and between the autonomous multifunctional teams is of great importance here. The selection and use of suitable information and communication technology are indispensable. Information exchange is considerably determined by the local and temporal situation of the cooperation partners. If the cooperating team members are situated at one place, ordinary, natural communication is possible and sensible. Nevertheless, technical support and electronic documentation might still be helpful. If cooperation partners are located at different places, technical support is indispensable. For this, CSCW and CMC (computer-mediated communication) tools are applied, such as shared whiteboard applications, chat boxes, electronic meeting rooms, and audio/videoconferencing. The currently existing systems make it possible to bridge local barriers. However, they neglect the requirements of face-to-face communication and cooperation. For instance, it is necessary to establish appropriate local and temporal relations among team members. The communication architecture, therefore, should enable the modeling of direct and indirect interactions between individuals. Because of the dynamics of the development process, these relations change. The system should therefore possess sufficient flexibility to keep track of the modifications. Furthermore, the communication basis should be able to represent information not as isolated, but in its relevant context. During product development, especially within creative sectors, frequent and rather short ad hoc sessions are preferred. This form of spontaneous information exchange between decentralized development teams requires computer-mediated communication and cooperation techniques, which permit a faster approach and lead to closer cooperation between experts. This results in a harmonized product development, which maintains the autonomy of decentralized teams.

Along with the short iteration cycles, the interdisciplinary teams are an essential feature of the RPD concept. They operate autonomously and are directly responsible for their respective tasks. Additionally, the increasing complexity of products and processes requires early collaboration and coordination. Thus, it is necessary to make knowledge of technology, design, process, quality, and costs available to anyone involved in the development process. Conventional databases are not sufficient for an adequate representation of the relevant product and process knowledge. On the one hand, current systems do not consider the dynamics of the development process sufficiently. On the other hand, it is not possible to assess the consequences of one's definition. However, this is a fundamental prerequisite for effective cooperation. To cope with the given requirements, it is necessary to represent the knowledge in the form of an active semantic network (ASN). This is characterized by active, independent objects within a connected structure, which enables the modeling of cause-and-effect relations. The objects in this network are not passive but react automatically to modifications. This provides the possibility of an active and automatic distribution of modifications throughout the whole network. In contrast to conventional systems, the ASN contains, in addition to causal connections, representations of methods, communication, and cooperation structures as well as the knowledge required to select the suitable manufacturing technique. Furthermore, negative and positive knowledge (rejected and followed-up alternatives) is stored therein. These acquired perceptions will support the current and future development process. The ASN should exhibit the following functions and characteristics:

- Online dialog capability
- Dynamics
- Robustness
- Version management
- Transparency

All in all, the ASN makes it possible to represent and to manage the design, quality, and cost knowledge together with the know-how of technologies and process planning in the form of the dynamic chains of cause and effect explained here. Thus, the ASN forms the basis for the concept of RPD.
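The propagation behavior of such an active semantic network can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (a plain observer-style dependency graph; all class, method, and node names are invented for this sketch), not the ASN implementation the chapter describes.

```python
# Sketch of an active semantic network (ASN): nodes hold product/process
# facts, directed edges model cause-and-effect relations, and a change to
# one node is pushed automatically to every dependent node.

class Node:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.dependents = []   # nodes affected when this node changes
        self.log = []          # change notifications received so far

    def influences(self, other):
        """Declare a cause-and-effect edge: self -> other."""
        self.dependents.append(other)

    def set_value(self, value):
        self.value = value
        self._notify(origin=self.name, seen={self})

    def _notify(self, origin, seen):
        # Active behavior: propagate the modification through the network,
        # guarding against cycles with the 'seen' set.
        for dep in self.dependents:
            if dep not in seen:
                seen.add(dep)
                dep.log.append(f"changed by {origin}")
                dep._notify(origin, seen)


# Example: a change to the material specification reaches both the process
# plan and the cost estimate without any explicit polling by those nodes.
material = Node("material")
process_plan = Node("process_plan")
cost = Node("cost_estimate")
material.influences(process_plan)
process_plan.influences(cost)

material.set_value("aluminium 6061")
print(process_plan.log)  # ["changed by material"]
print(cost.log)          # ["changed by material"]
```

The point of the sketch is the contrast with conventional databases: here the objects react to a modification instead of waiting to be queried, which is what makes an active and automatic distribution of changes through the whole network possible.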
REFERENCES
Allen, J. F. (1991), "Temporal Reasoning and Planning," in Temporal Reasoning and Planning, M. B. Morgan and Y. Overton, Eds., Morgan Kaufmann, San Francisco, pp. 168.
Bullinger, H.-J., and Warschat, J. (1996), Concurrent Simultaneous Engineering Systems: The Way to Successful Product Development, Springer, Berlin.
Bullinger, H.-J., Warschat, J., and Wörner, K. (1996), "Management of Complex Projects as Cooperative Task," in Proceedings of the 5th International Conference on Human Aspects of Advanced
This chapter presents an overall framework and systematic methodology for pursuing the above three objectives of human-centered design. There are four design issues of particular concern within this framework. The first concern is formulating the right problem: making sure that system objectives and requirements are right. All too often, these issues are dealt with much too quickly. There is a natural tendency to "get on with it," which can have enormous negative consequences when requirements are later found to be inadequate or inappropriate. The second issue is designing an appropriate solution. Not all well-engineered solutions are appropriate. Considering the three objectives of human-centered design, as well as the broader context within which systems typically operate, it is apparent that the excellence of the technical attributes of a design is necessary but not sufficient to ensure that the system design is appropriate and successful. Given the right problem and an appropriate solution, the next concern is developing it to perform well. Performance attributes should include operability, maintainability, and supportability; that is, using it, fixing it, and supplying it. Supportability includes spare parts, fuel, and, most importantly, trained personnel. The fourth design concern is ensuring human satisfaction. Success depends on people using the system and achieving the benefits for which it was designed. However, before a system can be used, it must be purchased, usually by other stakeholders, which in turn depends on its being technically approved by yet other stakeholders. Thus, a variety of types of people have to be satisfied.
4.
DESIGN METHODOLOGY
Concepts such as user-centered design, user-friendly systems, and ergonomically
designed systems have been around for quite some time. Virtually everybody endor
ses these ideas, but very few people know what to do in order to realize the pot
ential of these concepts. What is needed, and what this chapter presents, is a m
ethodological framework within which human-centered design objectives can be sys
tematically and naturally pursued.
4.1.
Design and Measurement
What do successful products and systems have in common? The fact that people buy and use them is certainly a common attribute. However, sales are not a very useful measure for designers. In particular, using lack of sales as a way to uncover poor design choices is akin to using airplane crashes as a method of identifying design flaws: the method works, but the feedback comes a bit late. The question, therefore, is one of determining what can be measured early that is indicative of subsequent poor sales. In other words, what can be measured early to find out whether the product or system is unlikely to fly? If this can be done early, it should be possible to change the characteristics of the product or system so as to avoid the predicted failure. This section focuses on the issues that must be addressed and resolved for the design of a new product or system to be successful. Seven fundamental measurement issues are discussed, and a framework for systematically addressing these issues is presented. This framework provides the structure within which the remainder of this chapter is organized and presented.
4.2.
Measurement Issues
Figure 1 presents seven measurement issues that underlie successful design (Rouse 1987). The natural ordering of these issues depends on one's perspective. From a nuts-and-bolts engineering point of view, one might first worry about testing (i.e., getting the system to work) and save issues such as viability until much later. In contrast, most stakeholders are usually first concerned with viability and only worry about issues such as testing if problems emerge. A central element of human-centered design is that designers should address the issues in Figure 1 in the same order that stakeholders address them. Thus, the last concern is, "Does it run?" The first concern is, "What matters?" or, "What constitutes benefits and costs?"
Figure 1 Measurement Issues.
4.2.7.
Testing
Does the system run, compute, and so on? This is a standard engineering question
. It involves issues of physical measurement and instrumentation for hardware, a
nd runtime inspection and debugging tools for software.
4.3.
A Framework for Measurement
The discussion thus far has emphasized the diversity of measurement issues from the perspectives of both designers and stakeholders. If each of these issues were pursued independently, as if they were ends in themselves, the costs of measurement would be untenable. Yet each issue is important and should not be neglected. What is needed, therefore, is an overall approach to measurement that balances the allocation of resources among the issues of concern at each stage of design. Such an approach should also integrate intermediate measurement results in a way that provides maximal benefit to the evolution of the design product. These goals can be accomplished by viewing measurement as a process involving the four phases shown in Figure 2.
4.3.1.
Naturalist Phase
This phase involves understanding the domains and tasks of stakeholders from the perspective of individuals, the organization, and the environment. This understanding includes not only people's activities, but also prevalent values and attitudes relative to productivity, technology, and change in general. Evaluative assessments of interest include identification of difficult and easy aspects of tasks, barriers to and potential avenues of improvement, and the relative leverage of the various stakeholders in the organization.
[Figure residue: the four measurement phases (naturalist, marketing, engineering, sales and service) shown in parallel with technology feasibility, technology development, and technology refinement.]
Figure 2 A Framework for Measurement.
4.3.2.
Marketing Phase
Once one understands the domain and tasks of current and potential stakeholders, one is in a position to conceptualize alternative products or systems to support these people. Product concepts can be used for initial marketing in the sense of determining how users react to the concepts. Stakeholders' reactions are needed relative to validity, acceptability, and viability. In other words, one wants to determine whether or not people perceive a product concept as solving an important problem, solving it in an acceptable way, and solving it at a reasonable cost.
4.3.3.
Engineering Phase
One now is in a position to begin trade-offs between desired conceptual functionality and technological reality. As indicated in Figure 2, technology development will usually have been pursued prior to and in parallel with the naturalist and marketing phases. This will have at least partially ensured that the product concepts shown to stakeholders were not technologically or economically ridiculous. However, one now must be very specific about how desired functionality is to be provided, what performance is possible, and the time and dollars necessary to provide it.
4.3.4.
Sales and Service Phase
As this phase begins, the product should have been successfully tested, verified, demonstrated, and evaluated. From a measurement point of view, the focus is now on validity, acceptability, and viability. It is also at this point that one ensures that implementation conditions are consistent with the assumptions underlying the design basis of the product or system.
4.3.5.
The Role of Technology
[…] seldom true, because there are actually extremely few products and systems that are designed from scratch. Even when designing the initial spacecraft, much was drawn from previous experience with aircraft and submarines.
5.2.
Methods and Tools for Measurement
How does one identify stakeholders, and in particular, how does one determine th
eir needs, preferences, values, and so on? Observation is, of course, the necess
ary means. Initially, unstructured direct observations may be appropriate. Event
ually, however, more formal means should be employed to assure unbiased, convinc
ing results. Table 2 lists the methods and tools appropriate for answering these
types of questions.
5.2.1.
Magazines and Newspapers
To gain an initial perspective on what is important to a particular class of stakeholders or a particular industry, one should read what they read. Trade magazines and industry newspapers publish what interests their readers. One can capitalize on publishers' insights and knowledge by studying articles for issues and concerns. For example, is cost or performance more important? Is risk assessment, or its equivalent, mentioned frequently? One should pay particular attention to advertisements, because advertisers invest heavily in trying to understand customers' needs, worries, and preferences. One can capitalize on advertisers' investments by studying the underlying messages and appeals in advertisements. It is useful to create a file of articles, advertisements, brochures, and catalogs that appear to characterize the stakeholders of interest. Many of these types of materials can also be found on Internet websites. The contents of this file can be slowly accumulated over a period of many months before it is needed. This accumulation might be initiated in light of long-term plans to move in new directions. When these long-term plans become short-term plans, this file can be accessed, the various items juxtaposed, and an initial impression formed relatively quickly. For information available via Internet websites, this file can be readily updated when it is needed.
5.2.2.
Databases
Many relatively inexpensive sources of information about stakeholders are available via online databases. This is especially true for Internet-based information sources. With these sources, a wide variety of questions can be answered. How large is the population of stakeholders? How are they distributed, organizationally and geographically? What is the size of their incomes? How do they spend them? Such databases are also likely to have information on the companies whose advertisements were identified in magazines and newspapers. What are their sales and profits? What are the patterns of growth? Many companies make such information readily available on their Internet websites. By pursuing these questions, one may be able to find characteristics of the advertisements of interest that discriminate between good and poor sales growth and profits. Such characteristics might include leading-edge technology, low cost, and/or good service.
5.2.3.
Questionnaires
Once magazines, newspapers, and databases are exhausted as sources of information, attention should shift to seeking more specific and targeted information. An inexpensive approach is to mail, e-mail, or otherwise distribute questionnaires to potential stakeholders to assess how they spend their time, what they perceive as their needs and preferences, and what factors influence their decisions. Questions should be brief, have easily understandable responses, and be straightforward to answer. Multiple-choice questions or answers in terms of rating scales are much easier to answer than open-ended essay questions, even though the latter may provide more information. Low return rate can be a problem with questionnaires. Incentives can help. For example, those who respond can be promised a complete set of the results. In one effort, an excellent response rate was obtained when a few randomly selected respondents were given tickets to Disney World. Results from questionnaires can sometimes be frustrating. Not infrequently, analysis of the results leads to new questions that one wishes had been on the original questionnaire. These new questions can, however, provide the basis for a follow-up agenda.
5.2.4.
Interviews
Talking with stakeholders directly is a rich source of information. This can be
accomplished via telephone or even e-mail, but face to face is much better. The
use of two interviewers can be invaluable in enabling one person to maintain eye
contact and the other to take notes. The use of two interviewers also later pro
vides two interpretations of what was said.
TABLE 2  Methods and Tools for the Naturalist Phase

Magazines and newspapers
  Purpose: Determine customers' and users' interests via articles and advertisements.
  Advantages: Use is very easy and inexpensive.
  Disadvantages: Basis and representativeness of information may not be clear.

Databases
  Purpose: Access demographic, product, and sales information.
  Advantages: Coverage is both broad and quantitative.
  Disadvantages: Available data may only roughly match information needs.

Questionnaires
  Purpose: Query large number of people regarding habits, needs, and preferences.
  Advantages: Large population can be inexpensively queried.
  Disadvantages: Low return rates and shallow nature of responses.

Interviews
  Purpose: In-depth querying of small number of people regarding activities, organization, and environment.
  Advantages: Face-to-face contact allows in-depth and candid interchange.
  Disadvantages: Difficulty of gaining access, as well as time required to schedule and conduct.

Experts
  Purpose: Access domain, technology, and/or methodological expertise.
  Advantages: Quickly able to come up to speed on topics.
  Disadvantages: Cost of use and possible inhibition on creating in-house expertise.
TABLE 3  Methods and Tools for the Marketing Phase

Questionnaires
  Purpose: Query large number of people regarding preferences for products' functions.
  Disadvantages: Low return rates and shallow nature of responses.

Interviews
  Purpose: In-depth querying of small number of people regarding reactions to and likely use of products' functions.
  Disadvantages: Difficulty of gaining access, as well as time required to schedule and conduct.

Scenarios
  Purpose: Provide feeling for using product in terms of how functions would likely be used.

Mock-ups
  Purpose: Provide visual look and feel of product.

Prototypes
  Purpose: Provide hands-on experience with the product or system.
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
Usually, interviewees thoroughly enjoy talking about their jobs and what types of products and systems would be useful. Often one is surprised by the degree of candor people exhibit. Consequently, interviewees usually do not like their comments tape-recorded. It is helpful if interviewees have first filled out questionnaires, which can provide structure for the interview as they explain and discuss their answers. Questionnaires also ensure that interviewees will have thought about the issues of concern prior to the interview. In the absence of a prior questionnaire, the interview should be carefully structured to avoid unproductive tangents. This structure should be explained to interviewees prior to beginning the interview.
5.2.5.
Experts
People with specialized expertise in the domain of interest, the technology, and/or the market niche can be quite valuable. People who were formerly stakeholders within the population of interest tend to be particularly useful. These people can be accessed via e-mail or informal telephone calls (which are surprisingly successful), gathered together in invited workshops, and/or hired as consultants. While experts' knowledge can be essential, it is very important that the limitations of experts be understood. Despite the demeanor of many experts, very few experts know everything! Listen and filter carefully. Further, it is very unlikely that one expert can cover a wide range of needs. Consider multiple experts, not only to obtain a good average opinion but also to cover multiple domains of knowledge.
5.3.
Summary
The success of all of the above methods and tools depends on one particular ability of designers: the ability to listen. During the naturalist phase, the goal is understanding stakeholders rather than convincing them of the merits of particular ideas or the cleverness of the designers. Designers will get plenty of time to talk and expound in later phases of the design process. At this point, however, success depends on listening.
6.
MARKETING PHASE
The purpose of the marketing phase is introducing product concepts to potential customers, users, and other stakeholders. In addition, the purpose of this phase includes planning for measurements of viability, acceptability, and validity. Further, initial measurements should be made to test the plans, as opposed to the product, to uncover any problems before proceeding. It is important to keep in mind that the product and system concepts developed in this phase are primarily for the purpose of addressing viability, acceptability, and validity. Beyond that which is sufficient to serve this purpose, minimal engineering effort should be invested in these concepts. Beyond preserving resources, this minimalist approach avoids, or at least lessens, ego investments in concepts prior to knowing whether or not the concepts will be perceived to be viable, acceptable, and valid. These types of problems can also be avoided by pursuing more than one product concept. Potential stakeholders can be asked to react to these multiple concepts in terms of whether or not each product concept is perceived as solving an important problem, solving it in an acceptable way, and solving it at a reasonable cost. Each person queried can react to all concepts, or the population of potential stakeholders can be partitioned into multiple groups, with each group reacting to only one concept. The marketing phase results in an assessment of the relative merits of the multiple product concepts that have emerged up to this point. Also derived is a preview of any particular difficulties that are likely to emerge later. Concepts can be modified, both technically and in terms of presentation and packaging, to decrease the likelihood of these problems.
6.1.
Methods and Tools for Measurement
How does one measure the perceptions of stakeholders relative to the viability, acceptability, and validity of alternative product and system concepts? Table 3 lists the appropriate methods and tools for answering this question, as well as their advantages and disadvantages.
6.1.1.
Questionnaires
This method can be used to obtain the reactions of a large number of stakeholders to alternative functions and features of a product or system concept. Typically, people are asked to rate the desirability and perceived feasibility of functions and features using, for example, scales of 1 to 10. Alternatively, people can be asked to rank order functions and features. As noted when questionnaires were discussed earlier, low return rate can be a problem. Further, one typically cannot have respondents clarify their answers, unless telephone or in-person follow-ups are pursued. This tends to be quite difficult when the sample population is large.
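The rating data such questionnaires produce can be summarized quite simply. The sketch below is illustrative only (the feature names and scores are hypothetical, not from the chapter): it averages 1-to-10 desirability ratings per feature and derives the rank order that the text mentions as an alternative elicitation.

```python
# Illustrative only: aggregate 1-10 desirability ratings per product feature
# and derive a rank order. Feature names and scores are hypothetical.

def summarize_ratings(responses):
    """responses: list of dicts mapping feature -> rating (1-10).
    Returns ({feature: mean rating}, features ranked best-first)."""
    totals = {}
    counts = {}
    for response in responses:
        for feature, rating in response.items():
            totals[feature] = totals.get(feature, 0) + rating
            counts[feature] = counts.get(feature, 0) + 1
    means = {f: totals[f] / counts[f] for f in totals}
    ranking = sorted(means, key=means.get, reverse=True)
    return means, ranking

responses = [
    {"low cost": 8, "leading-edge technology": 6, "good service": 9},
    {"low cost": 7, "leading-edge technology": 9, "good service": 8},
    {"low cost": 9, "leading-edge technology": 5, "good service": 9},
]
means, ranking = summarize_ratings(responses)
print(ranking[0])  # the highest-rated feature
```

A real analysis would also examine the spread of ratings, since wide disagreement among respondents is itself informative.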
6.1.3.
Scenarios
At some point, one has to move beyond the list of words and phrases that describe the functions and features envisioned for the product or system. An interesting way to move in this direction is by using stories or scenarios that embody the functionality of interest and depict how these functions might be utilized. These stories and scenarios can be accompanied by a questionnaire within which respondents are asked to rate the realism of the depiction. Further, they can be asked to consider explicitly, and perhaps rate, the validity, acceptability, and viability of the product functionality illustrated. It is not necessary, however, to explicitly use the words validity, acceptability, and viability in the questionnaire. Instead, words should be chosen that are appropriate for the domain being studied; for example, viability may be an issue of cost in some domains and not in others. It is very useful to follow up these questionnaires with interviews, or at least e-mail queries, to clarify respondents' comments and ratings. Often the explanations and clarifications are more interesting and valuable than the ratings.
6.1.4.
Mock-ups
Mock-ups are particularly useful when the form and appearance of a product or system are central to stakeholders' perceptions. For products such as automobiles and furniture, form and appearance are obviously central. However, mock-ups can also be useful for products and systems where appearance does not seem to be crucial. For example, computer-based systems tend to look quite similar. The only degree of freedom is what is on the display. One can exploit this degree of freedom by producing mock-ups of displays using photographs or even viewgraphs for use with an overhead projector. One word of caution, however: even such low-budget presentations can produce lasting impressions. One should make sure that the impression created is one that one wants to last. Otherwise, as noted earlier, one may not get an opportunity to make a second impression.
6.1.5.
Prototypes
Prototypes are a very popular approach and, depending on the level of functionality provided, can give stakeholders hands-on experience with the product or system. For computer-based products, rapid prototyping methods and tools have become quite popular because they enable the creation of a functioning prototype in a matter of hours. Thus, prototyping has two important advantages: prototypes can be created rapidly, and they enable hands-on interaction. With these advantages, however, come two important disadvantages. One disadvantage is the tendency to produce ad hoc prototypes, typically with the motivation of having something to show stakeholders. It is very important that the purpose of the prototype be kept in mind. It is a device with which to obtain initial measurements of validity, acceptability, and viability. Thus, one should make sure that the functions and features depicted are those for which these measurements are needed. One should not, therefore, put something on a display simply because it is intuitively appealing. This can be a difficult impulse to avoid.
The second disadvantage is the tendency to become attached to one's prototypes. At first, a prototype is merely a device for measurement, to be discarded after the appropriate measurements are made. However, once the prototype is operational, there is a tendency for people, including the creators of the prototype, to begin to think that the prototype is actually very close to what the final product or system should be like. In such situations, it is common to hear someone say, "Maybe with just a few small changes here and there . . ." Prototypes can be very important. However, one must keep their purpose in mind and avoid rabid prototyping! Also, care must be taken to avoid premature ego investments in prototypes. The framework for design presented in this chapter can provide the means for avoiding these pitfalls.
6.2.
Summary
During the naturalist phase, the goal was to listen. In the marketing phase, one can move beyond just listening. Various methods and tools can be used to test hypotheses that emerged from the naturalist phase and to obtain potential stakeholders' reactions to initial product and system concepts. Beyond presenting hypotheses and concepts, one also obtains initial measurements of validity, acceptability, and viability. These measurements are in terms of quantitative ratings and rankings of functions and features, as well as more free-flowing comments and dialogue.
7.
ENGINEERING PHASE
The purpose of the engineering phase is developing a final design of the product or system. Much of the effort in this phase involves using various design methods and tools in the process of evolving a conceptual design into a final design. In addition to synthesis of a final design, planning and execution of measurements associated with evaluation, demonstration, verification, and testing are pursued.
7.1.
Four-Step Process
In this section, a four-step process for directing the activities of the engineering phase and documenting the results of these activities is discussed. The essence of this process is a structured approach to producing a series of design documents. These documents need not be formal documents. They might, for example, only be a set of presentation materials. Beyond the value of this approach for creating a human-centered design, documentation produced in this manner can be particularly valuable for tracing design decisions back to the requirements and objectives that motivated the decisions. For example, suggested design changes are much easier to evaluate and integrate into an existing design when one can efficiently determine why the existing design is as it is. It is important to note that the results of the naturalist and marketing phases should provide a strong head start on this documentation process. In particular, much of the objectives document can be based on the results of these phases. Further, and equally important, the naturalist and marketing phases will have identified the stakeholders in the design effort and are likely to have initiated relationships with many of them.
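The audit-trail idea behind this documentation can be pictured as a chain of linked records: each design decision points back to the requirement, and in turn the objective, that motivated it. The sketch below is a hypothetical illustration of that linkage, not the chapter's notation; all class and field names are invented.

```python
# Hypothetical sketch of a design audit trail: each decision records the
# requirement and objective that motivated it, so "why is the design this
# way?" can be answered by walking the links back upstream.

from dataclasses import dataclass

@dataclass
class Objective:
    text: str             # imperative sentence beginning with a verb

@dataclass
class Requirement:
    text: str
    objective: Objective  # the objective this requirement elaborates

@dataclass
class Decision:
    text: str
    requirement: Requirement  # why the existing design is as it is

def trace(decision):
    """Return the chain decision -> requirement -> objective."""
    return [decision.text,
            decision.requirement.text,
            decision.requirement.objective.text]

obj = Objective("Display current system status to the operator")
req = Requirement("Status shall refresh at least once per second", obj)
dec = Decision("Use an event-driven status panel", req)
print(trace(dec))
```

With such links in place, a suggested change to the status panel can be evaluated against the objective it serves rather than against the artifact alone.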
7.2.
Objectives Document
The first step in the process is developing the objectives document. This document contains three attributes of the product or system to be designed: goals, functions, and objectives. Goals are characteristics of the product or system that designers, users, and customers would like the product or system to have. Goals are often philosophical choices, frequently very qualitative in nature. There are usually multiple ways of achieving goals. Goals are particularly useful for providing guidance for later choices. Functions define what the product or system should do, but not how it should be done. Consequently, there are usually multiple ways to provide each function. The definition of functions subsequently leads to analysis of objectives. Objectives define the activities that must be accomplished by the product or system in order to provide functions. Each function has at least one, and often 5 to 10, objectives associated with it. Objectives are typically phrased as imperative sentences beginning with a verb. There are two purposes for writing a formal document listing goals, functions, and objectives. First, as noted earlier, written documents provide an audit trail from initial analyses to the as-built product or system. The objectives document provides the foundation for all subsequent documents in the audit trail for the engineering phase. The second purpose of the objectives document is that it provides the framework, in fact the outline, for the requirements document. All stakeholders should be involved in the development of the objectives document. This includes at least one representative from each type of stakeholder group. This is important because this document defines what the eventual product or system will and will not do. All subsequent development
… engineering activities. Human-centered design is primarily concerned with ensuring that the plethora of engineering methods and tools discussed in this Handbook are focused on creating viable, acceptable, and valid design solutions.
8.
SALES AND SERVICE PHASE
Initiation of the sales and service phase signals the accomplishment of several important objectives. The product or system will have been successfully tested, verified, demonstrated, and evaluated. In addition, the issues of viability, acceptability, and validity will have been framed, measurements planned, and initial measurements executed. These initial measurements, beyond the framing and planning, will have exerted a strong influence on the nature of the product or system.
8.1.
Sales and Service Issues
In this phase, one is in a position to gain closure on viability, acceptability, and validity. One can make the measurements necessary to determine whether the product or system really solves the problem that motivated the design effort, solves it in an acceptable way, and provides benefits that are greater than the costs of acquisition and use. This is accomplished using the measurement plan that was framed in the naturalist phase, developed in the marketing phase, and refined in the engineering phase. These measurements should be performed even if the product or system is presold, for example, when a design effort is the consequence of a winning proposal. In this case, even though the purchase is assured, one should pursue closure on viability, acceptability, and validity in order to gain future projects. There are several other activities in this phase beyond measurement. One should ensure that the implementation conditions for the product or system are consistent with the assumed conditions upon which the design is based. This is also the point at which the later steps of stakeholder acceptance plans are executed, typically with a broader set of people than those who participated in the early steps of the plan. This phase also often involves technology-transition considerations in general. The sales and service phase is also where problems are identified and remedied. To the greatest extent possible, designers should work with stakeholders to understand the nature of problems and alternative solutions. Some problems may provide new opportunities rather than indicating shortcomings of the current product or system. It is important to recognize when problems go beyond the scope of the original design effort. The emphasis then becomes one of identifying mechanisms for defining and initiating new design efforts to address these problems. The sales and service phase also provides an excellent means for maintaining relationships. One can identify changes among stakeholders that occur because of promotions, retirements, resignations, and reorganizations. Further, one can lay the groundwork for, and make initial progress on, the naturalist phase, and perhaps the marketing phase, for the next project, product, or system.
8.2.
Methods and Tools for Measurement
How does one make the final assessments of viability, acceptability, and validity? Further, how does one recognize new opportunities? Unstructured direct observation can provide important information. However, more formal methods are likely to yield more definitive results and insights. Table 4 lists the methods and tools appropriate for answering these types of questions.
8.2.1.
Sales Reports
Sales are an excellent measure of success and a good indicator of high viability, acceptability, and validity. However, sales reports are a poor way of discovering a major design inadequacy. Further, when a major problem is detected in this manner, it is quite likely that one will not know what the problem is or why it occurred.
8.2.2.
Service Reports
Service reports can be designed, and service personnel trained, to provide much more than simply a record of service activities. Additional information of interest concerns the specific nature of problems, their likely causes, and how stakeholders perceive and react to the problems. Stakeholders' suggestions for how to avoid or solve the problems can also be invaluable. Individuals' names, addresses, and telephone numbers can also be recorded so that they can be contacted subsequently.
8.2.3.
Questionnaires
Questionnaires can be quite useful for discovering problems that are not sufficient to prompt service calls. They can also be useful for uncovering problems with the service itself. If a record is maintained of all stakeholders, this population can regularly be sampled and queried regarding problems, as well as ideas for solutions, product upgrades, and so on. As noted before, however, a primary disadvantage of questionnaires is the typically low return rate.
8.2.4.
Interviews
Interviews can be a rich source of information. Stakeholders can be queried in d
epth regarding their experiences with the product or system, what they would lik
e to see changed, and new products and systems they would like. This can also be
an opportunity to learn how their organizations make purchasing decisions, in t
erms of both decision criteria and budget cycles. While sales representatives an
d service personnel can potentially perform interviews, there is great value in
having designers venture out to the sites where their products and systems are u
sed. Such sorties should have clear measurement goals, questions to be answered,
an interview protocol, and so on, much in the way that is described in earlier
sections. If necessary, interviews can be conducted via telephone or e-mail, wit
h only selected face-to-face interviews to probe more deeply.
8.3.
Summary
The sales and service phase brings the measurement process full circle. An impor
tant aspect of this phase is using the above tools and methods to initiate the n
ext iteration of naturalist and marketing phases. To this end, as was emphasized
earlier, a primary prerequisite at this point is the ability to listen.
9.
CONCLUSIONS
This chapter has presented a framework for human-centered design. Use of this framework helps ensure a successful product or system in terms of viability, acceptability, validity, and so on. In this way, human-centered design provides the basis for translating technology opportunities into market innovations.
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
[Figure 1  Comparative Cost of an Engineering Change in Different Stages in the Product Cycle. (Source: Shina 1991) On a logarithmic scale, the relative cost of a change rises roughly tenfold at each stage: design (1), design testing (10), process planning (100), pilot production (1,000), and final production (10,000).]
Conventional engineering practice in the past has resulted in separate, and sometimes isolated, activities between design and manufacturing that have proven to be time consuming and costly. A study compared the cost of a design change in three different stages, namely, production, manufacturing engineering, and design. The cost of a change in the production stage may be …

[Figure 2  Life-Cycle Phases.]
… assembly), the subassembly, and parts. This design hierarchy is shown in Figure 4 (Liu and Trappey 1989). The properties of the design hierarchy are as follows:
Figure 4 The Design Hierarchy with Upstream and Downstream Reasonings.
1. All the subfunctions are required to form a design, and only one among all the physical design alternatives is needed to satisfy a specific functional requirement.
2. Each and every possible physical design, for a system or a subsystem, has a place in the hierarchy. Therefore, the hierarchy serves as a guide for design knowledge acquisition, as a structure for design knowledge storage, and as an indexing system for design knowledge retrieval. This has important application in serving as a software tool for concurrent engineering.
3. Upstream reasoning from the physical design can be conducted by answering the question "What is the design for?" Then the higher-level functional requirement may be reached.
4. Downstream reasoning from a functional requirement can be done by answering the question "How can the functional requirement be satisfied?" Then the physical design alternatives may be generated.
5. Upstream-downstream reasoning forces the designer to analyze the functional requirements and higher purposes. Thus, it can be used for managing the design process and yet, in the meantime, allow for the individual designer's creativity (see Figure 2).
6. The hierarchical system can serve as a structured blackboard for design communication, consultation, and retrieval.

The application of the design hierarchy by one of the authors (Liu) has led to very significant product innovation. When the same method was applied by the students in his classes, general improvement in design creativity was observed. However, the results varied tremendously among individuals. More discussion and elaboration of the proposed functional-physical design hierarchy are given in Liu and Trappey (1989) and Trappey and Liu (1990).
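The upstream and downstream reasoning described above amounts to walking a tree in which each functional requirement lists its alternative physical designs. The sketch below is an illustrative data structure, not Liu and Trappey's software tool, and the example nodes are invented: upstream() answers "What is the design for?" and downstream() answers "How can the functional requirement be satisfied?"

```python
# Illustrative functional-physical design hierarchy (example nodes invented).
# Each functional requirement node lists alternative physical designs as
# children; exactly one alternative is chosen to satisfy each requirement.

class Node:
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind          # "function" or "physical"
        self.parent = None
        self.children = []

    def add(self, child):
        child.parent = self
        self.children.append(child)
        return child

def upstream(physical):
    """'What is the design for?' -> the higher-level functional requirement."""
    return physical.parent

def downstream(function):
    """'How can the functional requirement be satisfied?' -> alternatives."""
    return [c for c in function.children if c.kind == "physical"]

# Hypothetical fragment of a hierarchy:
move_part = Node("transport part between stations", "function")
conveyor = move_part.add(Node("belt conveyor", "physical"))
robot = move_part.add(Node("pick-and-place robot", "physical"))

print(upstream(conveyor).name)
print([d.name for d in downstream(move_part)])
```

Because every physical alternative has a place under some functional requirement, the same structure can index stored design knowledge for retrieval, as property 2 above suggests.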
3.
DRAWINGS
Drawings represent the heart of design for manufacturing because they are the principal means of communication between the functional designer and the producer of the design. They alone control and completely delineate the shape, form, fit, finish, and interchangeability requirements that lead to the most competitive procurement. An engineering drawing, when supplemented by reference specifications and standards, should permit a competent manufacturer to produce the part shown within the dimensional and surface tolerance specifications provided. It is the engineering drawing that should demonstrate the most creative design-for-manufacturing thinking.
4. Design for fewer parts, simpler shapes, the least precision requirements, fewer manufacturing steps, and minimum information requirements. Figure 5 shows the relative cost corresponding to different surface roughnesses.
5. Apply the concepts of design modularization and group technology. Reduce the varieties of sizes and shapes. Experience has shown that the number of hole sizes may be reduced significantly without affecting the function, thus reducing the number of sizes of drills needed.
6. Always consider standard materials, components, and subassemblies.
7. Design appropriate to the expected level of production and to fit the existing production facilities.
8. Select a shape of the raw material close to that of the finished design.
9. Design for easy inspection.
10. Design for part orientation to maximize the value added in each setup.
11. Design for easy assembly and maintainability.
5.
PROCESSES AND MATERIALS FOR PRODUCING THE DESIGN
The selection of the ideal processes and materials with which to produce a given design cannot be an independent activity. It must be a continuing activity that takes place throughout the design life cycle, from initial conception to production. Material selection and process selection need to be considered together; they should not be considered independently. In considering the selection of materials for an application, it is usually possible to rule out entire classes of materials because of cost or their obvious inability to satisfy specific operational requirements. But even so, with the acceleration of material development there are so many options for the functional design engineer that optimum selection is at best difficult. The suggested procedure for organizing data related to material selection is to divide the data into three categories: properties, specifications, and data for ordering. The property category will usually provide the information that suggests the most desirable material. A property profile is recommended, in which all information, such as yield point, modulus of elasticity, resistance to corrosion, and so on, is tabulated. Those materials that qualify because of their properties will stand out. Each material will have its own specifications on the individual grades available and on their properties, applications, and comparative costs. The unique specifications of a material will distinguish it from all competing materials and will serve as the basis for quality control, planning, and inspection. Finally, the data needed when physically placing an order need to be maintained. This includes minimum order size, quantity breakpoints, and sources of supply. In the final selection of a material, the cost of the proposed material needs to be considered; hence the need for close association between material selection and process selection in connection with design. Design evaluation invariably is in terms of a proposed material cost, which may be derived by analyzing the processing steps involved, including setup and lead-time costs, along with the preprocessed material cost.
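The property profile described above is, in effect, a filter: materials whose tabulated properties meet every requirement "stand out." A minimal sketch of that filtering step follows; the materials and numeric values are invented for illustration, and a real selection would draw on handbook data and also weigh cost, specifications, and ordering data.

```python
# Minimal sketch of a property-profile filter. Materials and numbers are
# invented for illustration only.

materials = {
    "alloy steel":     {"yield_MPa": 700, "modulus_GPa": 200, "corrosion_ok": False},
    "stainless steel": {"yield_MPa": 520, "modulus_GPa": 193, "corrosion_ok": True},
    "aluminum 6061":   {"yield_MPa": 276, "modulus_GPa": 69,  "corrosion_ok": True},
}

def qualify(profile, min_yield, min_modulus, needs_corrosion_resistance):
    """Return the materials whose property profile meets every requirement."""
    return [name for name, p in profile.items()
            if p["yield_MPa"] >= min_yield
            and p["modulus_GPa"] >= min_modulus
            and (p["corrosion_ok"] or not needs_corrosion_resistance)]

print(qualify(materials, min_yield=400, min_modulus=150,
              needs_corrosion_resistance=True))
```

Relaxing any one requirement widens the qualifying set, which is where comparative cost then becomes the deciding factor.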
6.
6.1.
DESIGN FOR BASIC PROCESSES: METAL
Liquid State
Early in the planning of the functional design, one must decide whether to start with a basic process that uses material in the liquid state, such as a casting, or in the solid state, such as a forging. If the engineer decides a part should be cast, he or she will have to decide simultaneously which casting alloy and process can most nearly meet the required dimensional tolerance, mechanical properties, and production rate at the least cost. Casting has several distinct assets: the ability to fill a complex shape, economy when a number of similar pieces are required, and a wide choice of alloys suitable for use in highly stressed parts, where light weight is important or where corrosion may be a problem. There are inherent problems, too, including internal porosity, dimensional variations caused by shrinkage, and solid or gaseous inclusions stemming from the molding operation. However, most of these problems can be minimized by sound design for manufacturing. Casting processes are basically similar in that the metal being formed is in a liquid or highly viscous state and is poured or injected into a cavity of a desired shape. The following design guidelines will prove helpful in reducing casting defects, improving reliability, and assisting producibility:
TABLE 1 Important Design Parameters Associated with Various Casting Processes
[Table garbled in extraction. It appears to compare sand casting (green and dry/cold/set), shell, permanent mold (preheated mold), die (preheated mold), plaster (preheated mold), investment (preheated mold), and centrifugal casting across design parameters such as weight range (e.g., 100 g to 400 MT for sand casting, less than 1 g to 50 kg for die casting, grams to 200 kg for centrifugal), minimum section thickness (roughly 0.5 to 6 mm depending on process), allowance for machining, draft allowance, minimum lot size, and permissible hole sizes and undercuts.]
DESIGN FOR MANUFACTURING
TABLE 2 Important Design Parameters Associated with Various Forging Processes
[Table garbled in extraction. It appears to compare conventional, open die, closed die, upset, and precision die forging across design parameters such as size or weight (grams to 20 kg for conventional and closed die; 500 g to 5000 kg for open die; 20 to 250 mm bar for upset), allowance for finish machining, thickness tolerance, fillets and corners, surface finish (rms), process reliability, minimum lot size, draft allowance, die wear tolerance, mismatching tolerance, and shrinkage tolerance per kg weight of forging.]
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
In roll forming, strip metal is permanently deformed by stretching it beyond its yield point. A series of rolls progressively changes the shape of the metal to the desired shape. In designing the extent of the bends in the rolls, allowance must be made for springback. In press forming, as in roll forming, metal is stretched beyond its yield point. The original material remains about the same thickness or diameter, although it will be reduced slightly by drawing or ironing. Forming is based upon two principles:
1. Stretching and compressing the material beyond the elastic limit on the outside and inside of a bend.
2. Stretching the material beyond the elastic limit without compressing it.
Spinning is a metal-forming process in which the work is formed over a pattern, usually made of hard wood or metal. As the mold and material are spun, a tool (resting on a steady rest) is forced against the material until the material contacts the mold. Only symmetrical shapes can be spun. The manufacturing engineer associated with this process is concerned primarily with blank development and proper feed pressure. In electroforming, a mandrel having the desired inside geometry of the part is placed in an electroplating bath. After the desired thickness of the part is achieved, the mandrel pattern is removed, leaving the formed piece. Automatic screw machine forming involves the use of bar stock, which is fed and cut to the desired shape. Table 3 provides important design for manufacturing information for these basic processes.
7. DESIGN FOR SECONDARY OPERATIONS
Just as there should be careful analysis in the selection of the ideal basic or primary process, so must there be sound planning in the specification of the secondary processes. The parameters associated with all process planning include the size of the part, the geometric configuration or shape required, the material, the tolerance and surface finish needed, the quantity to be produced, and of course the cost. Just as there are several alternatives in the selection of a basic process, so there are several alternatives in determining how a final configuration can be achieved. With reference to secondary removal operations, several guidelines should be observed in connection with the design of the product in order to help ensure its producibility.
1. Provide flat surfaces for entering of the drill on all holes that need to be drilled.
2. On long rods, design mating members so that male threads can be machined between centers, as opposed to female threads, where it would be difficult to support the work.
3. Always design so that gripping surfaces are provided for holding the work while machining is performed, and ensure that the held piece is sufficiently rigid to withstand machining forces.
4. Avoid double fits in design for mating parts. It is much easier to maintain close tolerance when a single fit is specified.
5. Avoid specifying contours that require special form tools.
6. In metal stamping, avoid feather edges when shearing. Internal edges should be rounded, and corners along the edge of the strip stock should be sharp.
7. In metal stamping of parts that are to be subsequently press formed, straight edges should be specified, if possible, on the flat blanks.
8. In tapped blind holes, the last thread should be at least 1.5 times the thread pitch from the bottom of the hole.
9. Blind-drilled holes should end with a conical geometry to allow the use of standard drills.
10. Design the work so that diameters of external features increase from the exposed face and diameters of internal features decrease.
11. Internal corners should indicate a radius equal to the cutting tool radius.
12. Endeavor to simplify the design so that all secondary operations can be performed on one machine.
13. Design the work so that all secondary operations can be performed while holding the work in a single fixture or jig.
Table 4 provides a comparison of the basic machining operations used in performing the majority of secondary operations.
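Guidelines like these lend themselves to the automated rule checking mentioned later in this chapter. The sketch below encodes two of the numbered guidelines as simple checks; the limits come from the text (guidelines 8 and 11), but the function names and the feature data model are illustrative assumptions.

```python
# Illustrative DFM rule checks for secondary (machining) operations.
# All dimensions are in mm; the data model is an assumption, not the book's.

def check_tapped_blind_hole(hole_depth, thread_depth, pitch):
    """Guideline 8: in a tapped blind hole, the last thread should sit at
    least 1.5 * pitch above the bottom of the hole."""
    return (hole_depth - thread_depth) >= 1.5 * pitch

def check_internal_corner(corner_radius, tool_radius):
    """Guideline 11: internal corners should carry at least the cutting
    tool radius."""
    return corner_radius >= tool_radius

# A 12 mm deep blind hole tapped 9 mm deep with a 1.5 mm pitch leaves
# 3 mm of clearance, which exceeds the required 2.25 mm.
assert check_tapped_blind_hole(12.0, 9.0, 1.5)
print(check_internal_corner(1.5, 2.0))  # False: radius smaller than the cutter
```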
TABLE 3 Important Design Parameters Associated with Various Forming Processes
[Table garbled in extraction. It appears to compare forming processes such as roll forming, press forming, spinning, electroforming, and automatic screw machine forming across parameters including size range (e.g., 0.80 mm diameter by 1.50 mm length up to 200 mm diameter by 900 mm length), minimum thickness, achievable tolerances on diameter, length, flatness, wall thickness, and concentricity, surface finish (rms), process reliability (roughly 90 to 99%), minimum lot size, draft allowance, and whether bosses, undercuts, inserts, and holes are permitted.]
TABLE 4 Comparison of Basic Machining Operations
[Table garbled in extraction. It appears to compare basic machining operations by the relative motion of tool and work (fixed or moving), the cutting tool (single-point tools, multiple-point tools, twin-edge drills, and shaped electrodes for electric discharge machining), the machine used (e.g., lathe-like machines, electric discharge machines), typical geometries produced (flats, slots, and shapes depending on the electrode), achievable tolerances (roughly 0.005 to 0.100 mm), and surface finish (roughly 0.8 to 6.4 rms).]
8. DESIGN FOR BASIC PROCESSES: PLASTICS
There are more than 30 distinct families of plastics, from which evolve thousands of types and formulations that are available to the functional designer. However, in the fabrication of plastics, either thermoplastic or thermosetting, only a limited number of basic processes are available. These processes include compression molding, transfer molding, injection molding, extrusion, casting, cold molding, thermoforming, calendering, rotational molding, and blow molding. The functional designer usually gives little thought to how the part will be made. He or she is usually concerned primarily with the specific gravity, hardness, water absorption, outdoor weathering, coefficient of linear thermal expansion, elongation, flexural modulus, Izod impact, deflection temperature under load, and flexural yield, tensile, shear, and compressive strengths.
8.1. Compression Molding
In compression molding, an appropriate amount of plastic compound (usually in powder form) is introduced into a heated mold, which is subsequently closed under pressure. The molding material, either thermoplastic or thermosetting, is softened by the heat and formed into a continuous mass having the geometric configuration of the mold cavity. If the material is thermoplastic, hardening is accomplished by cooling the mold. If the material is thermosetting, further heating will result in the hardening of the material. Compression molding offers the following desirable features:
1. Thin-walled parts (less than 1.5 mm) are readily molded with this process with little warpage or dimensional deviation.
2. There will be no gate markings, which is of particular importance on small parts.
3. Less shrinkage, and more uniform shrinkage, is characteristic of this molding process.
4. It is especially economical for larger parts (those weighing more than 1 kg).
5. Initial costs are less, since it usually costs less to design and make a compression mold than a transfer or injection mold.
6. Reinforcing fibers are not broken up as they are in closed-mold methods such as transfer and injection molding. Therefore, parts fabricated by compression molding may be both stronger and tougher.
8.2. Transfer Molding
Under transfer molding, the mold is first closed. The plastic material is then conveyed into the mold cavity under pressure from an auxiliary chamber. The molding compound is placed in the hot auxiliary chamber and subsequently forced in a plastic state through an orifice into the mold cavities by pressure. The molded part and the residue (cull) are ejected upon opening the mold after the part has hardened. Under transfer molding, there is no flash to trim; only the runner needs to be removed.
8.3. Injection Molding
In injection molding, the raw material (pellets, grains, etc.) is placed into a hopper above a heated cylinder, called the barrel. The material is metered into the barrel every cycle so as to replenish the system for what has been forced into the mold. Pressure up to 1750 kg/cm2 forces the plastic molding compound through the heating cylinder and into the mold cavities. Although this process is used primarily for the molding of thermoplastic materials, it can also be used for thermosetting polymers. When molding thermosets, such as phenolic resins, low barrel temperatures should be used (65 to 120°C). Thermoplastic barrel temperatures are much higher, usually in the range of 175 to 315°C.
8.4. Extrusion
Like the extrusion of metals, the extrusion of plastics involves the continuous forming of a shape by forcing softened plastic material through a die orifice that has approximately the geometric profile of the cross-section of the work. The extruded form is subsequently hardened by cooling. With the continuous extrusion process, such products as rods, tubes, and shapes of uniform cross-section can be economically produced. Extrusion to obtain a sleeve of the correct proportion almost always precedes the basic process of blow molding.
8.5. Casting
Much like the casting of metals, the casting of plastics involves introducing plastic materials in liquid form into a mold that has been shaped to the contour of the piece to be formed. The material used for making the mold is often flexible, such as rubber latex. Molds may also be made of nonflexible materials such as plaster. Epoxies, phenolics, and polyesters are plastics that are frequently fabricated by the casting process.
will be most helpful. Certainly both thermoforming and blow molding are largely restricted to thermoplastic resins, while transfer molding is restricted to thermosetting resins. Injection molding is used primarily for producing large-volume thermoplastic moldings, and extrusion for large-volume thermoplastic continuous shapes. Geometry or shape also has a major impact on process selection. Unless a part has a continuous cross-section, it would not be extruded; unless it were relatively thin walled and bottle shaped, it would not be blow molded. Again, calendering is restricted to flat sheet or strip designs, and the use of inserts is restricted to the molding processes. The quantity to be produced also has a major role in the selection decision. Most designs can be made by simple compression molding, yet this method would not be economical if the quantity were large and the material were suitable for injection molding. The following design for manufacturing points apply to the processing of plastics:
1. Holes less than 1.5 mm in diameter should not be molded but should be drilled after molding.
2. The depth of blind holes should be limited to twice their diameter.
3. Holes should be located perpendicular to the parting line to permit easy material removal from the mold.
4. Undercuts should be avoided in molded parts since they require either a split mold or a removable core section.
5. The section thickness between any two holes should be greater than 3 mm.
6. Boss heights should not be more than twice their diameter.
7. Bosses should be designed with at least a 5° taper on each side for easy withdrawal from the mold.
8. Bosses should be designed with radii at both the top and the base.
9. Ribs should be designed with at least a 2.5° taper on each side.
TABLE 5 Basic Processes Used to Fabricate Plastics and Their Principal Parameters
[Table garbled in extraction. For calendering, extrusion, compression molding, transfer molding, injection molding, casting, and cold molding, it appears to list the shape produced (continuous sheet or film; continuous forms such as rods, tubes, filaments, and simple shapes; simple outlines and plain cross-sections; or complex geometries), the machine and mold or tool material (multiple-roll calenders; extrusion presses with hardened steel dies; presses with hardened tool steel molds; metal or epoxy molds; molds of wood, plaster, or steel), the applicable material (thermoplastic or thermosetting), whether ribs, draft, and inserts are permitted, typical tolerances (on the order of 0.01 to 0.50 mm depending on material), minimum wall thickness, and minimum quantity requirements (low where tooling is inexpensive, high for injection molding).]
10. Ribs should be designed with radii at both the top and the base.
11. Ribs should be designed at a height of 1.5 times the wall thickness. The rib width at the base should be half the wall thickness.
12. Outside edges at the parting line should be designed without a radius. Fillets should be specified at the base of ribs and bosses and on corners and should be not less than 0.8 mm.
13. Inserts should be at right angles to the parting line and of a design that allows both ends to be supported in the mold.
14. A draft or taper of 1 to 2° should be specified on the vertical surfaces or walls parallel with the direction of mold pressure.
15. Cavity numbers should be engraved in the mold. The letters should be 2.4 mm high and 0.18 mm deep.
16. Threading below 8 mm diameter should be cut after molding.
Table 5 identifies the major parameters associated with the basic processes used to fabricate thermoplastic and thermosetting resins.
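Several of the molded-part guidelines above are dimensional limits and can be checked mechanically. The sketch below encodes guidelines 1, 2, and 6 against a hypothetical feature record; the limits follow the text, but the record layout and function name are illustrative assumptions.

```python
# Illustrative rule checks for molded plastic parts (dimensions in mm).
# The part record below is an invented example, not data from the handbook.

def molding_rule_violations(part):
    """Return a list of guideline violations for a molded part."""
    violations = []
    for hole in part.get("holes", []):
        if hole["diameter"] < 1.5:
            violations.append("hole under 1.5 mm: drill after molding")
        if hole.get("blind") and hole["depth"] > 2 * hole["diameter"]:
            violations.append("blind hole deeper than twice its diameter")
    for boss in part.get("bosses", []):
        if boss["height"] > 2 * boss["diameter"]:
            violations.append("boss taller than twice its diameter")
    return violations

part = {
    "holes": [{"diameter": 1.0, "depth": 3.0, "blind": True}],
    "bosses": [{"diameter": 4.0, "height": 6.0}],
}
print(molding_rule_violations(part))  # the 1.0 mm blind hole breaks two rules
```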
9. DESIGN FOR ASSEMBLY
The goal of DFA is to ease the assembly of the product. Boothroyd et al. (1994) propose a method for DFA that involves two principal steps:
1. Designing with as few parts as possible. This is accomplished by analyzing parts pairwise to determine whether the two parts can be created as a single piece rather than as an assembly.
2. Estimating the costs of handling and assembling each part using the appropriate assembly process to generate cost figures for analyzing the cost savings through DFA.
In addition to the assembly cost reductions through DFA, there are reductions in part costs that are often more significant. Other benefits of DFA include improved reliability and reductions in inventory and production control costs. Consequently, DFA should be applied regardless of the assembly cost and product volume.
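The two steps above can be sketched numerically: count the theoretical minimum number of parts from the pairwise can-combine judgments, then compare an ideal assembly time per theoretical part against the estimated handling and insertion times. The ideal time of about 3 s per part is commonly associated with Boothroyd's design-efficiency index, but the part data and all times below are invented for illustration.

```python
# Illustrative DFA sketch: theoretical minimum part count and a design
# efficiency from estimated handling and insertion times (seconds).
# The part list and every figure in it are assumptions, not Boothroyd data.

parts = [
    # (name, must_be_separate_part, handling_s, insertion_s)
    ("base",    True,  1.5, 1.5),
    ("cover",   True,  1.8, 2.5),
    ("screw_1", False, 1.8, 6.0),
    ("screw_2", False, 1.8, 6.0),
]

# Step 1: parts that cannot be combined with a neighbor set the minimum count.
n_min = sum(1 for _, separate, _, _ in parts if separate)

# Step 2: total estimated assembly time versus the ideal (about 3 s per
# theoretical part) gives a design efficiency to compare redesigns against.
total_time = sum(handling + insertion for _, _, handling, insertion in parts)
efficiency = (3.0 * n_min) / total_time

print(n_min, round(total_time, 1), round(efficiency, 2))
```

Eliminating the two screws (e.g., by a snap fit) would cut the assembly time and raise the efficiency toward 1, which is the kind of cost-saving comparison step 2 is meant to support.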
10. COMPUTER SOFTWARE TOOLS: OBJECT-ORIENTED PROGRAMMING AND KNOWLEDGE-BASED SYSTEMS
Modern CAD/CAM systems and computer-aided process planning systems for machining are well known and are very important for integrating design and manufacturing. However, more work is needed to develop them into tools for helping design for manufacturability. We need a software system that can be easily modularized, expanded, altered in its structure and contents, and integrated partially or fully. The key technology is a recently developed style and structure of programming called object-oriented programming (OOP). Object-oriented programming supports four unique object functions or properties:
1. Abstraction: Abstraction is done by the creation of a class protocol description that defines the properties of any object that is an instance of that class.
2. Encapsulation: An object encapsulates all the properties (data and messages) of the specific instance of the class.
3. Inheritance: Some classes are subordinate to others and are called subclasses. Subclasses are considered to be special cases of the class under which they are grouped in the hierarchy. The variables and methods defined in the higher-level classes are automatically inherited by the lower-level classes.
4. Polymorphism: Polymorphism allows us to send the same message to different objects at different levels of the class hierarchy. Each object responds in a way that is inherited or redefined with respect to the object's characteristics.
With these properties, integrated and expandable software for supporting design, including design for manufacturability, can be developed. An example is shown in Trappey and Liu (1990), who developed a system shell for design using the object-oriented programming language SMALLTALK-80 (Goldberg 1984). Another key software technology for design for manufacturability, such as automated rule checking, is knowledge-based systems, or expert systems. The general methodology for building these systems roughly consists of five steps: identification, conceptualization, formalization, implementation, and testing (Hayes-Roth et al. 1983). An example of this approach for fixture design for machining is shown in Ferreira et al. (1985).
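The four OOP properties listed above can be illustrated compactly. The sketch below uses Python rather than the SMALLTALK-80 of Trappey and Liu's shell, and the part classes and process plans are invented for illustration.

```python
# Illustrative sketch of the four OOP properties in a DFM-flavored hierarchy.

class Part:                      # abstraction: a class protocol for any part
    def __init__(self, name):
        self.name = name         # encapsulation: state lives in the object

    def process_plan(self):      # a message every Part instance understands
        return f"{self.name}: generic plan"

class CastPart(Part):            # inheritance: a subclass of Part
    def process_plan(self):      # polymorphism: same message, redefined
        return f"{self.name}: mold, pour, shake out, trim"

class ForgedPart(Part):          # inherits __init__ from Part unchanged
    def process_plan(self):
        return f"{self.name}: heat, preblock, finish forge"

# The same message sent to objects at different levels of the hierarchy:
for part in (Part("blank"), CastPart("housing"), ForgedPart("crank")):
    print(part.process_plan())
```

Because new processes drop in as subclasses without touching existing code, a shell built this way stays modular and expandable, which is the point the section makes.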
1. Select a competent, strong project manager.
2. Quickly develop constraints for the product design and process selection at various levels by the effort of the entire team; that is, list the impossible and infeasible first.
3. Develop the product profile and specification through the team effort, remembering the purpose of the design, and list three other alternatives for every design, be it a subsystem or a component.
4. Aim high, recognizing that quality and cost need not be compromised when development time is compressed.
5. Give enough authorization to the team manager so that quick decisions can be made.
REFERENCES
Boothroyd, G., Dewhurst, P., and Knight, W. (1994), Product Design for Manufacture and Assembly, Marcel Dekker, New York.
Bralla, J. G. (1986), Handbook of Product Design for Manufacturing, McGraw-Hill, New York.
Brazier, D., and Leonard, M. (1990), "Concurrent Engineering: Participating in Better Designs," Mechanical Engineering, January.
Ferreira, P. M., Kochar, B., Liu, C. R., and Chandru, V. (1985), "AIFIX: An Expert System Approach to Fixture Design," in Computer Aided/Intelligent Process Planning, PED, Vol. 19, C. R. Liu, T. C. Chang, and R. Komanudari, Eds., ASME, New York.
Goldberg, A. (1984), SMALLTALK-80: The Interactive Programming Environment, Addison-Wesley, Reading, MA.
Hayes-Roth, F., Waterman, D., and Lenat, D. (1983), Building Expert Systems, Addison-Wesley, Reading, MA.
Liu, C. R., and Trappey, J. C. (1989), "A Structured Design Methodology and MetaDesigner: A System Shell for Computer Aided Creative Design," in Proceedings of the ASME Design Automation Conference (Montreal, September).
Mukherjee, A., and Liu, C. R. (1995), "Representation of Function-Form Relationship for Conceptual Design of Stamped Metal Parts," Research in Engineering Design, Vol. 7, pp. 253-269.
Mukherjee, A., and Liu, C. R. (1997), "Conceptual Design, Manufacturability Evaluation and Preliminary Process Planning Using Function-Form Relationships in Stamped Metal Parts," Robotics and Computer-Integrated Manufacturing, Vol. 13, No. 3, pp. 253-270.
Nevins, J. L., and Whitney, D. E., Eds. (1989), Concurrent Design of Products and Processes, McGraw-Hill, New York.
Shina, S. G. (1991), Concurrent Engineering and Design for Manufacture of Electronic Products, Van Nostrand Reinhold, New York.
Stillwell, H. R. (1989), Electronic Product Design for Automated Manufacturing, Marcel Dekker, New York.
Trappey, J. C., and Liu, C. R. (1990), "An Integrated System Shell Concept for Computer Aided Design and Planning," Technical Report TR-ERC-90-2, Purdue University, NSF Engineering Research Center, June.
ADDITIONAL READING
Doyle, L. E., Manufacturing Processes and Materials for Engineers, Prentice Hall, Englewood Cliffs, NJ, 1969.
Edosomwan, J. A., and Ballakur, A., Productivity and Quality Improvement in Electronic Assembly, McGraw-Hill, New York, 1989.
Greenwood, D. C., Product Engineering Design Manual, McGraw-Hill, New York, 1959.
Greenwood, D. C., Engineering Data for Product Design, McGraw-Hill, New York, 1961.
Greenwood, D. C., Mechanical Details for Product Design, McGraw-Hill, New York, 1964.
Legrand, R., Manufacturing Engineers' Manual, McGraw-Hill, New York, 1971.
Niebel, B. W., and Baldwin, E. N., Designing for Production, Irwin, Homewood, IL, 1963.
Niebel, B. W., and Draper, A., Product Design and Process Engineering, McGraw-Hill, New York, 1974.
Niebel, B. W., Draper, A. B., and Wysk, R. A., Modern Manufacturing Process Engineering, McGraw-Hill, New York, 1989.
Priest, J. W., Engineering Design for Producibility and Reliability, Marcel Dekker, New York, 1988.
tion project. This causes project time and cost to be more difficult to estimate during project planning; accordingly, progress during project execution tends to be more difficult to measure. Regardless, a project team is formed to produce a definite set of deliverables within a certain time frame for a specified cost (budget). The project team is led by a project manager, who is responsible for ensuring that the objectives of the project are achieved on time and within budget.
2. THE PROJECT MANAGER
A project manager typically is someone who has a wide breadth and depth of knowl
edge and experience in a number of areas. He or she also is someone who is skill
ed in working with teams, leverages relationships judiciously, and is knowledgea
ble in the use of tools and techniques that aid in accomplishing his or her role
.
2.1. The Project Manager's Role
The project manager facilitates the team-building process and collaborates with the project team to create and execute the project plan. The project manager also acts as the liaison between the team and the client. He or she continually monitors the progress of the project and reports project status periodically to both the client and other interested stakeholders. The project manager works to ensure that the client is satisfied and that the project is completed within the parameters of scope, schedule, cost, and quality.
2.2. Project Manager Characteristics
A project manager's characteristics differ from those of the typical functional manager. For example, functional managers usually:
- Are specialists
- Function as technical supervisors
- Are skilled at analysis and analytical approaches
- Maintain line control over team members
Project managers, on the other hand, typically:
- Are generalists with wide experience and knowledge
- Coordinate teams of specialists from a variety of technical areas
- Have technical expertise in one or two areas
- Are skilled at synthesis and the systems approach
- Do not have line control over the project team members
2.3. Identifying Project Manager Candidates
Project managers can be persons working for the professional enterprise who are reassigned to the project manager position for the life of a given project or for a specified period of time. Project manager candidates may also come from outside the organization. For example, they may be contracted by the organization to serve as the project manager for a specific project. Project managers may come from almost any educational background, although the industrial engineering curriculum is probably the most relevant. In any case, a successful project manager candidate should be able to demonstrate significant experience in the position. Additional qualifications might include project manager certification, which is awarded to qualified persons by organizations such as the Project Management Institute.
3. OVERVIEW OF THE PROJECT MANAGEMENT PROCESS
3.1. The Phases of Project Management
The project management process includes four phases:
Phase I: project definition
Phase II: project planning
Phase III: project monitoring and control
Phase IV: project close
Each phase has its purpose, and the phases are linked in order. In fact, Phases I through III tend to be iterative. For example, some level of project planning is required to develop reasonably accurate,
e team members is dependent upon the nature of the project. However, at a minimum, they should have a sufficient top-level understanding of the technical issues to be able to define and plan the project effectively as well as to manage the resources that may be recruited to work on the project.
4.3. Project Definition Components
A project definition should contain at least the following components:
- Project objectives (outcomes)
- Scope
- Deliverables (outputs)
- Approach
- Resource and infrastructure requirements
- High-level time and cost estimates
- Project risks
Each of these components is described below. Examples are provided based on a typical project to identify, evaluate, and select a manufacturing business planning and control software system. Software evaluation and selection assistance is a fairly common professional service provided by consulting
firms to clients (in this example, ABC Manufacturing, Inc.). This example project will be carried throughout this chapter to illustrate the various aspects of professional services project management.
4.3.1. Project Objectives (Outcomes)
Objectives are the destination, target, or aim of the project. They are needed to clarify the client's and other stakeholders' expectations. They also help to:
- Establish a common vision
- Guide the team during project plan development and execution
- Keep the team focused as the project progresses
- Provide a basis for communications during the project
- Establish a means for assessing success at the completion of the project
Good objectives state what will be achieved and/or the results sought, not how the team will get there. They are specific, unambiguous, and measurable, containing a time frame for the intended achievements. Examples of outcome-oriented objectives include: select a business planning and control software system in six months; implement the selected business planning and control software system in 18 months; increase product throughput by 50% in two years; decrease inventory investment by 75% by year end.
4.3.2. Scope
The statement of scope sets the boundaries of the project, in that it defines the confines, the reach, and/or the extent of the areas to be covered. It clarifies what will be included in the project and, if necessary, states specifically what will not be included. Scope may be defined in terms such as geographical coverage, range of deliverables, quality level, and time period. The statement of scope must be clear, concise, and complete, as it will serve as the basis for determining if and when out-of-scope work is being conducted during project execution. In the professional services field, performance of unauthorized, out-of-scope work on a project usually will result in a budget overrun, unrecovered fees and expenses from the client, and unsatisfactory project financial results. Potential out-of-scope work should be identified before it is performed and negotiated as additional work, along with its attendant cost and schedule requirements. An example of a scope statement is: "Assist ABC Company in selecting an appropriate business planning and control software and hardware system and implementing the selected system. The assistance will include defining business system requirements, evaluating system alternatives, making a selection that will support manufacturing and accounting functions, and facilitating the implementation of the selected system."
4.3.3. Deliverables (Outputs)
A deliverable is anything produced on a project that supports achievement of the project objectives. It is any measurable, tangible, verifiable outcome, result, or item that is produced to complete a task, activity, or project. The term is sometimes used in a more narrow context when it refers to an external deliverable (handed over to a stakeholder or client and subject to approval). Examples of deliverables are: system requirements definition document; request for proposal (RFP) document; systems-evaluation criteria; software and hardware configuration design; system-implementation project plan; and facilitation assistance during the system-implementation process.
4.3.4. Project Approach
The project approach defines the general course of action that will be taken to accomplish the project objectives. For example, the project approach may be defined in such terms as the methodology to be used, the timing/phases of the project, and/or the types of technology and human resources to be applied. The approach section of the project definition explains, in general, how the project will be carried out. An example of an approach statement for developing a system requirements definition document is: "We will conduct interviews with personnel in each functional area to develop and define the system requirements, based on a system requirements profile. We will provide advice in the definition of critical system requirements, such as system performance (timing, volumes, and the like). This phase will culminate with the development of a system requirements definition document."
4.3.5. Resource and Infrastructure Requirements
Resource and infrastructure requirements for professional service projects typically fall into any of three categories: human resources, facilities and equipment, and information technology (including knowledge bases). Human resource requirements, often the major cost of a project, should be defined
iveness of the overall system will be evaluated prior to making a final selection. Additionally, arrangements for holding the source code in escrow will ensure the availability of the source code in the event the software company cannot sustain ongoing viability.
5.
5.1.
PHASE II: PROJECT PLANNING
Project Planning Purpose
The purpose of project planning is to confirm the project scope and objectives; develop the project organization, schedule, and budget; secure the necessary resources; and create clear expectations about the project organization, timing, budget, and resources. The project workplan should clearly identify the deliverables that will be prepared and the tasks that need to be performed in order to prepare them. The project planning team uses the project definition as the starting point for preparing the project workplan. The workplan is typically broken down into phases, activities, and tasks. Deliverables and resources are usually defined at the task level.
5.2.
The Project Planning Team
A key member of the project planning team is the project manager, because he or she will have primary responsibility for executing the project plan. The project planning team may also include one or more members of the project definition team to ensure that the thinking that went into defining the project is reflected in the project plan. If the project definition team is not represented on the project planning team, a draft of the project plan should be reviewed by one or more project definition team members. Other members of the project planning team might include appropriate technical specialists and others who may be team members during project execution.
5.3.
Project Planning Components
There are seven main steps in creating a project workplan:
1. Confirm objectives and scope.
2. Develop work breakdown structure.
3. Develop a detail task list.
4. Assign personnel to tasks.
5. Develop time estimates and a preliminary schedule of tasks.
6. Determine the critical path.
7. Balance the detailed workplan.
Each of these steps is described below.
5.3.1.
Confirm Objectives and Scope
Often there can be a significant time period between the completion of project definition and the initiation of detailed project planning, which may result in divergent views as to the purpose of the project. It is important that there be full agreement regarding the objectives and scope of the project before the workplan is prepared. The project manager should seek confirmation of the objectives and scope based on input from the sponsor and/or the steering committee as well as the project-definition team members. If there are differences, the project manager should rely on the sponsor to settle them.
5.3.2.
Develop Work Breakdown Structure
Developing a work breakdown structure entails expanding the project phases or deliverables into the major activities that need to occur to complete each phase and defining the tasks that need to occur to complete each activity. Steps for developing a work breakdown structure and examples are presented in Table 1. The work breakdown structure can encompass more than the three levels shown in Table 1, depending on the nature and complexity of the work to be done. For example, if the work effort has been done many times previously and/or is routine, it may not require more than three levels of detail (phase/activity/task). Conversely, work that is less familiar or more complex may require additional levels of detail to gain a full understanding of the work that must be done. A work statement (often called a "work package") is then prepared to describe the effort for each task or subtask at the lowest level of the work breakdown structure. Each work statement should be designed to ensure that the related task or subtask:
- Is measurable
- Has tangible results/outputs (deliverables)
- Has identifiable and readily available inputs
- Is a finite, manageable unit of work
- Requires a limited number of resources
- Fits into the natural order of work progression
[Table 1 example: tasks within the "Define system requirements" activity include define and document business system objectives; document performance objectives; define and document anticipated benefits; document functional requirements; conduct interviews; document special considerations and constraints; and assemble requirements documentation.]
A completed work breakdown structure will include the assembled detail tasks and their relationship to respective activities. A work breakdown structure may be displayed as in Figure 1 (in the figure, level 1 corresponds to phase, level 2 to activity, and level 3 to task).
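As a concrete illustration, the three-level hierarchy described above can be held as nested data. This is a minimal sketch in Python; the phase/activity/task names are taken from the chapter's software-selection example, but the data structure itself is an assumption, not something the handbook prescribes.

```python
# Sketch: a three-level work breakdown structure (phase / activity / task)
# as nested dictionaries. Names are drawn from the software-selection
# example in the text; the representation is illustrative.
wbs = {
    "Analysis": {                        # level 1: phase
        "Define system requirements": [  # level 2: activity
            "Define and document business system objectives",  # level 3: tasks
            "Document performance objectives",
            "Define and document anticipated benefits",
        ],
        "Develop evaluation criteria": [
            "Identify alternative systems or supplements",
            "Prioritize requirements according to importance",
        ],
    },
}

def leaf_tasks(wbs):
    """Yield every level-3 task; deliverables and resources attach here."""
    for phase, activities in wbs.items():
        for activity, tasks in activities.items():
            for task in tasks:
                yield phase, activity, task

print(sum(1 for _ in leaf_tasks(wbs)))  # 5 leaf tasks in this fragment
```

Because deliverables and resources are defined at the task level, walking the leaves of such a structure is the natural way to enumerate where work statements are needed.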
5.3.3.
Develop a Task and Deliverables List
The different levels of the work breakdown structure should be documented in a task list that identifies each phase, activity, and task (and subtask, as appropriate). Next, the name of the person to be responsible for each task (the "task owner") and a description of the deliverable(s) associated with each task can be added. An example of a detailed task and deliverables list is shown in Figure 2. When the task and deliverables list is complete, the logical order in which tasks should be performed is defined. This is done by first determining task dependencies at the lowest level of the work breakdown structure. These dependency relationships can be portrayed in the form of a project network diagram. An example of task dependencies is shown in Figure 3.
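The dependency relationships described above can be sketched as a directed graph, and a topological sort then yields an order in which no task starts before its prerequisites finish. The specific dependencies below are illustrative (they follow the detail-task IDs of the example project but are not read from Figure 3).

```python
from collections import deque

# Sketch: task dependencies as a directed graph, with a topological
# ordering derived via Kahn's algorithm. Dependencies are illustrative.
deps = {
    "3.1": [],              # identify alternatives
    "3.2": [],              # prioritize requirements
    "3.3": ["3.1", "3.2"],  # evaluating alternatives needs both inputs
    "3.4": ["3.3"],         # calculate scores
    "4.1": ["3.4"],         # choose best alternative
}

def topological_order(deps):
    """Repeatedly emit tasks whose prerequisites are all complete."""
    remaining = {t: set(p) for t, p in deps.items()}
    ready = deque(t for t, p in remaining.items() if not p)
    order = []
    while ready:
        task = ready.popleft()
        order.append(task)
        for t, p in remaining.items():
            if task in p:
                p.remove(task)
                if not p and t not in order and t not in ready:
                    ready.append(t)
    return order

print(topological_order(deps))  # ['3.1', '3.2', '3.3', '3.4', '4.1']
```

The same graph is what a project network diagram draws; the sort simply makes the "logical order" of the text explicit.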
5.3.4.
Assign Personnel to Tasks
Each task must be assigned personnel resources to perform the work. The steps fo
r assigning personnel to tasks include:
Software Selection
Level 1
(e.g., Analysis)
Level 1
(e.g., Coordination)
Level 1
(e.g., training)
Phases
Level 2 (e.g., Define)
Level 2 (e.g., Develop)
Level 2 (e.g., Select)
Activities
Tasks
Level 3 (e.g., Choose best alternative) Level 3 (e.g., Assess consequences) Leve
l 3 (e.g., Final Selection)
Figure 1 Partial Work Breakdown Structure.
- List the skills required to complete each task.
- Identify candidate team members whose skills meet the task requirements.
- Develop a rough estimate of the time that will be required to complete each task, based on the experience of the candidates.
- Negotiate roles and responsibilities of candidate team members relative to each task.
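The first two steps above (listing required skills, then identifying candidates whose skills meet them) reduce to a subset check between skill sets. A minimal sketch follows; the skill tags and the candidate names are hypothetical.

```python
# Sketch: matching candidate team members to tasks by required skills.
# Skill tags and names are hypothetical, for illustration only.
task_skills = {
    "Define system requirements": {"interviewing", "business analysis"},
    "Develop evaluation criteria": {"business analysis", "scoring models"},
}
candidates = {
    "Jim B.": {"interviewing", "business analysis"},
    "Mary P.": {"scoring models", "business analysis"},
}

def qualified(task, task_skills, candidates):
    """Return candidates whose skill set covers the task's requirements."""
    need = task_skills[task]
    return sorted(name for name, has in candidates.items() if need <= has)

print(qualified("Define system requirements", task_skills, candidates))
# ['Jim B.']
```

Time estimates and role negotiation (the remaining steps) are judgment calls layered on top of this mechanical filter.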
[Figure 2 Detail Task and Deliverables List for the Software Selection and Implementation project (project manager Joan Ryan, prepared 5/31/2000). It lists detail tasks 2.0 ("Define system requirements") through 5.2 ("Develop detail tasks to be performed within each activity"), each with a task owner (e.g., Jim B., Mary P., Joan R.) and a deliverable (e.g., business system objectives documentation, performance objectives documentation).]
Project Name: Software Selection and Implementation
Project Manager: Joan Ryan
Date Prepared: 5/31/2000

Table of Time Estimates and Schedule of Tasks

ID   Detail Task                                                   Task Owner   Effort (hours)   Duration (weeks)
2.0  Define system requirements
2.1  Define and document business system objectives                Jim B.       40               4
2.2  Document performance objectives                               Mary P.      8                1
2.3  Define and document anticipated benefits                      Joan R.      4                1
3.0  Develop evaluation criteria
3.1  Identify alternative systems or supplements                   Bob S.       16               1
3.2  Prioritize requirements according to importance               Guy R.       8                1
3.3  Evaluate each alternative against the absolute requirements   Marie S.     32               3
3.4  Calculate scores for each alternative                         Bob S.       32               3
3.5  Assess each alternative's functions against requirements      Henry B.     16               3
4.0  Select an alternative
4.1  Choose the best alternative as a tentative decision           Guy R.       8                2
4.2  Assess adverse consequences and decide if an alternative
     should be selected                                            Marie S.     16               2
4.3  Make final selection                                          Team         4                1
5.0  Develop implementation project plan
5.1  Meet with project team members                                Team         40               1
5.2  Develop detail tasks to be performed within each activity     Wendy L.     32               2

Figure 4 Task Time Estimates.
tasks simultaneously). It may reveal resources that are overcommitted or underutilized. It helps the project manager to determine whether tasks need to be rescheduled, work reprioritized, or additional time or resources renegotiated. The workplan is balanced when all appropriate resources are confirmed and an acceptable completion date is determined. The preliminary schedule, resource availability, and required project-completion date all need to be brought into balance.
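Detecting overcommitted resources, as described above, amounts to summing each person's concurrent assignments per period and comparing the total against capacity. A minimal sketch follows; the 40-hour weekly capacity and the assignments are illustrative assumptions.

```python
# Sketch: a simple resource-loading check for workplan balancing.
# Capacity and assignments are illustrative assumptions.
CAPACITY = 40  # hours per person per week (an assumption)

# (person, week, hours) assignments for concurrent tasks
assignments = [
    ("Bob S.", 1, 30), ("Bob S.", 1, 16),   # two tasks in the same week
    ("Mary P.", 1, 24),
    ("Bob S.", 2, 32),
]

def overcommitted(assignments, capacity=CAPACITY):
    """Return (person, week) pairs whose summed load exceeds capacity."""
    load = {}
    for person, week, hours in assignments:
        load[(person, week)] = load.get((person, week), 0) + hours
    return {k: v for k, v in load.items() if v > capacity}

print(overcommitted(assignments))  # {('Bob S.', 1): 46}
```

Flagged pairs are the candidates for rescheduling, reprioritization, or renegotiated resources mentioned in the text.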
TABLE 2 Schedule-Compression Options

Compression Option: Overlap tasks by using partial dependencies.
Caveat: If resource loading indicates there are enough resources available; if the interim deliverable is sufficiently complete.

Compression Option: Break dependencies and resequence tasks.
Caveat: If the associated risk is acceptable.

Compression Option: Break tasks into subtasks that can be done in parallel.
Caveat: If resource loading indicates there are enough resources available; if the task is resource driven.

Compression Option: Reallocate resources from paths with float to the critical path.
Caveat: If the resources have the correct skills and available time; if the noncritical path(s) have not become critical.

Compression Option: Authorize overtime, add shifts, increase staffing, subcontract jobs.
Caveat: If there is approval for the additional budget expense.

Compression Option: Remove obstacles.
Caveat: If priority is high enough.

Compression Option: Reduce project scope.
Caveat: If the project sponsor approves.
[Figure 5 Personnel Costs: each person's hours per task are totaled and multiplied by his or her billing rate: Joan R., 31 hours at $365/hr = $11,315; Mary P., 180 hours at $215/hr = $38,700; Bob S., 256 hours at $150/hr = $38,400; total personnel budget $88,415.]

Figure 5 Personnel Costs.
calculate his or her direct project cost. The costs for all personnel assigned to the project are then totaled to determine the project's personnel budget. Figure 5 provides an example of how personnel costs might be determined.
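The hours-times-rate calculation behind Figure 5 can be sketched directly; the hours and billing rates below are the totals shown in the figure.

```python
# Sketch: computing the personnel budget as in Figure 5 -- total hours
# per person multiplied by that person's billing rate, then summed.
rates = {"Joan R.": 365, "Mary P.": 215, "Bob S.": 150}   # $/hr
hours = {"Joan R.": 31, "Mary P.": 180, "Bob S.": 256}    # total hours

costs = {name: hours[name] * rates[name] for name in rates}
personnel_budget = sum(costs.values())
print(costs)             # {'Joan R.': 11315, 'Mary P.': 38700, 'Bob S.': 38400}
print(personnel_budget)  # 88415
```

This reproduces the $88,415 personnel budget that the later budget figures build on.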
5.4.2.
Add Support, Overhead, and Contingency Factors
Support tasks and overhead should be included in the detail workplan to account for their impact on project cost and duration. Support refers to all those tasks that facilitate production of the deliverables through better communication, performance, or management: for example, project-related training, meetings, administration, project and team management, report production, and quality assurance reviews. Overhead is nonproductive time spent on tasks that do not support execution of the project workplan or production of the deliverables but can have considerable impact on the project schedule, the resource loading, and potentially the budget. Overhead could include travel time, holidays, vacation, professional development, or personal/sick time. Nonpersonnel costs associated with the project are identified and estimated. Such costs may include travel expense, technology/knowledge acquisition, and contractor assistance. Finally, contingency factors are considered to compensate for project risks and other potential project issues as well as to accommodate personnel learning curves. Contingency factors may be applied at the phase or activity level of the workplan/budget, although accuracy may be improved if they are applied at the detail task level. Figure 6 extends the example from Figure 5 with nonpersonnel costs to arrive at a total project cost.
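The cost rollup of Figure 6 (personnel plus nonpersonnel costs, with a 10% contingency) can be reproduced in a few lines. Note that an exact 10% contingency on the subtotal computes to $14,541.50; the figure in the text rounds this to $14,540.

```python
# Sketch: rolling up total project cost as in Figure 6. Dollar figures
# are those of the chapter's example.
personnel = 88_415
nonpersonnel = {
    "administration support": 15_000,
    "overhead": 30_000,
    "contractor": 12_000,
}

subtotal = personnel + sum(nonpersonnel.values())
contingency = 0.10 * subtotal        # the figure rounds this to $14,540
total = subtotal + contingency
print(subtotal)                      # 145415
print(round(contingency, 2))         # 14541.5
```

Applying contingency at a finer level (per task rather than per project), as the text suggests, would simply move the multiplication inside the per-task loop.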
[Figure 6 Total Project Cost for the Software Selection and Implementation project (project manager Joan Ryan, prepared 5/31/2000). Hours per task are shown for Joan R. ($365/hr, 31 hours, $11,315), Mary P. ($215/hr, 180 hours, $38,700), and Bob S. ($150/hr, 256 hours, $38,400), giving a total personnel budget of $88,415. Adding administration support ($15,000), overhead ($30,000), and contractor costs ($12,000) yields a subtotal of $145,415; a 10% contingency ($14,540) brings the total Phase I project cost to $159,955.]

Figure 6 Total Project Cost.
Project Name: Software Selection and Implementation
[In Figure 7, each task's budget from Figure 6 is spread across the months February through May; the task totals sum to the $88,415 personnel budget.]
Project Manager: Joan Ryan
Time-phased Budget
ID Detail Task List
2.0 Define system requirements
2.1 Define and document business system objectives
2.2 Document performance objectives
2.3 Define and document anticipated benefits
3.0 Develop Evaluation Criteria
3.1 Identify alternative systems or supplements
3.2 Prioritize requirements according to importance
3.3 Evaluate each alternative against the absolute requirements
3.4 Calculate scores for each alternative
3.5 Assess each alternatives functions against requirements
4.0 Select an Alternative
4.1 Choose the best alternative as a tentative decision
4.2 Assess adverse consequences and decide if an alternative should be selected
4.3 Make final selection
5.0 Develop Implementation Project Plan
5.1 Meet with project team members
5.2 Develop detail tasks to be performed within each activity
Totals
Figure 7 Time-Phased Budget.
5.4.3.
Compile and Reconcile the Project Budget
The budget is compiled by adding personnel costs and all other costs (including contingencies) to arrive at a total budget number. The budget is subdivided into time increments (weekly, biweekly, or monthly) for the expected life of the project, based on the expected allocation of resources and related costs to the time periods in the project schedule. An example of a time-phased budget is shown in Figure 7. Because the budget is a projection of project costs, it is based on many assumptions. The compiled budget should be accompanied by a statement of the assumptions made regarding schedules, resource availability, overhead, contingency factors, nonpersonnel costs, and the like. If the project budget is materially different from the high-level cost estimate in the project definition, a reconciliation process may need to occur. This may result in the need to rework/rebalance the project definition, the workplan, and the budget before the sponsor will approve execution of the project.
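Subdividing a budget into time increments, as described above, can be sketched by spreading each task's cost in proportion to the periods it spans. The task amount and week counts below are illustrative, not taken from Figure 7.

```python
# Sketch: time-phasing a task budget across months in proportion to the
# weeks worked in each month. Figures are illustrative.
def time_phase(total_cost, weeks_per_month):
    """Spread total_cost across months proportionally to weeks worked."""
    total_weeks = sum(weeks_per_month.values())
    return {month: round(total_cost * weeks / total_weeks, 2)
            for month, weeks in weeks_per_month.items()}

# A $10,900 task spanning 3 weeks in February and 1 week in March:
print(time_phase(10_900, {"Feb": 3, "Mar": 1}))
# {'Feb': 8175.0, 'Mar': 2725.0}
```

Real time-phasing would follow the resource allocation in the schedule rather than a straight proration, but the mechanics are the same.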
6.
PHASE III: PROJECT MONITORING AND CONTROL
The project workplan and budget (see Section 5) are used as the basis for project monitoring and control. The three main purposes of project monitoring and control are to:
1. Manage the project within the constraints of budget, time, and resources.
2. Manage changes that will occur.
3. Manage communications and expectations.
Project monitoring helps the project manager balance constraints, anticipate/identify changes, and understand expectations. A well-designed project monitoring process provides:
- Timely information regarding actual vs. planned results
- An early warning of potential project problems
- A basis for assessing the impact of changes
- An objective basis for project decision making
Project monitoring is also used to set up ongoing channels of communication among project stakeholders. The major deliverables are project progress reports and status updates; detailed workplans (updated as necessary); and cost and schedule performance reports.
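The first two monitoring outputs above (actual-vs-planned comparison and early warning) can be sketched as a simple variance check. The 10% threshold and the cost figures are illustrative assumptions, not from the handbook.

```python
# Sketch: an early-warning check comparing actual to planned cost.
# Threshold and figures are illustrative assumptions.
THRESHOLD = 0.10  # flag variances beyond +/-10% of plan

status = {  # task ID: (planned cost, actual cost to date)
    "2.1": (10_900, 12_400),
    "2.2": (3_285, 3_300),
}

def warnings(status, threshold=THRESHOLD):
    """Return tasks whose cost variance exceeds the threshold."""
    flagged = {}
    for task, (planned, actual) in status.items():
        variance = (actual - planned) / planned
        if abs(variance) > threshold:
            flagged[task] = round(variance, 3)
    return flagged

print(warnings(status))  # {'2.1': 0.138}
```

The same comparison applied to dates instead of dollars gives the schedule half of the early-warning picture.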
6.1.
Organizing for Project Implementation
An important element of project monitoring and control is the organization that is put in place to support it. Typically the project-implementation structure consists of at least two entities: the project steering committee and the project office.
6.1.1.
The Project Steering Committee
The project steering committee is made up of the key stakeholders in the project. It is usually chaired by the project sponsor, and the membership is made up of individuals who are in a position to help move the project along when barriers are encountered or changes are necessary. The committee members are typically project supporters, although antagonists may also be included to ensure that their views are heard and, to the extent possible, accommodated. The project manager is often a member of the committee. The steering committee has a number of roles. It provides direction to the project; reviews deliverables, as appropriate; receives periodic reports regarding project progress, status, difficulties, and near-future activities; helps clear roadblocks as they occur; and provides final approval that the project has been completed satisfactorily.
6.1.2.
The Project Office
The project office is led by the project manager. Depending on the size and complexity of the project, the office may be staffed by appropriate technical and administrative personnel to provide assistance to the project manager. The primary role of the project office is to ensure that the project is proceeding according to plan and that the deliverables are of acceptable quality. This is accomplished by periodic (e.g., weekly or biweekly) review of project progress with respect to plan, as well as review of deliverables as they are produced. The project office maintains and updates the master project workplan and budget and routinely reports progress and difficulties to interested stakeholders, including the steering committee.
by the project office, and the overall project status report is presented by the project manager to the steering committee at its regular meeting. The report to the steering committee focuses on overall project status and specific issues or roadblocks that need to be cleared. A key role of the steering committee is to take the lead in resolving issues and clearing roadblocks so the project can proceed as planned.
6.3.
Project Control
Project control involves three main activities:
1. Identifying out-of-control conditions
2. Developing corrective actions
3. Following up on corrective action measures
6.3.1.
Identifying Out-of-Control Conditions
An activity or task is in danger of going out of control when its schedule or budget is about to be exceeded but the deliverable(s) are not complete. Adequate monitoring of project schedule and budget will provide an early warning that a potential problem exists. An activity or task that is behind schedule and is on the critical path requires immediate attention because it will impact the overall timetable for project completion, which ultimately will adversely impact the budget for the task, the activity, the phase, and the project. Oftentimes, exceeding the budget or missing the scheduled completion date for a particular task or activity may be an early warning sign that a significant problem
is developing for the project. Either of these signs requires an immediate investigation on the part of the project manager to determine the underlying reasons.
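The triage rule described above (a task that is behind schedule and on the critical path needs immediate attention) can be sketched as a simple filter; the task states below are illustrative.

```python
# Sketch: ranking attention for out-of-control conditions. A task that is
# behind schedule AND on the critical path is urgent; behind schedule off
# the critical path is a watch item. Task states are illustrative.
tasks = {
    "3.3": {"behind_schedule": True,  "on_critical_path": True},
    "3.5": {"behind_schedule": True,  "on_critical_path": False},
    "4.1": {"behind_schedule": False, "on_critical_path": True},
}

def triage(tasks):
    urgent = [t for t, s in tasks.items()
              if s["behind_schedule"] and s["on_critical_path"]]
    watch = [t for t, s in tasks.items()
             if s["behind_schedule"] and not s["on_critical_path"]]
    return urgent, watch

urgent, watch = triage(tasks)
print(urgent, watch)  # ['3.3'] ['3.5']
```

Watch items matter too: a noncritical task that slips far enough can consume its float and join the critical path.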
6.3.2.
Developing Corrective Actions
Once the project manager determines the cause of an overage in the budget or a slippage in the schedule, he or she must determine an appropriate response and develop a specific corrective action plan. Sometimes a problem is caused by an impediment that the project manager alone cannot resolve. In these instances, the project manager should engage the help of the steering committee. One of the responsibilities of the steering committee is to provide "air cover" for a project manager when he or she encounters complex difficulties. In other cases, the impediment may have been anticipated and a corrective action plan formulated as part of the project's risk-management plan. In these cases, all that may be required is to execute the corrective action specified in the plan.
6.3.3.
Following up on Corrective Action Measures
To ensure that the desired outcome of the corrective action is being achieved, it is important to employ project monitoring techniques when executing a corrective action plan. The "walking-the-project" technique mentioned earlier is an example of an effective follow-up technique. More complex corrective actions may require a more formal status-reporting approach.
7.
PHASE IV: PROJECT CLOSE
Successful completion of all the deliverables set out in the workplan does not, by itself, conclude the project. Several activities need to be accomplished before the project can be brought to a formal close, including:
- Project performance assessment and feedback
- Final status reporting
- Performance review of project team members
- Project documentation archiving
- Disbanding of the project organization
The primary purpose of the project close phase is to ensure that the expectations set throughout the project have been met.
7.1.
Project Performance Assessment and Feedback
Project performance should be assessed in a structured manner, addressing the extent to which the objectives set out in the project definition and workplan have been achieved. Obtaining objectivity requires that the client's views be considered in the assessment.
7.1.1.
Time, Cost, and Quality Performance
Time, cost, and quality performance are three key project parameters that should be subject to assessment. Time performance can be assessed by comparing the originally planned completion dates of deliverables, both interim and final, with the actual completion dates. Causes of any material schedule slippage should be determined and means for precluding them in future projects developed. A similar assessment of project cost can be conducted by comparing budgeted to actual expenditures and then examining any material favorable and unfavorable variances. Quality-performance assessment, however, tends to be less quantitative than time and cost assessment. It usually relies on solicited or unsolicited input from the client and other stakeholders regarding how they view the project deliverables (e.g., did they receive what they expected?). Quality performance can also relate to how well the project team has communicated with the stakeholders and perceptions of how well the project has been managed and conducted.
7.1.2.
Lessons Learned
The opportunity to identify and capture lessons learned from having done a particular project should not be missed. Lessons learned should be documented and any best-practice examples relating to the project captured. Documentation of lessons learned and best-practice examples should be made available to others in the organization who will be involved in future project design and execution efforts.
7.2.
Final Status Reporting
A final project status report should be prepared and issued to at least the project steering committee, including the sponsor. The report does not need to be extensive, but should include:

Project management problems typically don't manifest themselves until the project is well underway. But the basis for most problems is established early on, during the project definition and planning phases. In particular, unclear and poorly communicated statements of project objectives, scope, and deliverables will almost always result in a project workplan that, when implemented, does not fulfill the expectations of the client and other stakeholders. In the absence of clear project definition, the stakeholders will set their own expectations, which usually won't match those of the project manager and his or her team members. During project implementation, regular and clear communication between the project participants and the project manager, as well as between the project manager and the sponsor/steering committee, will help raise issues to the surface before they become time, cost, or quality problems.
8.2.
Tips for Avoiding Problems in Project Management
The following are some suggestions for avoiding problems on professional services projects:
- Invest time up front in carefully defining and communicating the project objectives, scope, and deliverables. This will save time and reduce frustration in later stages of the project.
- Know the subject matter of the project and stay within your professional skills. Seek help before you need it.
- Avoid overselling and overcommitting in the project definition. Include risk-reducing language when necessary and appropriate.
- Always be clear on project agreements and other matters. Do what you say you will do and stay within the parameters of the project definition.
- Be flexible. A project workplan documents the expected route and allows you to communicate the expected deliverables and path to the stakeholders. Make adjustments as necessary and use the plan and related expectations as the basis for explaining how the changes will affect the project.
- Effective project management is as critical as the tasks and milestones in the project. It keeps the team's efforts focused and aligned. Budget time for the project manager to perform the necessary project management tasks, including, but not limited to, frequent quality, schedule, and budget reviews. In large projects, consider the use of a project administrator to take responsibility for some of the crucial but time-consuming administrative tasks.
- Communicate informally and frequently with the client. Identify and communicate risks and problems as early as possible.
- Make sure the project is defined as closed at an appropriate time, typically when the objectives have been met, to avoid having the project drift on in an attempt to achieve perfection.
ADDITIONAL READING
Cleland, D. I., Project Management: Strategic Design and Implementation, 3rd Ed., McGraw-Hill, New York, 1999.
Duncan, W. R., A Guide to the Project Management Body of Knowledge, Project Management Institute, Newtown Square, PA, 1996.
KPMG Global Consulting, Engagement Conduct Guide, Version 1.0, KPMG International, Amstelveen, The Netherlands, 1999.
KPMG U.S., Project Management Reference Manual, KPMG U.S. Executive Office, Montvale, NJ, 1993.
Maister, D. H., Managing the Professional Service Firm, Free Press, New York, 1993.
IV.C
Manpower Resource Planning
Handbook of Industrial Engineering: Technology and Operations Management, Third
Edition. Edited by Gavriel Salvendy Copyright 2001 John Wiley & Sons, Inc.
2.1.
Criteria of Job Design
Table 1 gives the six ergonomic criteria for job design:
1. Safety is first. A job design that endangers the worker's safety or health is not acceptable. However, life does not have infinite value. Management must take reasonable precautions. Naturally, the definition of "reasonable" is debatable. After designing for safety, design for performance, then worker comfort, and, finally, worker higher wants (social, ego, and self-actualization).
2. Make the machine user-friendly. Make the machine adjust to the worker, not the worker to the machine. If the system does not function well, redesign the machine or procedure rather than blame the operator.
3. Reduce the percentage excluded by the design. Maximize the number of people who can use the machine or procedure. As mandated by the Americans with Disabilities Act, gender, age, strength, and other personal attributes should not prevent people from using the design.
4. Design jobs to be cognitive and social. Physical and procedural jobs now can be done by machines, especially in the developed countries. Over 50% of all jobs in the developed countries are in offices.
5. Emphasize communication. We communicate with machines through controls; machines communicate to us by displays. Improved communication among people reduces errors and thus improves productivity.
6. Use machines to extend human performance. The choice is not whether to assign a task to a person or to a machine; it is which machine to use. Small machines (such as word processors and electric drills) tend to have costs (total of capital, maintenance, and power) of less than $0.25/hr. Even large machines (such as lathes and automobiles) tend to cost only $1-2/hr. Labor cost (including fringes) tends to be a minimum of $10/hr, with $15, $20, or higher quite common. What is your wage rate (including fringes) per hour? Consider machines as "slaves." The real question then becomes how many slaves the human will supervise and how to design the system to use the output of the slaves.
2.2.
Organization of Groups of Workstations
This section discusses the organization of workstations (the big picture), while Section 3 discusses the workstation itself. Table 2 gives seven guidelines:
1. Use specialization even though it sacrifices versatility. Specialization is the key to progress. Seek the simplicity of specialization; distrust it thereafter, but first seek it. Use special-purpose equipment, material, labor, and organization. Special-purpose equipment does only one job but does it very well. For example, at McDonald's a special-purpose grill cooks the hamburger on both sides simultaneously. Special-purpose material typically trades off higher material cost vs. greater capability. Labor specialization improves both quality and quantity. More experience, combined with specialized tools, should yield both higher quality and higher quantity. The challenge is that this specialization gives monotonous work; with low pay, it is difficult to find workers. With high pay, finding employees is not a problem.
TABLE 1 Ergonomic Job-Design Criteria
1. Safety is first.
2. Make the machine user-friendly.
3. Reduce the percentage excluded by the design.
4. Design jobs to be cognitive and social.
5. Emphasize communication.
6. Use machines to extend human performance.
From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.
Reduce the number of trips by moving less often. Consider replacing transportation with communication (e.g., mail vs. e-mail). Fixed cost/trip includes information transfer; thus, consider computerization. Much transportation (and communication) is distance insensitive. Twice the distance does not cost twice as much. This affects plant layout because closer (or farther) locations make little difference. Variable cost/distance should consider energy and labor costs. Distance/trip may be irrelevant, especially if a "bus" system can replace a "taxi" system.
4. Decouple tasks. Figure 1 shows different types of flow lines. Table 4 shows some alternatives for conveyor movement, work location, and operator posture. Assuming progressive assembly, the elements are divided among the line's stations. In most cases, the mean amount of work time allocated to each workstation is not equal. But since
Figure 1 Flow Lines Can Be Operation-Only, Order-Picking, Assembly, or Disassembly Lines. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Standing. The veins store the body's blood. If the legs don't move, blood from the heart tends to go down to the legs and stay there (venous pooling). This causes more work for the heart because, for a constant blood supply, when blood per beat is lower, there are more
Figure 2 Vary Environmental Stimulation by Adjusting Orientation in Relation to Other People, Changing Distance, and Using Barriers Such as Equipment between Stations. The highest stimulation is 1; the lowest is 7. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
beats. Venous pooling causes swelling of the legs, edema, and varicose veins. After standing, walking about 10 steps changes ankle blood pressure to about 48 mm Hg (the same level as for sitting). Thus, standing jobs should be designed to have some leg movements. Consider a bar rail (to encourage leg movement while standing) and remote storage of supplies (to encourage walking).

Sitting. Sitting is discussed in workstation guideline 4.

Head / neck. The head weighs about 7.3% of body weight; for a 90 kg person, that is 6.6 kg (about the same as a bowling ball). Neck problems occur if the head leans forward or backward on the neck. Forward tilt occurs when the person reduces the distance to an object to improve visibility (inspection, fine assembly, VDT work); consider better lighting. Backward tilt may occur for people using bifocals at VDT workstations; consider single-vision glasses (work glasses). If the hands are elevated, head- or workstation-mounted magnifiers permit lowering the hands.
Figure 3 Normal Male Work Area for Right Hand. The left-hand area is mirrored. (From S. Konz and S. Goel, "The Shape of the Normal Work Area in the Horizontal Plane." Reprinted with the permission of the Institute of Industrial Engineers, 25 Technology Park, Norcross, GA 30092, 770-449-0461. Copyright 1969)
The feet can take some of the load off the hands in simple control tasks such as automobile pedals or on / off switches. For human-generated power, the legs have about three times the power of the arms.

6. Use gravity; don't oppose it. Consider the weight of the body and the weight of the work. The weight of arm plus hand is typically about 4.9% of body weight; thus, keep arm motions horizontal or downward. Gravity also can be a fixture; consider painting the floor vs. the ceiling, welding below you vs. above you, and so on. Gravity also can be used for transport (chutes, wheel conveyors).

7. Conserve momentum. Avoid the energy and time penalties of acceleration and deceleration motions. Avoid changes of direction in stirring, polishing, grasping, transport, and disposal motions. For example, using the MTM predetermined time system, an 18 in. (450 mm) move to toss aside a part is an M18E and an RL1, a total of 0.63 sec. A precise placement would require an M18B, a P1SE, and an RL1, a total of 0.89 sec, an increase of 42%.
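The MTM comparison above can be checked with a few lines. The element times are the 0.63 and 0.89 sec values quoted in the text; the computed penalty rounds to 41%, in line with the roughly 42% quoted.

```python
# Element times (sec) quoted in the text for an 18 in. (450 mm) move.
toss_aside = 0.63     # M18E + RL1: move and toss the part aside
precise_place = 0.89  # M18B + P1SE + RL1: move, position precisely, release

# Relative time penalty of the precise placement over the toss-aside.
increase = (precise_place - toss_aside) / toss_aside
print(f"{increase:.0%}")
```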
Figure 4 Normal Female Work Area for Right Hand. The left-hand area is mirrored. (From S. Konz and S. Goel, "The Shape of the Normal Work Area in the Horizontal Plane." Reprinted with the permission of the Institute of Industrial Engineers, 25 Technology Park, Norcross, GA 30092, 770-449-0461. Copyright 1969)
METHODS ENGINEERING
1361
Figure 5 Approximate Reach Distances for Average U.S. Male and Female Workers. (From V. Putz-Anderson, Ed., Cumulative Trauma Disorders, copyright 1988 Taylor & Francis Books Ltd., by permission)
8. Use two-hand motions rather than one-hand motions. For movement of the hand / arm, two hands are better than one. Using two hands takes more time and effort, but more is produced, so cost / unit is lower. When one hand (generally the left) is acting as a clamp, consider it idle and use a mechanical clamp instead.

9. Use parallel motions for eye control of two-hand motions. When the two hands are both moving, minimize the distance between the hands rather than making the arm motions symmetrical about the body centerline.
10. Use rowing motions for two-hand motions. A rowing motion is superior to an alternating motion because there is less movement of the shoulder and torso.

11. Pivot motions about the elbow. For a specific distance, minimum time is taken by pivoting about the elbow. It takes about 15% more time to reach across the body (it also takes more energy, since the upper arm is moved as well as the lower arm).

12. Use the preferred hand. The preferred hand is 5-10% faster and stronger than the nonpreferred hand. Unfortunately, since it is used more, it is more likely to have cumulative trauma. About 10% of the population uses the left hand as the preferred hand.

13. Keep arm motions in the normal work area. Figure 3 shows the area for males; Figure 4 shows the area for females. These are the distances to the end of the thumb with no assistance from movement of the back. Closer is better. Figure 5 shows reach distance above and below the horizontal plane.

14. Let the small woman reach; let the large man fit. The concept is to permit most of the user population to use the design. Alternative statements are "Exclude few," "Include many," and "Design for the tall; accommodate the small." What percentage should be excluded? The Ford-UAW design guide excludes 5% of women for reach, 5% of men for clearance, and 0% of men and women for safety. Three alternatives are (1) one size fits all, (2) multiple sizes, and (3) adjustability. Examples of one size fits all are a tall door and a big bed. An example of multiple sizes is clothing. An example of adjustability is an automobile seat.
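Guideline 14 can be sketched numerically. Assuming normally distributed body dimensions, "let the small woman reach" means designing reach at the 5th percentile of women, and "let the large man fit" means designing clearance at the 95th percentile of men. The mean and standard deviation values below are illustrative assumptions, not data from the text.

```python
from statistics import NormalDist

# Hypothetical forward-reach distribution for women (mm).
female_reach = NormalDist(mu=710, sigma=40)
# Hypothetical shoulder-breadth distribution for men (mm), for clearance.
male_breadth = NormalDist(mu=490, sigma=25)

# Let the small woman reach: place controls within the 5th percentile reach.
design_reach = female_reach.inv_cdf(0.05)
# Let the large man fit: make openings at least the 95th percentile breadth.
design_clearance = male_breadth.inv_cdf(0.95)

print(f"Maximum reach distance: {design_reach:.0f} mm")
print(f"Minimum clearance:      {design_clearance:.0f} mm")
```

Each cutoff excludes 5% of the respective population, matching the Ford-UAW exclusion percentages quoted above.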
2.4. Musculoskeletal Disorders

2.4.1. Risk Factors
Safety concerns the short-term (time frame of seconds) effects of physical agents on the body; an example is cutting of a finger. Toxicology generally deals with the long-term (years, decades) effects of chemicals on body organs; an example is exposure to acetone for 10 years, causing damage to the central nervous system. Musculoskeletal disorders concern the intermediate-term (months, years) effects of body activity upon the nerves, muscles, joints, and ligaments; an example is back pain due to lifting. Nerves supply the communication within the body. Muscles control movements of various bones. Ligaments (strong, ropelike fibers) connect bones together. The three main occupational risk factors are repetition / duration, joint deviation, and force. Vibration is also an important risk factor. Nonoccupational risk factors can be trauma outside work or a nonperfect body. The lack of perfection can be anatomical (weak back muscles or a weak arm due to an injury) or physiological (diabetes, insufficient hormones). Problem jobs can be identified through (1) records / statistics of the medical / safety department, (2) operator discomfort (e.g., Figure 6), (3) interviews with operators, and (4) expert opinion (perhaps using checklists such as Table 6, Table 7, and Table 8).
2.4.2. Solutions

Decrease repetition / duration, joint deviation, force, and vibration. In general, a job is considered repetitive if the basic (fundamental) cycle time is less than 30 sec (for hand / wrist motions) or less than several minutes (for back / shoulder motions). However, if the job is done only for a short time (say, 15 min / shift), there is relatively low risk of cumulative trauma due to the short duration. Consider short duration as under 1 hr / shift, moderate as 1 to 2 hr, and long as over 2 hr. Thus, repetition really concerns the repetitions / shift. Duration also assumes weeks, months, and years of repeated activity, not just a couple of days. Ideally, the joint should operate at the neutral position, that is, minimum joint deviation. Force on a joint typically is multiplied by a lever arm (i.e., we are really talking about a torque). Reduce (1) the magnitude of the force, (2) the lever arm, and (3) the length of time the force is applied. Vibration typically comes from a powered handtool. Vibration increases force because operators tend to grip a vibrating tool more tightly. Solutions are divided into engineering and administrative.

2.4.2.1. Engineering Solutions  The first possible approach is automation, that is, eliminate the person. No person means no possible injury.
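The repetition and duration rules of thumb above can be sketched as a small classifier. The 30 sec hand / wrist threshold and the 1 hr and 2 hr duration cutoffs follow the text; the 180 sec value for "several minutes" of back / shoulder work is my assumption.

```python
def repetitive(cycle_time_sec: float, body_region: str) -> bool:
    """Is the basic (fundamental) cycle short enough to count as repetitive?"""
    if body_region == "hand/wrist":
        return cycle_time_sec < 30       # threshold from the text
    if body_region == "back/shoulder":
        return cycle_time_sec < 180      # "several minutes" -- assumed 3 min
    raise ValueError(f"unknown body region: {body_region}")

def duration_category(hours_per_shift: float) -> str:
    """Short / moderate / long exposure, per the text's cutoffs."""
    if hours_per_shift < 1:
        return "short"
    if hours_per_shift <= 2:
        return "moderate"
    return "long"

# A 12 sec hand/wrist cycle performed 6 hr/shift: repetitive, long duration.
print(repetitive(12, "hand/wrist"), duration_category(6))
```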
Figure 6 Body Discomfort Map. Maps can have more or less detail; this map includes the foot, but other maps may emphasize the arm or hand. The worker gives the amount of discomfort at any location. One discomfort scale is the Borg category scale (CR-10) shown in the figure. The CR-10 scale gives 0.5 for "extremely weak" and 10 for "extremely strong"; people are permitted to go below 0.5 and above 10. Body discomfort maps are a popular technique for getting quantified input from operators. (From Corlett and Bishop 1976)
A second possible approach is to reduce the number of cycles or the difficulty of the cycle. Two alternatives are mechanization and job enlargement. In mechanization, the operator is still present but a machine does part of the work. Two examples are electric scissors (vs. manual scissors) and a bar code scanner (instead of information being keyed). In job enlargement, the same motions are done, but by a larger number of people. If the job takes four minutes instead of two, the repetitive motions per person per day are reduced.

The third approach is to minimize joint deviation. One guide is "Don't bend your wrist." For example, place keyboards low to minimize backward bending of the hand. Another guide is "Don't lift your elbow." An example was spray-painting the side of a truck by workers standing on the floor. The job was modified to have the workers stand on a platform, thus painting horizontally and downward. (This had the additional benefit of less paint settling back into the painters' faces.)

The fourth approach is to minimize force duration and amount. Consider an ergonomic pen (large grip diameter with a high-friction grip) to reduce gripping force. Can a clamp eliminate holding by a hand? Can a balancer support the tool weight? Can sliding replace lifting? To reduce vibration, use tools that vibrate less; avoid amplified vibration (resonance). Maintain the equipment to reduce its vibration. Minimize gripping force (e.g., support the tool with a balancer or a steady rest). Keep the hands warm and dry; avoid smoking.

2.4.2.2. Administrative Solutions  Administrative solutions (such as job rotation and part-time workers) seek to reduce exposure or increase operators' ability to endure stress (exercise, stress reduction, and supports). In job rotation, people rotate jobs periodically during the shift. The concept is working rest: a specific part of the body rests while another part is working. Job rotation requires cross-trained workers (able to do more than one job), which allows more flexible scheduling; there is perceived fairness because everyone shares good and bad jobs. Part-time workers reduce musculoskeletal disorders / person. Some other advantages are lower wage cost / hr, better fit to fluctuating demand, and (possibly) higher-quality people. (It is difficult to hire high-quality full-time people for repetitive low-paying jobs.) Some disadvantages are less time on the job (and thus less experience), possible moonlighting with other employers, and a high cost of hiring and training.
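The job-rotation idea above can be sketched as a simple round-robin schedule: a cross-trained crew moves one station forward each period, so each body region gets working rest. The crew names, station names, and rotation period are illustrative assumptions.

```python
from collections import deque

def rotation_schedule(workers, stations, periods):
    """Rotate workers one station forward each period (round-robin)."""
    ring = deque(workers)
    schedule = []
    for _ in range(periods):
        schedule.append(dict(zip(stations, ring)))
        ring.rotate(1)  # everyone moves to the next station
    return schedule

crew = ["Ana", "Ben", "Cho", "Dee"]
jobs = ["load pallet", "pack cases", "label", "inspect"]
# With 4 workers, 4 stations, and 4 periods (say, one rotation every 2 hr),
# each worker does each job exactly once per shift.
for period, assignment in enumerate(rotation_schedule(crew, jobs, 4), 1):
    print(period, assignment)
```

Because every worker cycles through every station, the schedule also gives the perceived fairness the text mentions: good and bad jobs are shared equally.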
[Fragment of a scoring table from S. A. Rodgers, "Functional job analysis technique," Occupational Medicine: State of the Art Reviews, Vol. 7, copyright 1992, Hanley & Belfus; reprinted by permission. Body parts (legs / knees, ankles / feet / toes; right and left) are scored by continuous effort time (up to 6 sec, 6 to 20 sec, over 20 sec), giving three-digit codes. Priority for change: 332, 331, 323 = very high; 322, 321, 313 = high; 223, 312 = moderate; 232, 231, 222, 213, 132, 123 = lower.]
Considerations include fatigue, covering absentees, overtime, limiting toxic exposure, and possible moonlighting when workers have large blocks of leisure time. Make the schedule simple and predictable. People want to be able to plan their personal lives. Make work schedules predictable; publicly post them in advance so people can plan. Thirty days in advance is a good policy.

Source: Konz and Johnson 2000, from Knauth 1993.
(e.g., talk radio), (3) varying the visual environment (e.g., windows with a view), and (4) varying the climate (changing air temperature, velocity).

3. Minimize the fatigue dose. The problem is that the fatigue dose becomes too great to overcome easily. Two aspects are intensity and work schedule. Reduce high intensity levels with good ergonomic practices; for example, use machines and devices to reduce hold and carry activities. Static work (holding) is especially stressful. The effect of fatigue increases exponentially with time. Thus, it is important to get a rest before the fatigue becomes too high. Do not permit workers to skip their scheduled break. For very high-intensity work (e.g., sorting express packages), use part-time workers.

4. Use work breaks. The problem with a conventional break is that there is no productivity during the break. A solution is to use a different part of the body to work while resting the fatigued part. If a machine is semiautomatic, the worker may be able to rest during the automatic part of the cycle (machine time). Another alternative is job rotation (the worker periodically shifts tasks). Fatigue recovery is best if the alternative work uses a distinctly different part of the body; for example, loading / unloading a truck vs. driving it, or word processing vs. answering a telephone. Not quite as good, but still beneficial, is alternating similar work, as there would be differences in body posture, force requirements, and mental activity. For example, an assembly team might rotate jobs every 30 minutes. In a warehouse, for half a shift, workers might pick cases from a pallet to a conveyor, and during the second half, they would switch with the people unloading the conveyor to trucks. Job rotation also reduces the feeling of inequity because everyone shares the good and bad jobs. However, it does require cross-trained people (able to do multiple jobs); this in turn increases scheduling flexibility.

5. Use frequent short breaks. The problem is how to divide break time. The key to the solution is that fatigue recovery is exponential. If recovery is complete in 30 minutes, it takes only 2 minutes to drop from 100% fatigue to 75% fatigue; it takes 21 minutes to drop from 25% fatigue to no fatigue. Thus, give break time in small segments. Some production is lost for each break. Reduce this loss by not turning the machine off and on, taking the break near the machine, and so on.

6. Maximize the recovery rate. The problem is recovering as quickly as possible. In technical terms, reduce the fatigue half-life. For environmental stressors, reduce contact with the stressor: use a cool recovery area to recover from heat, a quiet area to recover from noise, no vibration to recover from vibration. For muscle stressors, it helps to have a good circulation system (to be in good shape). Active rest seems better than passive rest. The active rest may be just walking to the coffee area (blood circulation in the legs improves dramatically within 20 steps). Another technique to get active rest is to have the operator do the material handling for the workstation (obtain supplies, dispose of finished components).

7. Increase the recovery / work ratio. The problem is insufficient time to recover. The solution is to increase recovery time or decrease work time. For example, if a specific joint is used 8 hr / day, there are 16 hr to recover: 2 hr of recovery / 1 hr of work. If the work of the two arms is alternated so that one arm is used 4 hr / day, there are 20 hr to recover: 5 hr of recovery / 1 hr of work. Overtime, moonlighting, or 12 hr shifts can cause problems. Working 12 hr / day gives 12 hr of recovery, so there is 1 hr of recovery / 1 hr of work.
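The exponential-recovery argument behind frequent short breaks can be sketched as follows. The time constant is chosen (an assumption) so that fatigue falls from 100% to 75% in 2 min, matching the text; taking "no fatigue" as about 1% residual then reproduces a 25%-to-zero time close to the 21 min quoted.

```python
import math

# Time constant chosen so that 100% -> 75% fatigue takes 2 min.
TAU = 2 / math.log(100 / 75)   # about 6.95 min

def time_to_recover(start_pct: float, end_pct: float) -> float:
    """Minutes of rest to drop from start_pct to end_pct fatigue,
    assuming exponential decay with time constant TAU."""
    return TAU * math.log(start_pct / end_pct)

print(f"100% -> 75%: {time_to_recover(100, 75):.1f} min")
print(f" 25% ->  1%: {time_to_recover(25, 1):.1f} min")
```

The first minutes of a break buy far more recovery than the last minutes, which is exactly why small, frequent break segments beat one long break.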
2.6. Error-Reduction Guidelines

Harry and Schroeder (2000) describe the importance, from a business viewpoint, of joining the "cult of perfectability." Table 12 gives 10 general guidelines to reduce errors. They are organized into three categories: planning, execution, and allowing for error.

1. Get enough information. Getting enough information calls for (1) generating / collecting the relevant information and (2) ensuring that the user receives the information. Generating the information can be aided by computerization. But the information in the computer needs to be correct! Warnings (e.g., an auditory alarm when a truck backs up) can generate information; unfortunately, not all warnings are followed. Ensure information reception: just because you put a message in a person's mailbox does not mean the message is received.
Skill has both mental and physical aspects. Skill or knowledge can be memorized, but an alternative is a job aid (such as information stored in a computer). Practice is important because rookies make more mistakes than old pros; that's why sports teams practice so much.

4. Don't forget. Two approaches to avoid forgetting are to reduce the need to remember and to use memory aids. Avoid verbal orders: they leave no reference to refresh the memory. If you receive a verbal order, write it down to create a database for future reference. A list is a popular memory aid; the many electronic organizers being sold attest to the need for making lists. Memory aids include more than that, however. Forms not only include information but also indicate what information is needed. Patterns aid memory. For example, it is easier to remember that a meeting is on the first Friday of each month than to remember the 12 individual days. At McDonald's, orders are filled in a standard sequence; next time you are there, see if you can determine the standard sequence. Other memory aids include the fuel gauge and warning lights in cars and the sticker on the windshield showing when the oil was changed.

5. Simplify the task. Two ways to simplify tasks are to reduce the number of steps and to improve communication. Familiar examples of reducing steps are autodialers on telephones and address lists for e-mail. Many retailers now have their computers communicate point-of-sale information not only to their own central computer but also to their vendors' computers. The original concept was to speed vendors' manufacturing and shipping, but the retailers soon realized that the elimination of all the purchasing paperwork led to drastic error reductions. Improve communication by improving controls (how people communicate to machines) and displays (how machines communicate to people); details are covered in a number of ergonomics texts. But even letters and numbers can be confused, especially the number 0 and the letter O, and the number 1 and the letter l. Use all-letter codes or all-numeric codes, not a mixture. If a mixed code is used, omit 0, O, 1, and l. Also, do not use both a dash and an underline.

Guidelines 6 and 7 concern execution of tasks.

6. Allow enough time. Under time stress, people make errors. Start earlier so there is more margin between the start time and the due time. Another technique is to reduce time / person by putting more people on the task; this flexibility requires cross-trained people.

7. Have sufficient motivation / attention. Motivation is not a replacement for engineering. Gamblers are motivated to win, but wanting to win is not enough. Motivation can be positive (aid performance) or negative (hinder performance). There are many examples of superiors challenging subordinates' decisions but relatively few of subordinates challenging superiors' decisions. It is difficult, though important, to require important decisions to be challenged by or justified to subordinates. Company policies and culture and openness to criticism are important factors. Even requiring superiors to explain decisions to subordinates can be valuable: when superiors try to explain a decision, they may realize it is a bad decision. Lack of attention can occur from simple things such as distractions, but it can also be caused by people sleeping on the job or being drunk or ill. One solution technique is observation by other people (either personally or by TV) so sleeping, heart attacks, and so on can be noticed.

Guidelines 8, 9, and 10 discuss allowing for errors.

8. Give immediate feedback on errors. Two subdivisions of feedback are error detection and reducing delay. Errors can be detected by people (inspection) or by machines. Machines can passively display the error (needle in the red zone), actively display the error (audio warning, flashing light), or even make an active correction (turn on the furnace when the room temperature drops). The machine also can fail to respond (the computer doesn't work when the wrong command is entered). Machines also can present possible errors for human analysis (spell-check programs). Error messages should be specific and understandable. The message "Error" flashing on the screen is not as useful as "Invalid input: value too high. Permissible range is 18 to 65 years." Error messages should not be too irritating or people will turn them off. Reduce the time (latency) until the error is detected. The more quickly the error is detected, the easier it is to determine why the error occurred. A quick correction may also reduce
possible damage. For example, your car may make a noise if you remove the key while the headlights are on; thus, you can turn the lights off before the battery dies.

9. Improve error detectability. Consider the error as a signal and the background as noise. Improve detectability by having a high signal / noise ratio, a good contrast. Amplify the signal or reduce the noise. One way to enhance a signal is to match it. For example, if someone gives you a message, repeat it back so he or she can see that you understand the message. A signal also can be increased; for example, putting a switch on can also trigger an indicator light. The signal should not conflict with population stereotypes (expected relationships) and thus be camouflaged. For example, don't put regular coffee in a decaf container; don't put chemicals in a refrigerator that also contains food. Reduce the noise by improving the background. Traffic lights are made more visible by surrounding the lights with a black background. When talking on the phone, turn down the radio.

10. Minimize consequences of errors. Design equipment and procedures so they are less sensitive to errors, so they are failsafe. A well-designed system anticipates possible problems. What if an instrument light fails? The address on a letter is incorrect? The pilot has a heart attack? The air conditioning system fails? The paint drips from the brush? Some solutions are redundant instrument lights, return addresses on letters, copilots, windows that open, and painting while using a dropcloth. The consequences of many errors can be reduced by guards (e.g., guards to prevent hands touching moving pulleys and belts, gloves to prevent chemical burns, hard hats to prevent head injuries). Ease of recovery also is important. What can be done if a box jams on a conveyor turn? If a paycheck is lost? If a mistake is made on a computer input? If a car tire has a blowout? Longer time available for recovery helps. For example, a fire door with a 1 hr rating is better than one with a 0.1 hr rating.
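Guideline 5's coding advice can be sketched as a generator of mixed alphanumeric codes that follows the "if mixed, omit 0, O, 1, and l" rule. The alphabet and code length are illustrative.

```python
import string
from itertools import islice, product

# Mixed alphanumeric alphabet with the confusable characters removed.
SAFE = [c for c in string.ascii_uppercase + string.digits if c not in "0O1l"]

def codes(length):
    """Yield fixed-length codes over the confusion-free alphabet."""
    for combo in product(SAFE, repeat=length):
        yield "".join(combo)

first = list(islice(codes(2), 5))
print(first)  # ['AA', 'AB', 'AC', 'AD', 'AE']
```

With 0, O, 1, and l removed, 33 characters remain, still giving over a thousand two-character codes while eliminating the most common read-back confusions.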
3. GATHERING / ORGANIZING INFORMATION

This section discusses what to study (Pareto), videotaping jobs, searching for solutions, between-operations analysis, and within-operation analysis.

3.1. What to Study (Pareto)
Engineering time is a valuable resource; don't waste time on unimportant problems. To check quickly whether a project is worth considering, calculate (1) savings / year if material cost is cut 10% and (2) savings / year if labor cost is cut 10%. The concept of the "insignificant many and the mighty few" (Pareto distribution) is shown in Figure 7. Cause (x-axis) and effect (y-axis) are not related linearly. The key concept is that the bulk of the problem (opportunity) is concentrated in a few items. Pareto diagrams are a special form of a histogram of frequency counts; the key is that the categories are put into order, with the largest first and the smallest last; then a cumulative curve is plotted. Therefore, using the Pareto concept, if your design concerns crime, it should concentrate on the few individuals who cause most of the crimes; if it is to improve quality, it should concentrate on the few components that cause most of the problems. See Table 13. Fight the giants!
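The Pareto procedure just described, ordering categories largest-first and plotting a cumulative curve, can be sketched in a few lines. The defect categories and counts below are illustrative, not from the text.

```python
# Hypothetical defect counts for a Pareto analysis.
defects = {"solder bridge": 120, "missing part": 45, "scratch": 20,
           "wrong label": 10, "loose screw": 5}

total = sum(defects.values())
rows, cum = [], 0.0
# Sort largest-first, then accumulate percent of total.
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cum += count / total
    rows.append((cause, count, cum))
    print(f"{cause:14s} {count:4d}  cumulative {cum:5.1%}")
```

Here the top two causes account for over 80% of all defects: those are the giants to fight first.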
3.2. Videotaping Jobs

Videotaping is useful for task analysis and for training. Some tips are:

Have spare batteries and a battery charger. A charger may take eight hours to recharge a battery.

Use a camera feature that continuously displays the date on the screen; this establishes without question when the scene was shot. You may wish also to photograph a card with the operator's name.

Plan the location of camera and subject ahead of time. Use a tripod. Multiple views are best. Use some combination of a front view, a side view, a back view, stepladder views (a partial-plan view), overall views, and closeup views. Begin a scene with a full view (far shot, wide-angle view) and then zoom as desired.

If possible, videotape multiple operators doing the same task. This permits comparisons of methods. Small differences can be detected on tape because the tape can be frozen and / or repeated. For example, how are items oriented? Is the sequence of steps the same? Does one operator deviate joints more or less than other operators? If the view is perpendicular to the operator's front or side, the projected view can be frozen and angles determined on the screen using a protractor. These angles can then be used as input to computer models.
Figure 7 Pareto Distribution. Many populations have a Pareto distribution, in which a small portion of the population has a large proportion of the criterion. Inventories, for example, can be classified by the ABC system. "A" items may comprise 10% of the part numbers but 70% of the inventory cost. "B" items may comprise 25% of the part numbers and 25% of the cost; the total of A + B items then comprises 35% of the part numbers and 95% of the cost. "C" items thus comprise the remaining 65% of the part numbers but only 5% of the cost. Concentrate your efforts on the A items. Don't use gold cannons to kill fleas! (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Take lots of cycles for your stock tape. Each scene should have several cycles. You can always edit, but it is expensive to go back and shoot more tape.

For analysis of time, there are three alternatives:

1. Put a digital clock in the scene.
2. Use a camera-VCR system with a timecode feature, which will allow you to quickly locate any time on the tape.
3. Count the number of frames (VHS has 30 frames / sec [0.033 sec / frame]).
TABLE 13 Items vs. Opportunities (Pareto Distribution)

Item                 Opportunity
A few products       produce most of the direct labor dollars.
A few products       have most of the storage requirements.
A few products       produce most of the profit.
A few operations     produce most of the quality problems.
A few operations     produce most of the cumulative trauma.
A few machines       use most of the energy.
A few time studies   cover most of the direct labor hours.
A few individuals    commit most of the crimes.
A few individuals    have most of the money.
A few individuals    drink most of the beer.

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.
If possible, for comparison, obtain measures before and after the change.
E: Eliminate unnecessary work and material. Four subcategories are (1) eliminate unneeded work, (2) eliminate work where costs are greater than benefits, (3) use self-service, and (4) use the exception principle. Examples of unneeded work are obsolete distribution and mailing lists. Once a year, send a letter saying that unless the person completes the form and returns it, that person will no longer receive a copy. An example of costs greater than benefits is the staffing of tool cribs. Self-service at a crib is probably considerably cheaper than having an attendant get the item (consider grocery stores). Another example of self-service is replacing a single mailbox with multiple mailboxes designated "company: this building," "company: other buildings," and "U.S. mail." An example of the exception principle is a reserved parking stall with a strange car in it. Normally the police would give a ticket. Using the exception principle, the police would give a ticket only if the stall owner complained.

S: Simplifying operations is shown by special register keys in fast-food operations. The key indicates a specific item (Big Mac) rather than a price. This reduces pricing errors and improves communication with the cooking area. For e-mail, the return key simplifies finding the return address and entering it.

A: Altering sequence has three subdivisions: (1) simplify other operations, (2) reduce idle / delay time, and (3) reduce material-handling cost. Modifying when something is done may influence how it is done. For example, machining is easier before a material is hardened than after. On a car assembly line, installing a brake cylinder assembly is easier before the engine is installed. Idle / delay time often can be reduced. For example, in a restaurant, the server can bring coffee when bringing the breakfast menu instead of making two trips. Or the idle time might be used fruitfully by double tooling: consider having two fixtures on a machine so that while the machine is processing the material on fixture 1, the operator is loading / unloading fixture 2. An example of reducing material-handling costs is using an automatic guided vehicle (bus) instead of moving items on a fork truck (taxi).

R: Requirements has two aspects: (1) quality (capability) costs and (2) initial vs. continuing costs. Costs rise exponentially (rather than linearly) with quality. Therefore, do not "gold-plate" items. Indirect materials are a fruitful area to investigate because they tend not to be analyzed. For example, one firm compared the varnish applied on motor coils vs. the varnish purchased; there was a 10% difference. It was found that the varnish was bought in 55-gallon drums. When the drums were being emptied, they were turned right side up while there was still considerable product inside. The solution was a stand that permitted the varnish to drip out of the drum over a period of hours. Initial vs. continuing costs means you should focus not just on the initial capital cost of a product but on the life-cycle costs (capital cost plus operating cost plus maintenance cost).

C: Combine operations is really a discussion of general purpose vs. special purpose. For example, do you want to have a maintenance crew with one specialist doing electrical work, another doing plumbing, and another doing carpentry? Or three people, each of whom can do all three types of work? Most firms now are going for the multiskilled operator because it is difficult to find sufficient work to keep all specialists always busy.

H: How often is a question of the proper frequency. For example, should you pick up the mail once a day or four times a day? Should solution pH be tested once an hour, once a day, or once a week?
3.4. Between-Operations Analysis

This section discusses flow diagrams, multiactivity charts, arrangement (layout) of equipment, and balancing flow lines.
3.4.1.
Flow Diagrams
Flow diagrams and their associated process charts are a technique for visually organizing and structuring an overview (a "mountaintop view") of a between-workstations problem. There are three types: single object, assembly/disassembly, and action-decision.

3.4.1.1. Single Object  Figure 8 shows a single-object process chart following a housing in a machine shop; the single object also can be a person. Some examples of following a person are vacuuming an office and unloading a semitrailer. Figure 9 shows the five standard symbols for process charts. Some people put a number inside each symbol (identifying operation 1, 2, 3) and some don't. Some people emphasize "do" operations by darkening those circles (but not get-ready and put-away operations) and some don't. Since a process chart is primarily a communication tool for yourself, take your choice. Most operations have scrap and rework; Figure 10 shows how to chart them. At the end of the chart, summarize the number of operations, moves (including total distance), inspections, storages, and delays. Estimate times for storages (which are planned) and delays (which are unplanned storages). Since this is a big-picture analysis, detailed times for operations and inspections normally are not recorded.
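The end-of-chart summary described above can be tallied mechanically. The sketch below is an illustration only; the chart rows are hypothetical, not the housing example of Figure 8.

```python
# Summarizing a single-object process chart: count each symbol type
# and total the move distances (hypothetical chart rows).
from collections import Counter

# Each row: (symbol, description, distance in meters or None)
chart = [
    ("storage",   "raw stock in rack", None),
    ("move",      "to lathe", 12),
    ("operation", "turn diameter", None),
    ("move",      "to inspection bench", 8),
    ("inspect",   "check diameter", None),
    ("delay",     "await fork truck", None),
]

summary = Counter(symbol for symbol, _, _ in chart)
total_distance = sum(d for _, _, d in chart if d is not None)
print(dict(summary))   # counts per symbol
print(total_distance)  # 20
```

The same tally works for the scrap/rework branches of Figure 10 if each branch's rows are appended to the list.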
METHODS ENGINEERING
1375
Figure 8 Single-Object Process Chart. The chart follows a single object or person; in this example, an object. For good analysis, estimate distances and times. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
1376
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
Figure 9 Five Standard Symbols for Process Charts. They are a circle (operation), arrow (move), square (inspect), triangle with point down (storage), and D (delay). A circle inside a square is a combined operation and inspection. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Next you would consider possible improvements using Kipling's six honest men (who, what, where, why, when, and how), SEARCH, and various checklists. This may lead to a proposed method where you would show reductions in number of operations, inspections, storages, distance moved, and so on. Because flow diagrams organize complex information and thus are a useful communication tool, you may want to make a polished copy of the flow diagrams to show to others. Since many potential improvements involve reducing movements between workstations, a map showing the movements is useful; Figure 11 shows a flow diagram of Joe College making a complex job of drinking beer in his apartment. Generally a sketch on cross-section paper is sufficient.

3.4.1.2. Assembly/Disassembly Diagrams  Assembly flow diagrams (see Figure 12) tend to point out the problems of disorganized storage and movements. Figure 13 shows a disassembly diagram; another example would be in packing houses.

3.4.1.3. Action-Decision Diagrams  In some cases, you may wish to study decision making; then consider an action-decision diagram (see Figure 14) or a decision structure table (see Section 3.5.2).
3.4.2.
Multiactivity Charts
The purpose of a multiactivity chart is to improve utilization of multiple related activities. See Figure 15. The two or more activities (column headings) can be people or machines; they can also be parts of a person (left hand vs. right hand) or parts of a machine (cutting head vs. fixture 1 vs. fixture 2). The time axis (drawn to a convenient scale) can be seconds, minutes, or hours. Example charts and columns might be milling a casting (operator, machine), cashing checks (cashier, customer 1, customer 2), and serving meals (customer, server, cook). For each column, give cycles/year, cost/minute, and percent idle. Make the idle time distinctive by cross-hatching, shading, or coloring red. Improve utilization by (1) reducing idle time in a column, (2) shifting idle time from one column to another, or (3) decreasing idle time of an expensive component (such as a person) by increasing idle time of a machine. McDonald's uses this third concept when they have two dispensers of Coke in order to reduce the waiting time of the order taker (and thus the customer).
Figure 10 Rework (left) and Scrap (right) for Flow Diagrams. Rework and scrap often are more difficult to handle than good product. Even though rework is often concealed, most items have rework. Material-removal scrap occurs in machining and press operations; scrap units occur everywhere. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Figure 11 Flow Diagram. This is a map often used with process charts. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Some time is considered to be "free." For example, "inside machine time" describes the situation in which an operator is given tasks while the machine is operating. Since the operator is at the machine anyway, these tasks are "free." A McDonald's example is to grasp a lid while the cup is filling.
Figure 12 Assembly Process Chart. This type of chart shows relationships among components and helps emphasize storage problems. Each column is an item; the assembly is on the right. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Figure 13 Disassembly Process Chart. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Figure 14 Action-Decision Flow Diagram. This chart of a seated checkout operator in England shows use of a laser scanner. Note that in Europe the customer bags the groceries. (From Wilson and Grey 1984)
Figure 15 Multiactivity Chart. The basic concept is to have two or more activities (columns) with a common, scaled time axis. The goal is to improve utilization, so emphasis is placed on idle time. For each column, calculate the percent idle time, the occurrences/year, and the cost/minute. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
As pointed out above in (3), it is possible to have more than one machine/operator; this is double tooling. For example, use two sets of load/unload fixtures on an indexing fixture to reduce idle time while waiting for the machine to cut. Or one person can service two machines (i.e., 1 operator/2 machines or 0.5 operator/machine). It is also possible to have 0.67 or 0.75 operators/machine. Do this by assigning 2 people/3 machines or 3/4, that is, having the workers work as a team, not as individuals. This will require cross-trained operators, which is very useful when someone is absent. Kitting is the general strategy of gathering components before assembly to minimize search-and-select operations and ensure that there are no missing parts. Subassemblies also may be useful. McDonald's Big Mac has a "special sauce," which is just a mixture of relish and mayonnaise, premixed to ensure consistency. A disadvantage of the multiactivity chart is that it requires a standardized situation. Nonstandardized situations are difficult to show. For them, use computer simulation.
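The percent-idle figure given for each column of a multiactivity chart is straightforward to compute from the busy intervals within one cycle. A minimal sketch, with hypothetical activities, cycle time, and intervals:

```python
# Percent idle per column of a multiactivity chart (hypothetical data).
def percent_idle(busy_intervals, cycle_time):
    """busy_intervals: list of (start, end) times within one cycle."""
    busy = sum(end - start for start, end in busy_intervals)
    return 100.0 * (cycle_time - busy) / cycle_time

cycle = 1.0  # minutes per cycle
columns = {
    "operator": [(0.0, 0.3), (0.8, 1.0)],  # load, then unload
    "machine":  [(0.3, 0.8)],              # automatic cut
}
for name, intervals in columns.items():
    print(name, round(percent_idle(intervals, cycle), 1))  # both 50.0 here
```

Shifting an interval from one column to another, or shortening the cycle, changes the two percentages in exactly the way the chart makes visible.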
3.4.3.
Arrangement (Layout) of Equipment
This section describes location of one item in an existing network. Arrangement of the entire facility is covered in Chapter 55.

Problem. Table 15 shows some examples of locating one item in an existing network of customers. The item can be a person, a machine, or even a building. The network of customers can be people, machines, or buildings. The criterion minimized is usually distance moved but could be energy lost or time to reach a customer. Typically an engineer is interested in finding the location that minimizes the distance moved (a minisum problem). An example would be locating a copy machine in an office. An alternative objective is to minimize the maximum distance (a minimax or worst-case problem). An example minimax problem would be to locate an ambulance so everyone could be reached in no more than 15 minutes. There is extensive analytical literature on this "planar single-facility location problem"; see Francis et al. (1992, chap. 4). The following is not elegant math but "brute force" calculations. The reason is that, using a hand calculator, the engineer can solve all reasonable alternatives in a short time, say 20 minutes. Then, using the calculated optimum for the criterion of distance, the designer should consider other criteria (such as capital cost and maintenance cost) before making a recommended solution.

TABLE 15 Examples of Locating an Item in an Existing Network of Customers with Various Criteria to Be Minimized

New Item              Network of Customers       Criterion Minimized
Machine tool          Machine shop               Movement of product
Tool crib             Machine shop               Walking of operators
Time clock            Factory                    Walking of operators
Inspection bench      Factory                    Movement of product or inspectors
Copy machine          Office                     Movement of secretaries
Warehouse or store    Market                     Distribution cost
Factory               Warehouses                 Distribution cost
Electric substation   Motors                     Power loss
Storm warning siren   City                       Distance to population
IIE meeting place     Locations of IIE members   Distance traveled
Fire station          City                       Time to fire

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.

Solution. The following example considers (1) the item to be located as a machine tool, (2) the network of customers (circles in Figure 16) as other machine tools with which the new tool will exchange product, and (3) the criterion to be minimized as distance moved by the product. In most real problems, there are only a few possible places to put the new item; the remaining space is already filled with machines, building columns, aisles, and so forth. Assume there are only two feasible locations for the new item: the A and B rectangles in Figure 16.

Figure 16 Information for Location of One Item. Customers (in circles) can be served from either location A or B. Table 16 gives the importance of each customer. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
Travel between a customer and A or B can be (1) straight line (e.g., conveyors), (2) rectangular (e.g., fork trucks using aisles), or (3) measured on a map (e.g., fork trucks using one-way aisles, conveyors following nondirect paths). Travel may be a mixture of the three types. Some customers are more important than others, and thus the distance must be weighted. For a factory, an index could be pallets moved per month. For location of a fire station, the weight of a customer might depend on number of people occupying the site or the fire risk. The movement cost of locating the new item at a specific feasible location is:

MVCOST = Σ (K = 1 to N) WGTK (DIST)

where
MVCOST = index of movement cost for a feasible location
WGTK = weight (importance) of the Kth customer of N customers
DIST = distance moved

For rectangular travel:

DIST = |Xi,j − XK| + |Yi,j − YK|

For straight-line travel:

DIST = √[(Xi,j − XK)² + (Yi,j − YK)²]

where (Xi,j, Yi,j) is the feasible location and (XK, YK) is the location of the Kth customer.

For the two locations given in Table 16, Table 17 shows the MVCOST. Movement cost at B is 67,954/53,781 = 126% of A. If you wish to know the cost at locations other than A and B, calculate other locations and plot a contour map. A contour map indicates the best location is X = 42 and Y = 40 with a value of 32,000. Thus, site A is 6,000 from the minimum and site B is 13,000 from the minimum.

The previous example, however, made the gross simplification that movement cost per unit distance is constant. Figure 17 shows a more realistic relationship of cost vs. distance. Most of the cost is fixed (loading/unloading, starting/stopping, paperwork). Cost of moving is very low. Thus:

For rectangular travel:

DIST = LK + CK (|Xi,j − XK| + |Yi,j − YK|)

For straight-line travel:

DIST = LK + CK √[(Xi,j − XK)² + (Yi,j − YK)²]

where
LK = load + unload cost (including paperwork) per trip between the Kth customer and the feasible location
TABLE 16 Importance of Each Customer
Customers 1 to 6 can be served either from location A (X = 30, Y = 20) or from location B (X = 90, Y = 50). Which location minimizes movement cost?

            Coordinate
Customer    X     Y     Weight or Importance    Movement Type
1           40    70    156                     Straight line
2           60    70    179                     Straight line
3           80    70    143                     Straight line
4           30    30    296                     Rectangular
5           90    10     94                     Rectangular
6            0    60    225                     Rectangular

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.
TABLE 17 Cost of Locating a New Machine at Location A or B

                Weight,           Site A                            Site B
Customer        Pallets/Month     Distance, m   Cost, m-Pallets/mo  Distance, m   Cost, m-Pallets/mo
1               156               51            7,956               54            8,424
2               179               58            10,382              36            6,444
3               143               71            10,153              22            3,146
4               296               10            2,960               80            23,680
5               94                70            6,580               40            3,760
6               225               70            15,750              100           22,500
Total                                           53,781                            67,954

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. Since WGTK was pallets/month and DIST was in meters, MVCOST = meter-pallets/month.
CK = cost/unit distance (excluding LK)

Assume, for customers 1, 2, and 3, that LK = $0.50/trip and CK = $0.001/m; for customers 4, 5, and 6, LK = $1/trip and CK = $0.002/m. Then the cost for alternative A = $854 + $79.07 = $933.07, while the cost for B = $854 + $117.89 = $971.89. Thus, B has a movement cost of 104% of A. When making the decision where to locate the new item, use not only the movement cost but also installation cost, capital cost, and maintenance cost. Note that the product (WGTK)(LK) (that is, the $854) is independent of the locations; it just adds a constant value to each alternative.

Cost need not be expressed in terms of money. Consider locating a fire station where the customers are parts of the town and the weights are expected number of trips in a 10-year period. Then load might be 1 min to respond to a call, travel 1.5 min/km, and unload 1 min; the criterion is to minimize mean time/call. The distance cost might rise by a power of 2 (the inverse square law) for problems such as location of a siren or light.
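The "brute force" calculation is also easy to script. The following sketch uses the customer data of Table 16 and the site coordinates A (30, 20) and B (90, 50); it reproduces the weighted-distance totals of Table 17 and the fixed-plus-variable trip costs above. It is an illustration, not code from the handbook.

```python
# Minisum location comparison (data from Tables 16 and 17).
from math import hypot

# customer: (x, y, weight pallets/month, travel type, L $/trip, C $/m)
customers = [
    (40, 70, 156, "straight", 0.50, 0.001),
    (60, 70, 179, "straight", 0.50, 0.001),
    (80, 70, 143, "straight", 0.50, 0.001),
    (30, 30, 296, "rect",     1.00, 0.002),
    (90, 10,  94, "rect",     1.00, 0.002),
    ( 0, 60, 225, "rect",     1.00, 0.002),
]
sites = {"A": (30, 20), "B": (90, 50)}

def dist(x, y, cx, cy, kind):
    # Straight-line (Euclidean) or rectangular (rectilinear) distance
    return hypot(x - cx, y - cy) if kind == "straight" else abs(x - cx) + abs(y - cy)

for name, (sx, sy) in sites.items():
    # MVCOST in meter-pallets/month (distances rounded as in Table 17)
    mvcost = sum(w * round(dist(sx, sy, cx, cy, kind))
                 for cx, cy, w, kind, _, _ in customers)
    # Dollar cost with fixed load/unload cost L and per-meter cost C
    dollars = sum(w * (L + C * round(dist(sx, sy, cx, cy, kind)))
                  for cx, cy, w, kind, L, C in customers)
    print(name, mvcost, round(dollars, 2))
# A 53781 933.07
# B 67954 971.89
```

Looping the same sums over a grid of (x, y) points gives the values needed for the contour map mentioned above.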
3.4.4.
Balancing Flow Lines
The two most common arrangements of equipment are a job shop and a flow line (often an assembly line). The purpose of balancing a flow line is to minimize the total idle time (balance delay time). There are three givens: (1) a table of work elements with their associated times (see Table 18), (2) a precedence diagram showing the element precedence relationships (see Figure 18), and (3) required units/minute from the line. The three unknowns are (1) the number of stations, (2) the number of workers at each station, and (3) the elements to be done at each station.

Figure 17 Relationship of Cost vs. Distance. In general, material-handling cost is almost independent of distance. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)

TABLE 18 Elements and Work Times for the Assembly-Line Balancing Problem

Element    Work Time/Unit, hr
1          0.0333
2          0.0167
3          0.0117
4          0.0167
5          0.0250
6          0.0167
7          0.0200
8          0.0067
9          0.0333
10         0.0017
Total      0.1818

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. Each element time is assumed constant. In practice, each element time is a distribution.

1. What is the total number to be made, and in how long a time? For example, 20,000 units could be made in 1000 hr at the rate of 20/hr, 500 hr at 40/hr, or some other combination. Continuous production is only one of many alternatives; a periodic shutdown might be best. Assume we wish to make 20,000 units in 1000 hr at the rate of 20/hr. Then each station will take 1000 hr/20,000 units = 0.05 hr/unit; cycle time is 0.05 hr.

2. Guess an approximate number of workstations by dividing total work time by cycle time: 0.1818 hr/0.05 hr/station = 3.63 stations. Then use 4 workstations with 1 operator at each.

3. Make a trial solution as in Table 19 and Figure 19. Remembering not to violate precedence, identify each station with a cross-hatched area. Then calculate the idle percentage: 0.0182/(4 × 0.05) = 9.1%. But this can be improved. Consider Table 20. Here stations 1 and 2 are combined into one superstation with two operators. Elemental time now totals 0.0950. Since there are two operators, the idle time/operator is 0.0025. So far there is no improvement over the solution of Table 19. But there is idle time at each station. Thus, the line cycle time can be reduced to 0.0475. The new idle time becomes 0.0083/(4 × 0.0475) = 4.4% instead of 9.1%. The point is that small changes in cycle time may have large benefits.

Figure 18 Precedence Diagram Showing the Sequence Required for Assembly. The lines between the circles are not drawn to scale. That is, elements 4 and 9 both must be completed before 6, but 9 could be done before or after 4. Precedence must be observed. Thus, elements 3, 4, and 9 could not be assigned to one station and elements 8 and 6 to another. However, 8, 9, and 10 could be done at one station. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)

TABLE 19 Trial Solution for Assembly-Line Balance Problem

Station   Element   Element Time,   Station Element   Station Idle    Cumulative Idle
                    hr/unit         Time, hr/unit     Time, hr/unit   Time, hr/unit
1         1         0.0333
          2         0.0167          0.0500            0               0
2         8         0.0067
          9         0.0333          0.0400            0.0100          0.0100
3         3         0.0117
          4         0.0167
          6         0.0167          0.0451            0.0049          0.0149
4         5         0.0250
          7         0.0200
          10        0.0017          0.0467            0.0033          0.0182

Idle percent = 0.0182/(4 × 0.05) = 9.1%.

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. Cycle time = 0.0500 hr.

Small modifications of other "rigid" facts can be beneficial. Consider element sharing. That is, operators/station need not be 1.0. One possibility is more than 1 operator/station. Two operators/station yields 2.0 operators/station. Three operators/two stations yields 1.5 operators/station. This permits a cycle time that is less than an element time. For example, with 2 operators/station, each would do every other unit. Get fewer than 1 operator/station by having operators walk between stations or having work done off-line (i.e., using buffers). Also, operators from adjacent stations can share elements. For example, station D does half of elements 16 and 17 and station E does half of elements 16 and 17. Remember that cycle times are not fixed. We assumed a cycle time of 0.05 hr (i.e., the line runs 1000 hr or 1000/8 = 125 days). As pointed out above, it is more efficient to have a cycle time of 0.0475 hr.

Figure 19 Graphic Solution of Table 19. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission)
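The bookkeeping in step 3 can be sketched in a few lines: given the element times of Table 18 and the station assignment of Table 19, compute the station times and the idle (balance delay) percentage. This is an illustration of the arithmetic, not an algorithm from the handbook.

```python
# Balance-delay calculation for the Table 19 trial solution.
times = {1: 0.0333, 2: 0.0167, 3: 0.0117, 4: 0.0167, 5: 0.0250,
         6: 0.0167, 7: 0.0200, 8: 0.0067, 9: 0.0333, 10: 0.0017}
stations = [[1, 2], [8, 9], [3, 4, 6], [5, 7, 10]]  # Table 19 assignment

# The slowest station sets the cycle time; idle time is each station's
# shortfall relative to it.
cycle = max(sum(times[e] for e in s) for s in stations)
idle = sum(cycle - sum(times[e] for e in s) for s in stations)
print(round(cycle, 4))                                 # 0.05 hr
print(round(100 * idle / (len(stations) * cycle), 1))  # 9.1 percent idle
```

Rerunning with a different assignment (or with the reduced 0.0475-hr cycle time discussed above) shows immediately how sensitive the balance delay is to small changes.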
3.5.1.
Fish Diagrams
Fish diagrams graphically depict a multidimensional list. See Figures 20 and 21. Professor Ishikawa developed them while on a quality control project for Kawasaki Steel; they are the "cause" side of cause-and-effect diagrams. The diagram gives an easily understood overview of a problem. Start with the effect (a "fishhead"), a specific problem. Then add the cause (the "fish body"), composed of the backbone and other bones. A good diagram will have three or more levels of bones (backbone, major bones, minor bones on the major bones). There are various strategies for defining the major bones. Consider the 4 Ms: manpower, machines, methods, materials. Or use the 4 Ps: policies, procedures, people, plant. Or be creative. Figures 20 and 21 were used at Bridgestone to reduce the variability of the viscosity of the splicing cement used in radial tires. Figure 21 shows the variability of the four operators before and after the quality circle studied the problem. A very effective technique is to post the fish diagram on a wall near the problem operation. Then invite everyone to comment on possible problems and solutions; many quality problems can be reduced with better communication.
3.5.2.
Decision Structure Tables
Decision structure tables are a print version of the "if, then" statements (the "what if" scenario) in computer programs. They also are known as protocols or contingency tables; see Table 21 for an example. They unambiguously describe complex, multivariable situations.
Figure 20 Fish Diagrams Used at Bridgestone Tire. The goal was to reduce viscosity variance. The four main categories were operating methods, raw materials, humans, and equipment. The variance was reduced when it was discovered that not all operators followed the standard method; those following the standard had more variance. See Figure 21. (From R. Cole, Work, Mobility, and Participation: A Comparative Study of American and Japanese Industry. Copyright 1979 The Regents of the University of California. By permission)
Figure 21 Distribution Curves for Viscosity Variance before and after Quality Circle Project. Reducing variance (noise) is an excellent quality-improvement technique because the effects of various changes are seen much more clearly for processes with little variability. (From R. Cole, Work, Mobility, and Participation: A Comparative Study of American and Japanese Industry. Copyright 1979 The Regents of the University of California. By permission)
Then Clearance Drill Bit Size Is: 51  45  40  32

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. Assume you wish a hole large enough so the drill will not touch the bolt threads.
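A decision structure table translates directly into data that a program can evaluate. The sketch below encodes a small table for the pH-testing frequency question raised earlier under "How often"; the conditions and actions are hypothetical illustrations, not the contents of Table 21.

```python
# A decision structure table as data (hypothetical conditions/actions).
# Each rule: (conditions dict, action). The first matching rule wins.
rules = [
    ({"pH_drift": "high", "batch_critical": True},  "test every hour"),
    ({"pH_drift": "high", "batch_critical": False}, "test every shift"),
    ({"pH_drift": "low",  "batch_critical": True},  "test every shift"),
    ({"pH_drift": "low",  "batch_critical": False}, "test weekly"),
]

def decide(situation):
    for conditions, action in rules:
        if all(situation.get(k) == v for k, v in conditions.items()):
            return action
    raise ValueError("no rule matches; the table is incomplete")

print(decide({"pH_drift": "high", "batch_critical": False}))
# test every shift
```

Listing every combination of conditions, as the table format forces you to do, is what makes the description unambiguous; the `ValueError` branch flags any combination the table forgot.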
Checklists can also focus more precisely. Table 7 is a checklist for just the upper extremity. Table 8 is a checklist to prioritize potential ergonomic problems due to posture; it focuses on force and duration for neck, trunk, and general body.
4.
ENGINEERING DESIGN
After reviewing the job design guidelines from Section 2 and the information from Section 3, you can design alternatives. Remember the five steps of engineering design with the acronym DAMES: Define the problem, Analyze, Make search, Evaluate alternatives, and Specify and sell solution. See Table 22.

1. Define the problem broadly. Usually the designer is not given the problem but instead is confronted with the current solution. The current solution is not the problem but just one solution among many possible solutions. The broad, detail-free problem statement should include the number of replications, the criteria, and the schedule. At this stage, putting in too much detail makes you start by defending your concept rather than opening minds (yours and your client's) to new possibilities. At this stage, the number of replications should be quite approximate (within 500%). Criteria (see Section 2.1) usually are multiple (capital cost, operating cost, quality) rather than just one criterion. The schedule defines priorities and allocation of resources for the project.

2. Analyze in detail. Now amplify step 1 (defining the problem) with more detail on replications, criteria, and schedule. What are the needs of the users of the design (productivity, quality, accuracy, safety, etc.)? See Sections 2.1, 2.2, 2.3, 2.4, 2.5, and 2.6. What should the design achieve? What are limits (also called constraints and restrictions) to the design? What are the characteristics of the population using the design? For example, for designing an office workstation, the users would be adults within certain ranges (age from 18 to 65, weight from 50 to 100 kg, etc.). The design should consider not only the main activities (the "do," e.g., the assembly) but also the get-ready and put-away activities and support activities such as setup, repairs, maintenance, material handling, utilities, product disposal, and user training. Since people vary, designers can follow two alternatives: (a) Make the design with fixed characteristics; the users adjust to the device. One example would be an unadjustable chair. Another example would be a machine-paced assembly line. (b) Fit the task to the worker. One example would be an adjustable chair. Another example would be a human-paced assembly line (i.e., with buffers) so all workers could work at individual paces.

3. Make search of the solution space. Now design a number of alternatives, not just one. Now use the information you gathered using the techniques of Section 3. Also get design ideas from a number of sources: workers, supervisors, staff people, other engineers, vendors, suppliers, and so on. (Benchmarking is a technique of obtaining ideas from other organizations.) The solution space will be reduced by various economic, political, aesthetic, and legal constraints. Among the feasible solutions (solutions that work), try to select the best one: the optimum solution.
TABLE 22 DAMES: The Five Steps of Engineering Design (Define, Analyze, Make search, Evaluate, Specify and sell)

Step: Define the problem broadly.
Comments: Make statement broad and detail-free. Give criteria, number of replications, schedule.
Example: Design, within 5 days, a workstation for assembly of 10,000/yr of unit Y with reasonable quality and low mfg cost.

Step: Analyze in detail.
Comments: Identify limits (constraints, restrictions). Include variability in components and users. Make machine adjust to person, not converse.
Example: Obtain specifications of components and assembly. Obtain skills availability of people; obtain capability/availability of equipment. Get restrictions in fabrication and assembly techniques and sequence. Obtain more details on cost accounting, scheduling, and tradeoffs of criteria.

Step: Make search of solution space.
Comments: Don't be limited by imagined constraints. Try for optimum solution, not feasible solution. Have more than one solution.
Example: Seek a variety of assembly sequences, layouts, fixtures, units/hr, handtools, etc.

Step: Evaluate alternatives.
Comments: Trade off multiple criteria. Calculate benefit/cost.
Example: Alt. A: installed cost $1000; cost/unit $1.10. Alt. B: installed cost $1200; cost/unit $1.03. Recommend Alt. B.

Step: Specify and sell solution.
Comments: Specify solution in detail. Sell solution. Accept a partial solution rather than nothing. Follow up to see that design is implemented and that design reduces the problem.
Example: Install Alt. B1, a modification of B suggested by the supervisor.

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.
A problem is the tendency of designers to be satisfiers rather than optimizers. That is, designers tend to stop as soon as they have one feasible solution. For example, when designing an assembly line, the designer may stop as soon as there is a solution. For an optimum solution, there must be a number of alternatives to select from. Alternatives tend to suggest further alternatives, so stopping too soon can limit solution quality and acceptance.

4. Evaluate alternatives. To get better data for your evaluation, consider trying out your alternatives with mockups, dry runs, pilot experiments, and simulations. You will need to trade off multiple criteria, usually without any satisfactory tradeoff values. For example, one design of an assembly line may require 0.11 min/unit while another design may require 0.10 min/unit; however, the first design gives more job satisfaction to the workers. How do you quantify job satisfaction? Even if you can put a numerical value on it, how many "satisfaction units" equal a 10% increase in assembly labor cost? Consider a numerical ranking, using an equal-interval scale for each criterion. (Method A requires 1.1 min/unit, while method B requires 1.0 min/unit; method A requires 50 m2 of floor space, while method B requires 40 m2.) However, managers generally want to combine the criteria. Table 23 shows one approach. After completing the evaluation, have the affected people sign off on the evaluation form. Then go back and select features from the alternatives to get an improved set of designs.

5. Specify and sell solution. Your abstract concept must be translated into nuts and bolts: detailed specifications. Then you must convince the decision makers to accept the proposal. A key to acceptance is input
From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. The criteria and weights will depend upon specific management goals. Grade each alternative (A = Excellent = 4; B = Good = 3; C = Average = 2; D = Fair = 1; F = Bad = 0). Then calculate the alternative's gradepoint (grade × weight). Defining the best as 100%, calculate the percent for each alternative.
from others (especially users and decision makers) early in the design stages (steps 1, 2, and 3). At the meeting where the final proposal is presented, there should be no surprises; the participants should feel you are presenting material they have already seen, commented on, and approved. That is, the selling is done early, not late; if they did not buy your approach then, you have modified it until it is acceptable. One common modification is partial change instead of full change: a test market approach instead of immediate national rollout; change of some of the machines in a department instead of all the machines; a partial loaf rather than a full loaf. The installation needs to be planned in detail (who does what when). This is a relatively straightforward, though time-consuming, process. The installation plan typically is approved at a second meeting. It also should have preapproval by everyone before the decision meeting. During the installation itself, be flexible; improvements may become apparent from the suggestions of supervisors, operators, skilled-trades workers, and so on. Finally, document the results. A documentary video and photos give opportunities to praise all the contributors (and may even reduce resistance to your next project).
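The weighted grading described in the note to Table 23 can be sketched as follows. The criteria, weights, and grades below are hypothetical stand-ins, not the contents of Table 23.

```python
# Weighted grade-point scoring of design alternatives (hypothetical data).
GRADE_POINTS = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
weights = {"capital cost": 3, "operating cost": 4, "quality": 5}

# Grade each alternative on each criterion
alternatives = {
    "Alt. A": {"capital cost": "A", "operating cost": "C", "quality": "B"},
    "Alt. B": {"capital cost": "B", "operating cost": "B", "quality": "A"},
}

# Gradepoint = sum over criteria of (grade x weight)
scores = {name: sum(weights[c] * GRADE_POINTS[g] for c, g in grades.items())
          for name, grades in alternatives.items()}
best = max(scores.values())
for name, s in scores.items():
    # Express each score as a percent of the best alternative
    print(name, s, f"{100 * s / best:.0f}%")
```

The weights encode the management goals; changing them (and having the affected people sign off on the chosen weights) is where most of the judgment lies.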
REFERENCES
Cole, R. (1979), Work, Mobility and Participation: A Comparative Study of American and Japanese Industry, University of California Press, Berkeley.
Corlett, N., and Bishop, R. (1976), "A Technique for Assessing Postural Discomfort," Ergonomics, Vol. 19, pp. 175-182.
Helander, M., and Burri, G. (1995), "Cost Effectiveness of Ergonomics and Quality Improvements in Electronics Manufacturing," International Journal of Industrial Ergonomics, Vol. 15, pp. 137-151.
Keyserling, M., Brouwer, M., and Silverstein, B. (1992), "A Checklist for Evaluating Ergonomic Risk Factors Resulting from Awkward Postures of the Legs, Trunk and Neck," International Journal of Industrial Ergonomics, Vol. 9, pp. 283-301.
Knauth, P. (1993), "The Design of Shift Systems," Ergonomics, Vol. 36, Nos. 1-3, pp. 15-28.
Konz, S., and Goel, S. (1969), "The Shape of the Normal Work Area in the Horizontal Plane," AIIE Transactions, Vol. 1, No. 4, pp. 359-370.
Konz, S., and Johnson, S. (2000), Work Design: Industrial Ergonomics, 5th Ed., Holcomb Hathaway, Scottsdale, AZ.
Lifshitz, Y., and Armstrong, T. J. (1986), "A Design Checklist for Control and Prediction of Cumulative Trauma Disorder in Hand Intensive Manual Jobs," in Proceedings of the Human Factors Society 30th Annual Meeting.
Putz-Anderson, V., Ed. (1988), Cumulative Trauma Disorders, Taylor & Francis, London.
Rodgers, S. A. (1992), "A Functional Job Analysis Technique," Occupational Medicine: State of the Art Reviews, Vol. 7, No. 4, pp. 679-711.
Wilson, J., and Grey, S. (1984), "Reach Requirements and Job Attitudes at Laser-Scanner Checkout Stations," Ergonomics, Vol. 27, No. 12, pp. 1247-1266.
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
ADDITIONAL READING
Francis, R., McGinnis, L., and White, J., Facility Layout and Location: An Analytical Approach, Prentice Hall, Englewood Cliffs, NJ, 1992.
Harry, M., and Schroeder, R., Six Sigma, Doubleday, New York, 2000.
1.
WHY DETERMINE TIME / JOB?
It is useful to know the direct labor cost / unit, especially when the job is repetitive. Five typical applications are:
1. Cost allocation
2. Production and inventory control
3. Evaluation of alternatives
4. Acceptable day's work
5. Incentive pay
1.1.
Cost Allocation
To determine cost / unit, you need the direct material cost, the direct labor cost, and various miscellaneous costs (called overhead or burden). Direct labor cost is (direct labor time)(wage cost / hr). So you need to determine how long the job takes. But, in addition, overhead costs usually are allocated as a percentage of direct labor (e.g., overhead is 300% of direct labor cost). So again you need direct labor time. Without good estimates of the cost of production to compare against selling price, you don't know your profit / unit (it may even be negative!). The goal is to improve and control costs through better information.
1.2.
Production and Inventory Control
Without time / unit, you cannot schedule or staff (i.e., use management information systems). How many people should be assigned to the job? When should production start in order to meet the due date and thus avoid stockouts?
1.3.
Evaluation of Alternatives
Without time / unit, you cannot compare alternatives. Should a mechanic repair a part or replace it with a new one? Is it worthwhile to use a robot that takes 10 seconds to do a task?
1.4.
Acceptable Day's Work
Sam picked 1600 items from the warehouse today; is that good or bad? Supervisors would like to be able to compare actual performance to expected performance. Many applications of standards to repetitive work have shown improvements in output of 30% or more when measured daywork systems are installed in place of nonengineered standards. Output increases about 10% more when a group incentive payment is used and 20% more when an individual incentive is used.
1.5.
Incentive Pay
A minority of firms use the pay-by-results (the "carrot") approach. If you produce 1% more, you get paid 1% more. This works for the firm because even though direct labor cost / unit stays constant, overhead costs do not increase and thus total cost / unit decreases.
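A toy calculation of that argument, with all figures hypothetical: direct labor cost / unit stays constant, but the fixed daily overhead is spread over more units, so total cost / unit falls.

```python
# Why a 1% pay raise for 1% more output still lowers total cost / unit.
# All numbers are hypothetical.

def total_cost_per_unit(units, labor_cost_per_unit, overhead_per_day):
    # labor cost / unit is constant under pay-by-results;
    # overhead is fixed per day and spread over the units produced
    return labor_cost_per_unit + overhead_per_day / units

base  = total_cost_per_unit(100, 10.0, 3000.0)   # 10 + 30.0 = 40.0
plus1 = total_cost_per_unit(101, 10.0, 3000.0)   # 10 + 29.7 = 39.7
print(base, plus1)
```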
2.
ESTABLISHING TIME STANDARDS
There are two basic strategies: nonengineered (subjective) standards ("did take" times) and engineered (objective) standards ("should take" times). The techniques to use depend upon the cost of obtaining the information and the benefits of using the information.
2.1.
Nonengineered (Type 2) Estimates
"Quick and dirty" information can be obtained at low cost. But using dirty information increases the risk of errors in decisions. Since nonengineered standards are not preceded by methods or quality analysis, they are "did take" times, not "should take" times. There are four approaches: historical records, ask expert, time logs, and work (occurrence) sampling.
2.1.1.
Historical Records
Standards from historical records tend to be very "dirty" (although cheap). For example, in the warehouse, how many cases can be picked per hour? From shipping records, determine the number of cases shipped in January, February, and March. From personnel, determine the number of employees in shipping in each month. Divide total cases / total hours to get cases / hr. This ignores changes in product output, product mix, absenteeism, delays, and so on.
TIME STANDARDS
2.1.2.
Ask Expert
Here you ask a knowledgeable expert how long a job will take. For example, ask the maintenance supervisor how long it will take to paint a room. Ask the sales supervisor how many customers can be contacted per week. A serious problem is that the expert may have an interest in the answer. For example, a "hungry" maintenance supervisor wants work for his group and so quotes a shorter painting time; a sales supervisor may be able to hire more staff (and thus increase her prestige and power) by giving a low estimate of customers / sales representative.
2.1.3.
Time Logs
It may be that a job is "cost plus" and so the only problem is how many hours to charge to a customer. For example, an engineer might write down, for Monday, 4.0 hr for project A, 2.5 hr for B, and 3.5 hr for C. Obviously there are many potential errors here (especially if work is done on project D, for which there no longer is any budget).
2.1.4.
Work (Occurrence) Sampling
This technique is described in more detail in Chapter 54. It is especially useful when a variety of jobs are done intermittently (e.g., as in maintenance or office work). Assume that during a three-week period a maintainer spends 30% of the time doing carpentry, 40% painting, and 30% miscellaneous; this means 120 hr x 0.4 = 48 hr for painting. During the three-week period, 10,000 ft2 of wall were painted, or 10,000 / 48 = 208 ft2 / hr. Note that the work method used, work rate, delays, production schedule, and so on are not questioned.
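The sampling arithmetic above, written out:

```python
# The work sampling arithmetic from the text: 3 weeks = 120 hr observed,
# 40% of the sampled observations were painting, 10,000 ft2 painted.

period_hr = 120.0          # three 40-hr weeks
painting_fraction = 0.40   # from the sampling observations
wall_area_ft2 = 10_000.0

painting_hr = period_hr * painting_fraction   # 120 x 0.4 = 48 hr
rate = wall_area_ft2 / painting_hr            # ft2 / hr
print(painting_hr, round(rate))               # 48.0 208
```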
2.2.
Engineered (Type 1) Estimates
mined times); and (2) the analyst must imagine the work method; even experienced
analysts may overlook some details or low-frequency elements.
3.
ADJUSTMENTS TO TIME: ALLOWANCES
3.1.
Three Levels of Time
Time is reported at three levels:
1. Observed time: The raw (unadjusted) time taken by the worker. It does not have any rating, allowance, or learning adjustment.
2. Normal time: Normal time = (observed time)(rating). The observer estimates the pace of the worker in relation to normal pace. Normal time is the time an experienced operator takes when working at a 100% pace. See Chapter 54 for more details.
3. Standard time: For allowances expressed as a percent of shift time: Standard time = normal time / (1 - allowances). For allowances expressed as a percent of work time: Standard time = normal time (1 + allowances). It is a policy decision by the firm whether to give allowances as a percent of shift or work time. Normal time is increased to standard time by personal, fatigue, and delay allowances.
3.2.
Personal Allowances
Personal allowances are given for such things as blowing your nose, going to the toilet, getting a drink of water, smoking, and so on. They do not vary with the task but are the same for all tasks in the firm. There is no scientific or engineering basis for the percent to give. Values of 5% (24 minutes in a 480-minute day) seem to be typical. Most firms have standardized break periods (coffee breaks), for example, 10 minutes in the first part of the shift and the same in the second part. It is not clear whether most firms consider this time as part of the personal allowance or in addition to it. The midshift meal break (lunch) is another question. This 20-60-minute break obviously permits the worker to attend to personal needs and recover from fatigue. Yet lunch usually is not considered as part of allowances, even if the lunch period is paid. Some firms give an additional break if work is over 8 hours. For example, if a shift is over 10 hours, there is an additional break of 10 minutes after the 9th hour. In addition, some firms give an additional allowance to all workers for cleanup (either of the person or the machine), putting on and taking off protective clothing, or travel. In mines, the travel allowance is called portal-to-portal pay; pay begins when the worker crosses the mine portal, even though the worker will not arrive at the working surface until some time later.
3.3.
Fatigue Allowances
The rationale of fatigue allowances is to compensate the person for the time lost due to fatigue. In contrast to personal allowances, which are given to everyone, fatigue allowances are given only for cause, that is, for fatigue. No fatigue? Then no fatigue allowance! Another challenge is the concept of machine time. With the increasing capabilities of servomechanisms and computers, many machines operate semiautomatically (the operator is required only to load / unload the machine) or automatically (the machine loads, processes, and unloads). During the machine time of the work cycle, the operator may be able to drink coffee (personal allowance) or tal
applicable fatigue allowance points. Then, using Table 1, convert points to perc
ent time. For a more detailed discussion of allowances, see Konz and Johnson (20
00, chap. 32). The fatigue factors are grouped into three categories: physical,
mental, and environmental.
3.3.1.
Physical: Physical Fatigue
Table 2 shows how the ILO makes a distinction among carrying loads, lifting loads, and force applied. In the NIOSH lifting guideline, the lift origin and destination, frequency of move, angle, and container are considered as well as load.
3.3.2.
Physical: Short Cycle
Table 3 gives the fatigue allowance to allow time for the muscles to recover.
3.3.3.
Physical: Static Load (Body Posture)
Table 4 gives the allowance for poor posture.
3.3.4.
Physical: Restrictive Clothing
Table 5 gives the allowance for restrictive clothing.
3.3.5.
Mental: Concentration / Anxiety
Table 6 gives the allowance for concentration / anxiety.
3.3.6.
Mental: Monotony
Table 7 gives the allowance for monotony. In the authors' opinion, allowances for monotony, boredom, lack of a feeling of accomplishment, and the like are questionable. These factors are unlikely to cause fatigue and thus increase time / cycle. These factors primarily reflect unpleasantness and thus should be reflected in the wage rate / hr rather than the time / unit.
3.3.7.
Environmental: Climate
Table 8 gives the allowance for climate.
3.3.8.
Environmental: Dust, Dirt, and Fumes
Table 9 gives the allowance for dust, dirt, and fumes.
TABLE 1  Conversion from Points Allowance to Percent Allowance for ILO

Points    0    1    2    3    4    5    6    7    8    9
   0     10   10   10   10   10   10   10   11   11   11
  10     11   11   11   11   11   12   12   12   12   12
  20     13   13   13   13   14   14   14   14   15   15
  30     15   16   16   16   17   17   17   18   18   18
  40     19   19   20   20   21   21   22   22   23   23
  50     24   24   25   26   26   27   27   28   28   29
  60     30   30   31   32   32   33   34   34   35   36
  70     37   37   38   39   40   40   41   42   43   44
  80     45   46   47   48   48   49   50   51   52   53
  90     54   55   56   57   58   59   60   61   62   63
 100     64   65   66   68   69   70   71   72   73   74
 110     75   77   78   79   80   82   83   84   85   87
 120     88   89   91   92   93   95   96   97   99  100
 130    101  103  105  106  107  109  110  112  113  115
 140    116  118  119  121  122  123  125  126  128  130

From International Labour Office, Introduction to Work Study, 4th (Rev.) Ed., pp. 491-498. Copyright International Labour Organization 1992. The left column gives the tens of points and the column headings give the units. Thus, 30 points (0 column) = 15%; 31 points (1 column) = 16%; 34 points (4 column) = 17%. The percent allowance is for manual work time (not machine time) and includes 5% personal time for coffee breaks.
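A lookup helper for Table 1, assuming the transcription of the table above; the tens of points select the row and the units digit selects the column.

```python
# Lookup for the ILO points-to-percent conversion (Table 1, as transcribed
# above). ILO_TABLE[r][c] is the percent allowance for 10*r + c points.

ILO_TABLE = [
    [10, 10, 10, 10, 10, 10, 10, 11, 11, 11],             # 0-9 points
    [11, 11, 11, 11, 11, 12, 12, 12, 12, 12],             # 10-19
    [13, 13, 13, 13, 14, 14, 14, 14, 15, 15],             # 20-29
    [15, 16, 16, 16, 17, 17, 17, 18, 18, 18],             # 30-39
    [19, 19, 20, 20, 21, 21, 22, 22, 23, 23],             # 40-49
    [24, 24, 25, 26, 26, 27, 27, 28, 28, 29],             # 50-59
    [30, 30, 31, 32, 32, 33, 34, 34, 35, 36],             # 60-69
    [37, 37, 38, 39, 40, 40, 41, 42, 43, 44],             # 70-79
    [45, 46, 47, 48, 48, 49, 50, 51, 52, 53],             # 80-89
    [54, 55, 56, 57, 58, 59, 60, 61, 62, 63],             # 90-99
    [64, 65, 66, 68, 69, 70, 71, 72, 73, 74],             # 100-109
    [75, 77, 78, 79, 80, 82, 83, 84, 85, 87],             # 110-119
    [88, 89, 91, 92, 93, 95, 96, 97, 99, 100],            # 120-129
    [101, 103, 105, 106, 107, 109, 110, 112, 113, 115],   # 130-139
    [116, 118, 119, 121, 122, 123, 125, 126, 128, 130],   # 140-149
]

def percent_allowance(points):
    return ILO_TABLE[points // 10][points % 10]

print(percent_allowance(30), percent_allowance(31), percent_allowance(34))
# 15 16 17, matching the examples in the table footnote
```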
n equipment with twice the capacity costs less than twice as much. Then capital
cost / unit is reduced and fewer work hours are needed / unit of output.
4.1.3.
Quantifying Improvement
"Practice makes perfect" has been known for a long time. Wright (1936) took a key step when he published manufacturing progress curves for the aircraft industry. Wright made two major contributions. First, he quantified the amount of manufacturing progress for a specific product. The equation was of the form Cost = a (number of airplanes)^b (see Figure 1). But the second step was probably even more important: he made the data a straight line (by putting the curve in the axis!) (see Figure 2). That is, the data are on a log-log scale.
Figure 1  Practice Makes Perfect. As more and more units are produced, the fixed cost is divided by more and more units, so fixed cost / unit declines. In addition, variable cost / unit declines because fewer mistakes are made, less time is spent looking up instructions, better tooling is used, and so on. The variable cost data usually can be fitted with an equation of the form y = ax^b. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.)
Figure 2  Log-Log Scale. Supervisors like straight lines. Plotting y = ax^b on log-log paper gives a straight line. The key piece of information supervisors desire is the rate of improvement: the slope of the line. The convention is to refer to reduction with doubled quantities. If quantity x1 = 8, then quantity x2 = 16. Then if cost at x1 is y1 = 100 and cost at x2 is y2 = 80, this is an 80% curve. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.)
Figure 3  Cartesian Coordinates, Semi-Log Coordinates, and Log-Log Coordinates. Cartesian coordinates have equal distances for equal numerical differences; that is, the linear difference from 1 to 3 is the same as from 8 to 10. On a log scale, the same distance represents a constant ratio; that is, the distance from 2 to 4 is the same as from 30 to 60 or 1000 to 2000. Semi-log paper has one axis Cartesian and one axis log. Log-log (double log) paper has a log scale on both axes. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.)
Figure 4  Average Cost / Unit from Table 13. Table 13 gives a 79% curve. Cost / unit is the cost of the nth unit; average cost / unit is the sum of the unit costs / n. Cost / unit can be estimated by multiplying average cost / unit by the factor from Table 12. The average cost of the first 20 units is estimated as 25.9 from the fitted line; the cost of the 20th unit is 25.9 (0.658) = 17.0 hr. (From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.)
TABLE 12  Factors for Various Improvement Curves

Improvement Curve, %           Learning Factor, b,    Multiplier to Determine    Multiplier to Determine
(Between Doubled Quantities)   for Curve y = ax^b     Unit Cost if Average       Average Cost if Unit
                                                      Cost Is Known              Cost Is Known
70                             -0.515                 0.485                      2.06
72                             -0.474                 0.524                      1.91
74                             -0.434                 0.565                      1.77
76                             -0.396                 0.606                      1.65
78                             -0.358                 0.641                      1.56
80                             -0.322                 0.676                      1.48
82                             -0.286                 0.709                      1.41
84                             -0.252                 0.746                      1.34
85                             -0.234                 0.763                      1.31
86                             -0.218                 0.781                      1.28
88                             -0.184                 0.813                      1.23
90                             -0.152                 0.847                      1.18
92                             -0.120                 0.877                      1.14
94                             -0.089                 0.909                      1.10
95                             -0.074                 0.926                      1.08
96                             -0.059                 0.943                      1.06
98                             -0.029                 0.971                      1.03

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. The multipliers in columns 3 and 4 are large-quantity approximations. For example, for a 90% curve, the table value in column 4 is 1.18. A more precise value at a quantity of 10 = 1.07, at 50 = 1.13, and at 100 = 1.17. A more precise value for an 85% curve at a quantity of 100 = 1.29; a more precise value for a 95% curve at a quantity of 100 = 1.077.
TABLE 13  Time and Completed Units as They Might Be Reported for a Product

          Units Completed           Direct Labor Hours   Cumulative Units   Cumulative Work Hours   Average Work
Month     (Pass Final Inspection)   Charged to Project   Completed          Charged to Project      hr / unit
March          14                        410                  14                  410                  29.3
April           9                        191                  23                  601                  26.1
May            16                        244                  39                  845                  21.7
June           21                        284                  60                 1129                  18.8
July           24                        238                  84                 1367                  16.3
August         43                        401                 127                 1708                  13.4

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.
On a log scale, the physical distance between doubled quantities is constant (i.e., 8 to 16 is the same distance as 16 to 32 or 25 to 50) (see Figure 4). Wright gave the new cost as a percent of the original cost when the production quantity had doubled. If cost at unit 10 was 100 hr and cost at unit 20 was 85 hr, then this was an 85 / 100 = 85% curve. Since the curve was a straight line, it was easy to calculate the cost of the 15th or the 50th unit. If you wish to solve the y = ax^b equation instead of using a graph (calculators are much better now than in 1935), see Table 12. For example, assume the time for a (i.e., cycle 1) = 10 min and there is a 90% curve (i.e., b = -0.152); then the time for the 50th unit is y = 10(50)^-0.152 = 10 / (50)^0.152 = 5.52 min. Table 13 shows how data might be obtained for fitting a curve. During the month of March, various people wrote down on charge slips a total of 410 hours against this project charge number.
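The worked example above can be checked directly; the learning factor is b = log2(curve), which reproduces the -0.152 of Table 12 for a 90% curve.

```python
import math

# y = a * n**b for the text's example: first-cycle time a = 10 min and a
# 90% curve, so b = log2(0.90) = -0.152.

def unit_time(a, curve, n):
    b = math.log2(curve)       # negative for curve < 1 (cost falls)
    return a * n ** b

t50 = unit_time(10.0, 0.90, 50)
print(round(t50, 2))               # ~5.52 min, as in the text
print(round(math.log2(0.90), 3))   # -0.152
```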
The average work hours / unit during March then becomes 29.3. The average x-coordinate is (1 + 14) / 2 = 7.5. Because the curve shape is changing so rapidly in the early units, some authors recommend plotting the first lot at the 1/3 point [(1 + 14) / 3] and points for all subsequent lots at the midpoint. During April, 9 units passed final inspection and 191 hours were charged against the project. Cumulative hours of 601 divided by cumulative completed output of 23 gives average hr / unit of 26.1. The 26.1 is plotted at (15 + 23) / 2 = 19. As you can see from the example data, there are many possible errors in the data, so a curve more complex than a straight line on log-log paper is not justified. Figure 4 shows the resulting curve.

Although average cost / unit is what is usually used, you may wish to calculate cost at a specific unit. Conversely, the data may be for specific units and you may want average cost. Table 12 gives the multiplier for various slopes. The multipliers are based on the fact that the average cost curve and the unit cost curve are parallel after an initial transient. The initial transient usually is 20 units, although it could be as few as 3. The multiplier for a 79% slope is (0.641 + 0.676) / 2 = 0.658. Thus, if we wish to estimate the cost of the 20th unit, it is (24.9 hr)(0.658) = 16.4 hr.

Cost / unit is especially useful in scheduling. For example, if 50 units are scheduled for September, then work-hr / unit (for a 79% curve) at unit 127 = (13.4)(0.658) = 8.8 and at unit 177 = 7.8. Therefore between 390 and 440 hours should be scheduled.

Looking at Figure 4, you can see that the extrapolated line predicts cost / unit at 200 to be 11.4 hr, at 500 to be 8.3, and at 1,000 to be 6.6. If we add more cycles on the paper, the line eventually reaches a cost of zero at cumulative production of 200,000 units. Can cost go to zero? Can a tree grow to the sky? No. The log-log plot increases understanding of improvement, but it also deceives. Note that cost / unit for unit 20 was 24.9 hr. When output was doubled to 40 units, cost dropped to 19.7; doubling to 80 dropped cost to 15.5; doubling to 160 dropped cost to 12.1; doubling to 320 dropped cost to 9.6; doubling to 640 dropped cost to 7.6. Now consider the improvement for each doubling. For the first doubling from 20 to 40 units, cost dropped 5.20 hr, or 0.260 hr / unit of extra experience. For the next doubling from 40 to 80, cost dropped 4.2 hr, or 0.105 hr / unit of extra experience. For the doubling from 320 to 640, cost dropped 2.0 hr, or 0.006 hr / unit of extra experience. In summary, the more experience, the more difficult it is to show additional improvement.

Yet the figures would predict zero cost at 200,000 units, and products just aren't made in zero time. One explanation is that total output of the product, in its present design, is stopped before 200,000 units are produced. In other words, if we no longer produce Model Ts and start to produce Model As, we start on a new improvement curve at zero experience. A second explanation is that the effect of improvement in hours is masked by changes in labor wages / hr. The Model T Ford had a manufacturing progress rate of 86%. In 1910, when 12,300 Model T Fords had been built, the price was $950. When it went out of production in 1926 after a cumulative output of 15,000,000, the price was $270; $200 in constant prices plus inflation of $70. The third explanation is that straight lines on log-log paper are not perfect fits over large ranges of cycles. If output is going to go to 1,000,000 cumulative units over a 10-year period, you really shouldn't expect to predict the cost of the 1,000,000th unit (which will be built 10 years from the start) from the data of the first 6 months. There is too much change in economic conditions, managers, unions, technology, and other factors. Anyone who expects the future to be perfectly predicted by a formula has not yet lost money in the stock market.
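The Table 13 bookkeeping (cumulative averages plus the recommended plotting positions: first lot at its 1/3 point, later lots at their midpoints) can be sketched as:

```python
# Reproducing the Table 13 bookkeeping. Each lot is (units completed,
# direct labor hours charged); output is (plot x-coordinate, cumulative
# average hr / unit).

lots = [(14, 410), (9, 191), (16, 244), (21, 284), (24, 238), (43, 401)]

cum_units = cum_hours = 0
points = []
for i, (units, hours) in enumerate(lots):
    start = cum_units + 1          # first unit number in this lot
    cum_units += units
    cum_hours += hours
    if i == 0:
        x = (start + cum_units) / 3    # 1/3 point for the first lot
    else:
        x = (start + cum_units) / 2    # midpoint for later lots
    points.append((x, cum_hours / cum_units))

for x, avg in points[:2]:
    print(round(x, 1), round(avg, 1))  # 5.0 29.3, then 19.0 26.1
```

Fitting a straight line to log(x) vs. log(avg) for these points then gives the slope (the improvement rate) that Figure 4 displays.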
4.1.4.
Typical Values for Organization Progress
The rate of improvement depends on the amount that can be learned. The more that
can be learned, the more will be learned. The amount that can be learned depend
s upon two factors: (1) amount of previous experience with the product and (2) e
xtent of mechanization. Table 14 gives manufacturing progress as a function of t
he manual / machine ratio. Allemang (1977) estimates percent progress from
TABLE 14  Prediction of Manufacturing Progress

Percent of Task Time        Manufacturing
Manual        Machine       Progress, %
25            75            90
50            50            85
75            25            80

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.
product design stability, a product characteristics table (complexity, accessibility, close tolerances, test specifications, and delicate parts), parts shortage, and operator learning. Konz and Johnson (2000) have two detailed tables giving about 75 manufacturing progress rates reported in the literature.
4.1.5.
Typical Values for Learning
Assume learning has two components: (1) cognitive learning and (2) motor learning (Dar-El et al. 1995a, b). Cognitive learning has a greater improvement (say a 70% curve), while motor learning is slower (say a 90% curve). For a task with both types, initially the cognitive dominates and then the motor learning dominates. Use values of 70% for pure cognitive, 72.5% for high cognitive, 77.5% for more cognitive than motor, 82.5% for more motor than cognitive, and 90% for pure motor. Konz and Johnson (2000) give a table of 43 tasks for which learning curves have been reported. The improvement takes place through reduction of fumbles and delays rather than greater movement speed. Stationary motions such as position and grasp improve the most, while reach and move improve little. It is reduced information-processing time rather than faster hand speed that affects the reduction. The range of times and the minimum time of elements show little change with practice. The reduction is due to a shift in the distribution of times; the shorter times are achieved more often and the slower times less often, "going slowly less often" (Salvendy and Seymour 1973). The initial time for a cognitive task might be 13-15 times the standard time; the initial time for a manual task might be 2.5 times the standard time.
4.1.6.
Example Applications of Learning
Table 15 shows the effect of learning / manufacturing progress on time standards. The fact that labor hr / unit declines as output increases makes computations using the applications of standard time more complicated. Ah, for the simple life!

4.1.6.1. Cost Allocation  Knowing what your costs are is especially important if you have a make-buy decision or are bidding on new contracts. If a component is used on more than one product (standardization), it can progress much faster on the curve since its sales come from multiple sources. Manufacturing progress also means that standard costs quickly become obsolete. Note that small lots (say, due to a customer emergency) can have very high costs. For example, if a standard lot size is 100, labor cost is 1 hr / unit, and there is a 95% curve, a lot of 6 would have a labor cost about 23% higher (1.23 hr / unit). Consider charging more for special orders!

4.1.6.2. Scheduling  Knowing how many people are needed and when is obviously an important decision. Also, learning / manufacturing progress calculations will emphasize the penalties of small lots.

4.1.6.3. Evaluation of Alternatives  When alternatives are being compared, a pilot project might be run. Data might be obtained for 50-100 cycles. Note that the times after the pilot study should be substantially shorter due to learning. In addition, the learning / manufacturing progress rate for alternatives A and B might differ, so that what initially is best will not be best in the long run.
TABLE 15  Demonstration of the Learning Effect on Time Standards

Learning      Time / Unit at             Percent of Standard at
Curve, %      2X      4X      32X        2X      4X      32X
98            0.98    0.96    0.90       102     104     111
95            0.95    0.90    0.77       105     111     129
90            0.90    0.81    0.59       111     124     169
85            0.85    0.72    0.44       118     138     225

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission. Even a small learning rate can have a major effect on performance. X = experience level of the operator when the time study was taken, for example, 50, 100, or 500 cycles. The table gives time / unit based on a time standard of 1.0 min / unit; therefore, if the actual time standard were 5.0 min / unit, then time / unit at 98% and 2X would be 0.98 (5.0) = 4.9.
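Table 15's entries follow from the doubling rule: at m times the original experience X, time / unit is curve raised to the power log2(m).

```python
import math

# Time / unit at a multiple of the original experience level X, for a
# given improvement curve (e.g., 0.95 for a 95% curve).

def time_factor(curve, multiple):
    return curve ** math.log2(multiple)

print(round(time_factor(0.95, 4), 2))      # 0.90, as in Table 15
print(round(time_factor(0.90, 32), 2))     # 0.59, as in Table 15
print(round(100 / time_factor(0.95, 32)))  # 129 percent of standard
```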
4.1.6.4. Acceptable Day's Work  Assume a time standard y is set at an experience level x. Assume further, for ease of understanding, that y = 1.0 min and x = 100 units. A time study technician, Bill, made a time study of the first 100 units produced by Sally, calculated the average time, and then left. Sally continued working. Let's assume a 95% rate is appropriate. Sally completes the 200th piece shortly before lunch and the 400th by the end of the shift. The average time / unit for the first day is about 0.9 min (111% of standard). The second day, Sally completes the 800th unit early in the afternoon. She will complete the 3200th unit by the end of the week. The average time of 0.77 min / unit during the week yields 129% of standard!

Point: If none of the operators in your plant ever turns in a time that improves as they gain experience, do you have stupid operators or stupid supervisors? Next point: The magnitude of the learning effect dwarfs potential errors in rating. You might be off 5% or 10% in rating, but this effect is trivial compared with the errors in using a time standard without considering learning / manufacturing progress. Third point: Any time standard that does not consider learning / manufacturing progress will become less and less accurate with the passage of time. Final point: Since learning and manufacturing progress are occurring, output and number of work hours should not both be constant. Either the same number of people should produce more or fewer people can produce a constant output.
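The Sally example can be checked by simulation. One reading, assumed here, is that the standard equals the average time of the first 100 units (the units Bill observed) and that unit k takes time proportional to k raised to log2(0.95).

```python
import math

# Sketch of the Sally example: 95% curve, standard assumed to be the
# average time of the first 100 units.

b = math.log2(0.95)    # about -0.074

def avg_time(n_units, first_cycle=1.0):
    """Average time / unit over units 1..n_units, with unit k taking k**b."""
    return sum(first_cycle * k ** b for k in range(1, n_units + 1)) / n_units

standard = avg_time(100)
day1 = avg_time(400) / standard     # first day: roughly 0.90 of standard
week1 = avg_time(3200) / standard   # first week: roughly 0.77 of standard
print(round(day1, 2), round(100 / week1))   # ~129% of standard for the week
```

The ratios land close to the 0.9 min (111%) and 0.77 min (129%) figures in the text, which themselves come from the 4X and 32X columns of Table 15.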
5.
DOCUMENTING, USING, AND MAINTAINING STANDARDS
5.1.
Documenting Standards
Standards are part of a goal-setting system, and control is essential to any goal-setting system. The more detailed the data, the more detailed the possible analysis; you can always consolidate data after they are gathered, but you can't break the data down if they are consolidated before they are gathered. Computerization permits detailed recording and thus analysis of downtime, machine breakdown, setup time, and so on. With bar coding of parts and computer terminals at workstations, it is feasible to record times for each individual part. For example, for product Y, operator 24 completed operation 7 on part 1 at 10:05, part 2 at 10:08, and so on. More commonly, however, you would just record that for product Y, operator 24 started operation 7 at 10:00 and completed the 25 units at 11:50. Least useful would be recording, for product Y, that all operations on 25 units were completed on Tuesday. Companies tend to develop elaborate codes for types of downtime, quality problems, and so on. Be careful about downtime reporting: it is easy to abuse.
5.2.
Using Standards
5.2.1.
Reports
Variance is the difference between standard performance and actual performance. Generally, attention is focused on large negative variances so that impediments to productivity can be corrected. Performance should be fed back to both workers and management at least weekly. Daily reports highlight delays and production problems; monthly reports smooth the fluctuations and show long-term trends.
5.2.2.
Consequences of Not Making Standard
If the standard is used to determine an acceptable day's work or pay incentive wages, the question arises: What if a person doesn't produce at a standard rate? For a typical low-task standard such as methods-time measurement (MTM), over 99% of the population should be able to achieve standard, especially after allowances are added. The relevant question is not what workers are able to do but what they actually do. Comparisons of performance vs. standard should be over a longer time period (such as a week) rather than a short period (such as a day). The first possibility to consider is learning. As noted above, for cognitive work, the first cycle time may be as much as 13 times the MTM standard and, for motor work, 2.5 times the MTM standard. If a new operator's performance is plotted vs. the typical learning curve for that job, you can see whether the operator is making satisfactory progress. For example, Jane might be expected to achieve 50% of standard the first week, 80% the second week, 90% the third week, 95% the fourth week, and 100% the fifth week. Lack of satisfactory progress implies a need for training (i.e., the operator may not be using a good method). If Jane is a permanent, experienced worker, the below-standard performance could be considered excused or nonexcused. Excused failure is for temporary situations: bad parts from the supplier, back injuries, pregnancy, and so forth. For example, "Employees returning to work from Workers' Compensation due to a loss-of-time accident in excess of 30 days will be given consideration based upon the medical circumstances of each individual case."
Nonexcused performances are those for which the worker is considered capable of achieving the standard but did not make it. Table 16 shows some example penalties. Most firms have a "forget" feature; for example, one month of acceptable performance drops you down a step. The use of an established discipline procedure allows workers to self-select themselves for a job. The firm can have only minimal preemployment screening and thus not be subject to discrimination charges. Standards are based on an eight-hour day, but people work longer and shorter shifts. Because most standards tend to be loose, people can pace themselves, and output/hr tends to be constant over the shift. I do not recommend changing the standard for shifts other than eight hours. The level at which discipline takes place is negotiable between the firm and the union. For example, it might be 95% of standard; that is, as long as workers perform above 95% of standard, they are considered satisfactory. However, this tends to hold overall performance only slightly higher, say 98%. The best long-range strategy probably is to set discipline at 100% of standard. Anything less will give a long-term loss in production, especially if a measured daywork system is used instead of incentives. Firms can use several strategies to improve the group's performance and reduce output restrictions. Basically, they allow the employees as well as the firm to benefit from output over 100%. The primary technique is to give money for output over 100%: a 1% increase in pay for a 1% increase in output is the prevalent system. Another alternative is to give the worker time off for output over 100%. For example, allow individuals to "bank" weekly hours earned over 100%. Most people will soon run up a positive balance to use as "insurance." This can be combined with a plan in which all hours in the bank over (say) 20 hr are given as scheduled paid time off. This tends to reduce absenteeism because workers can use the paid time off for personal reasons.
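The hour-banking plan just described can be sketched as follows (the 20-hr cap is from the text; the function and its signature are illustrative):

```python
def update_bank(balance, standard_hours_earned, hours_worked, cap=20.0):
    """Credit hours earned over 100% performance to the worker's bank;
    any balance above the cap is paid out as scheduled paid time off."""
    balance += max(0.0, standard_hours_earned - hours_worked)
    payout = max(0.0, balance - cap)
    return balance - payout, payout

# A worker producing 42 standard hours in a 40-hr week, starting from
# an 18-hr bank: the first week tops the bank up to the cap; the
# second week pays 2 hr out as scheduled paid time off.
balance, payout = update_bank(18.0, 42.0, 40.0)
print(balance, payout)  # 20.0 0.0
balance, payout = update_bank(balance, 42.0, 40.0)
print(balance, payout)  # 20.0 2.0
```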
5.3.
Maintaining Standards (Auditing)
A standard can restrict productivity if it has not been updated, because workers will produce only to the obsolete standard and not to their capabilities. What to audit? To keep the work-measurement system up to date, accurate, and useful, MIL-STD-1567A says the audit should determine (1) the validity of the prescribed coverage, (2) the percentage of type I and II coverage, (3) use of labor standards, (4) accuracy of reporting, (5) attainment of goals, and (6) results of corrective actions regarding variance analysis. How often to audit? Auditing should be on a periodic schedule. A good procedure is to set an expiration date on each standard at the time it is set; MIL-STD-1567 says annually. A rule of thumb is to have it expire at 24 months if the application is under 50 hr/year, at 12 months if between 50 and 600 hr/year, and at 6 months if over 600 hr/year. Then, when the standard expires, and if it is still an active job, an audit is made. If it is not active, the standard will be converted from permanent to temporary. Then, if the job is resumed, the temporary standard can be used for a short period (e.g., 30 days) until a new permanent standard is set. An advantage of a known expiration date is that if a standard is audited (and perhaps tightened), the operator will not feel picked on. If the resources for auditing are not sufficient for doing all the audits required, use the Pareto principle: audit the "mighty few" and don't audit the "insignificant many." However, when the standard on one of the "insignificant many" passes the expiration date, convert the permanent standard to temporary.
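The expiration rule of thumb can be written directly (a sketch; the breakpoints are those given above, and we assume exactly 50 or 600 hr/year falls in the middle band):

```python
def expiration_months(hours_per_year):
    """Rule-of-thumb life of a time standard before it must be
    audited, keyed to how heavily the standard is applied."""
    if hours_per_year < 50:
        return 24
    if hours_per_year <= 600:
        return 12
    return 6

print(expiration_months(10))    # 24
print(expiration_months(300))   # 12
print(expiration_months(1200))  # 6
```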
TABLE 16 Example Discipline Levels for Not Producing Enough

Step  Description
0     Normal operator, acceptable performance
1     Oral warning
2     Oral warning; detailed review of method with supervisor or trainer
3     Written warning; additional training
4     Written warning; some loss of pay
5     Written warning; larger loss of pay
6     Discharge from job

From Work Design: Industrial Ergonomics, 5th Ed., by S. Konz and S. Johnson. Copyright © 2000 by Holcomb Hathaway, Pub., Scottsdale, AZ. Reprinted with permission.

A published set of rules ensures that everyone is treated fairly. Most organizations have a similar set of rules for tardiness and absenteeism.
MANAGEMENT, PLANNING, DESIGN, AND CONTROL
REFERENCES
Allemang, R. (1977), "New Technique Could Replace Learning Curves," Industrial Engineering, Vol. 9, No. 8, pp. 22–25.
Dar-El, E., Ayas, K., and Gilad, I. (1995a), "A Dual-Phase Model for the Individual Learning Process in Industrial Tasks," IIE Transactions, Vol. 27, pp. 265–271.
Dar-El, E., Ayas, K., and Gilad, I. (1995b), "Predicting Performance Times for Long Cycle Tasks," IIE Transactions, Vol. 27, pp. 272–281.
International Labour Office (1992), Introduction to Work Study, 4th (Rev.) Ed., International Labour Office, Geneva.
Konz, S., and Johnson, S. (2000), Work Design: Industrial Ergonomics, 5th Ed., Holcomb Hathaway, Scottsdale, AZ.
Salvendy, G., and Seymour, W. (1973), Prediction and Development of Industrial Work Performance, John Wiley & Sons, New York, p. 17.
Wright, T. (1936), "Factors Affecting the Cost of Airplanes," Journal of Aeronautical Sciences, Vol. 3, February, pp. 122–128.
being employed, regular audits should be scheduled by the time study analysts. The more frequently a standard is applied, the more frequent the audits should be. Time study audits do not necessarily lead to conducting a time study of the entire method. If, however, a major modification in the method is observed (e.g., change in tools used, different sequence in operations, change in materials processed, change in process or approach), then a new detailed study must be performed.
1.1.
Basic Procedure of Work Measurement
In general, the basic procedure of work measurement uses the following steps:
1. Select the work to be studied.
2. Record all the relevant data relating to the circumstances in which the work is being done, the methods, and the elements of activity in them.
3. Examine the recorded data and the detailed breakdown critically to ensure that the most effective method and motions are being used and that unproductive and foreign elements are separated from productive elements.
, with reasonable allowances for personal delays, unavoidable delays, and fatigue. Employees are expected to perform the prescribed method at a pace, neither fast nor slow, that may be considered representative of a full day's output by a well-trained, experienced, and cooperative operator.
2.2.
Time Study Equipment
The equipment needed to develop reliable standards is minimal, easy to use, and often inexpensive. Only four basic items are needed: an accurate and reliable stopwatch, a time study board, a well-designed time study form, and a calculator to compute the recorded observations. Videotape equipment can also be very useful.
2.2.1.
Time-Recording Equipment
Several types of time-recording equipment are available today. The most common are decimal-minute stopwatches, computer-assisted electronic stopwatches, and videotape cameras. Two types of stopwatches are used today: the mechanical decimal-minute watch and the electronic stopwatch. The mechanical decimal-minute watch has 100 divisions on its face, each division equal to 0.01 min; one minute requires one revolution of the long hand. The small dial on the watch face has 30 divisions, each representing 1 min. For every full revolution of the long hand, the small hand moves 1 division, or 1 min. Electronic stopwatches provide resolution to 0.001 sec and an accuracy of ±0.002%. They are lightweight and have a digital display. They permit the timing of any number of individual elements
The COMPU-RATE, developed by Faehr Electronic Timers, Inc., is a portable battery-powered device providing about 120 hr of running time. Manual entries are required only for the element column and the top four lines of the form. This allows the analyst to concentrate on observing the work and the operator's performance. Time can be in thousandths of a minute or hundred-thousandths of an hour. The COMPU-RATE software computes the mean and median element values, the adjustment of mean times to normal time after the performance rating is input, and allowed times in minutes and/or hours per piece and pieces per hour. Errors can be corrected through an edit function. The GageTalker Corporation (formerly Observational Systems) markets the OS-3 Plus Event Recorder. This versatile recorder is useful for setting and updating standard times, machine downtime studies, and work sampling. The device allows the analyst to select the time units appropriate for the study: 0.001 min, 0.0001 hr, or 0.1 sec. The total time, frequency, mean times, performance ratings, normal times, allowances, standard times, and pieces per hour may be printed through printer interface devices. In addition, the standard deviation and the maximum and minimum element values and frequencies are also available. Videotape cameras are an excellent means for recording operators' methods and elapsed time to facilitate performance rating. By recording exact details of the method used, analysts can study the operation frame by frame and assign normal time values. The tape can also be projected at the same speed at which the pictures were taken and the performance of the operator rated. For example, half a minute to deal 52 cards into a four-hand bridge deck is considered by many to be typical of normal performance. An operator carrying a 9 kg (20 lb) load a distance of 7.6 m (25 ft) may be expected to take 0.095 min when working at a normal pace (Niebel 1992). Observing the videotape is a fair and accurate way to rate performance because all the facts are documented. The videotape can also be used to uncover potential methods improvements that would not have been observed using a stopwatch procedure. Videotapes are also excellent training material for novice time study analysts, particularly for acquiring consistency in performance rating.
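The videotape benchmarks above imply a simple speed-rating calculation (a sketch; the 0.50-min card-dealing and 0.095-min carrying benchmarks are from the text, while the observed times are made-up inputs):

```python
def speed_rating(benchmark_time, observed_time):
    """Speed rating as a percentage: the ratio of the benchmark
    (normal-pace) time to the time actually observed on tape."""
    return round(100 * benchmark_time / observed_time)

# Dealing 52 cards observed at 0.40 min vs. the 0.50-min benchmark:
print(speed_rating(0.50, 0.40))   # 125 (faster than normal)
# Carrying a 9 kg load 7.6 m in 0.10 min vs. the 0.095-min benchmark:
print(speed_rating(0.095, 0.10))  # 95 (slightly slower than normal)
```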
2.2.2.
Time Study Board
It is often convenient for time study analysts to have a suitable board to hold the time study form and the stopwatch. The board should be light but made of sufficiently hard material; suitable materials include 1/4-in. plywood and smooth plastic. The board should have both arm and body contacts for a comfortable fit and ease of writing while being held. A clip holds the time study form. The stopwatch should be mounted in the upper right-hand corner of the board for right-handed observers. Standing in the proper position, the analyst should be able to look over the top of the watch to the workstation and follow the operator's movements while keeping both the watch and the time study form in the immediate field of vision.
2.2.3.
Time Study Forms
It is important that a well-designed form be used for recording elapsed time and working up the study. The time study form is used to record all the details of the time study. It should be designed so that the analyst can conveniently record watch readings, foreign elements (see Section 2.4.2), and rating factors and calculate the allowed time. It should also provide space to record all pertinent information concerning the operator being observed, the method studied, the tools and equipment used, the department where the operation is performed, and the prevailing working conditions. Figures 1 and 2 illustrate a time study form (Niebel 1992) that is sufficiently flexible to be used for practically any type of operation. The various elements of the operation are recorded horizontally across the top of the sheet, and the cycles studied are entered vertically, row-wise. The four columns under each element are R for rating, W for watch reading, OT for observed time, and NT for normal time. On the back of the time study form, a sketch of the workstation layout is drawn and a detailed description is given of the work elements, the tools and equipment utilized, and the work conditions. Computed results of the time study can also be summarized next to each work element defined.
2.3.
Requirements for Effective Time Study
Certain fundamental requirements must be met before the time study can be undertaken. First, all details of the method to be followed and the working conditions must be standardized at all points where the standard is to be used before the operation is studied. Furthermore, the operator should be thoroughly acquainted with the prescribed method before the study begins. Without standardization, the time standards will have little value and will continually be a source of distrust, grievances, and internal friction. Reasons for selecting a particular job may be any of the following:
1. The job in question is a new one, not previously carried out (new product, component, operation, or set of activities).
2. A change in material or method of working has been made and a new time standard is required.
Figure 1. Front of Time Study Form. (From Niebel 1992)
Figure 2. Back of Time Study Form. (From Niebel 1992)
2.4.1. Choosing an Operator
After reviewing the job in operation, both the supervisor and the analyst should agree that the job is ready to be studied. If more than one person is performing the job, several criteria should be considered in selecting a qualified operator for the study. In general, a qualified operator is one who has acquired the skill, knowledge, and other attributes to carry out the work to satisfactory standards of quantity, physical specifications, and safety. Normal pace is neither fast nor slow and gives due consideration to the physical, mental, and visual requirements of a specific job. The qualified operator usually performs the work consistently and systematically, making it easier for the analyst to apply a correct performance factor. The operator should be completely trained in the method and should demonstrate motivation in doing a good job. The operator should have confidence in both the time study methods and the analyst, be cooperative, and willingly follow suggestions made by the supervisor or the analyst. In cases where only one operator performs the operation, the analyst needs to be especially careful in establishing the performance rating, since the operator may be performing either fast or slow. Once the qualified operator has been selected, the analyst should approach the operator in a friendly manner and demonstrate understanding of the operation to be studied in order to obtain full cooperation. The operator should have the opportunity to ask questions and be encouraged to offer suggestions for improving the job. The analyst, in return, should answer all queries frankly and patiently and willingly receive the operator's suggestions. Winning the respect and goodwill of the operators helps establish a fair and acceptable time standard.
2.4.2.
Breaking the Job into Elements
For ease of measurement, the operation should be divided into groups of motions known as elements. An element is a distinct part of a specified job selected for convenience of observation. It is a division of work that can be easily measured with a stopwatch and that has readily identifiable breakpoints. To divide the operation into individual elements, the analyst should carefully observe the operator over several cycles. A work cycle is the sequence of elements that are required to perform a job or yield a unit of production. The sequence may include elements that do not occur every cycle. It is desirable to determine, prior to conducting the time study, the elements into which the operation is divided. Breaking down the operation into elements improves method description and provides good internal consistency during the time study. Elements also make it possible to reuse the data and permit different ratings for the work cycle. Elements are classified into the following types:
1. Repetitive element: an element that occurs in every work cycle of the job
2. Occasional element: an element that does not occur in every work cycle of the job but that may occur at regular or irregular intervals
3. Constant element: an element for which the basic time remains constant whenever it is performed (e.g., stop machine)
4. Variable element: an element for which the basic time varies in relation to some characteristics of the product, equipment, or process (e.g., walk X meters)
5. Manual element: an element performed by a worker
6. Machine element: an element automatically performed by a power-driven machine or process
7. Governing element: an element occupying a longer time than that of any other element being performed concurrently
8. Foreign element: an element observed during a study that, after analysis, is not found to be a necessary part of the job

Each element should be recorded in proper sequence, including a definition of its breakpoints or terminal points. The analyst should follow some basic guidelines when dividing the job into elements:
1. Ascertain that all elements being performed are necessary. If some are unnecessary and the objective is to come up with a standard time, the time study should be discontinued and a method study should be conducted.
2. Elements should be easily identifiable (with clear start and break-off points) so that, once established, they can be repeatedly recognized. Sounds (such as machine motor starts and stops) and changes in hand motion (putting down a tool, reaching for material) are good breakpoints.
3. Elements should be as short in duration as can be timed accurately. The minimum practical units for timing are generally considered to be 0.04 min, or 2.4 sec (Niebel and Freivalds 1999). For less-trained observers, it may be 0.07–0.10 min. Very short elements should, if possible, be next to long elements.
4. Elements should be chosen so that they represent naturally unified and recognizably distinct segments of the operation. To illustrate, reaching for a tool can be detailed as reaching, grasping, moving, and positioning. This can be better treated as a whole and described as "obtain and position wrench."
5. Irregular elements should be separated from regular elements.
6. Machine-paced elements should be separated from operator-controlled elements; the division helps recognize true delays.
s = √[(n Σxᵢ² − (Σxᵢ)²) / (n(n − 1))]

However, since pilot time studies involve only small samples (n < 30) of a population, a t distribution must be used. The confidence interval equation is then

x̄ ± ts/√n

The accuracy desired can be expressed as kx̄ = ts/√N, where k = an acceptable percentage of x̄. If we let N be the number of observations for the actual time study, solving for N yields:

TABLE 3 Recommended Number of Observation Cycles

Cycle Time, min    Recommended Number of Cycles
0.10               200
0.25               100
0.50               60
0.75               40
1.00               30
2.00               20
2.00–5.00          15
5.00–10.00         10
10.00–20.00        8
20.00–40.00        5
40.00 and above    3

From B. Niebel and A. Freivalds, Methods, Standards, and Work Design, 10th Ed. Copyright © 1999 McGraw-Hill Companies, Inc. Reprinted by permission.
N = [ts / (kx̄)]²

For example, a pilot study of 30 readings for a given element showed that x̄ = 0.25 and s = 0.08. A 5% probability of error for 29 DOF (30 − 1 DOF) yields t = 2.045 (see Table 4 for values of t). Solving for N yields (rounded up):

N = [(0.08)(2.045) / ((0.05)(0.25))]² = 171.3 ≈ 172 observations

Thus, a total of 172 observations need to be taken for the pilot sample mean x̄ to be within 5% of the population mean μ.
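The sample-size calculation can be checked in a few lines (a sketch reproducing the worked example; the function name is ours):

```python
import math

def required_observations(s, xbar, t, k=0.05):
    """N = (t*s / (k*xbar))**2, rounded up: the number of cycles
    needed for the sample mean to lie within k (e.g., 5%) of the
    population mean at the chosen confidence level."""
    return math.ceil((t * s / (k * xbar)) ** 2)

# s = 0.08, xbar = 0.25 min, t = 2.045 (29 DOF, 5% error probability):
print(required_observations(0.08, 0.25, 2.045))  # 172
```

Loosening the acceptable error k shrinks N rapidly, since N grows with 1/k².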
2.4.4.
Principal Methods of Timing
There are two techniques of recording the elemental times during a time study: the continuous method and the snapback method. Each technique has its advantages and disadvantages, described below.

2.4.4.1. Continuous Timing  The watch runs continuously throughout the study. It is started at the beginning of the first element of the first cycle to be timed and is not stopped until the whole study is completed. At the end of each element, the watch reading is recorded. The purpose of this procedure is to ensure that all the time during which the job is observed is recorded in the study. The continuous method is often preferred over the snapback method for several reasons. The principal advantage is that the resulting study presents a complete record of the entire observation period. Consequently, this type of study is more appealing to the operator and the union, because no time is left out of the study and all delays and foreign elements have been recorded. The continuous method is also better suited for measuring and recording short elements: since no time is lost in snapping the hand back to zero, accurate values are obtained on successive short elements. More clerical work is involved when using the continuous method, however; the individual element time values are obtained by successive subtractions after the study is completed to determine the elapsed elemental times.

2.4.4.2. Snapback Timing  The hands of the stopwatch are returned to zero at the end of each element and immediately restart. The time for each element is obtained directly. The mechanism of the watch is never stopped, and the hand immediately starts to record the time of the next element. Snapback timing requires less clerical work than the continuous method because the recorded time is already the element time. Elements performed out of order by the operator can also be readily recorded without special notation. Among the disadvantages of the snapback method are:
1. If the stopwatch is analog, time is lost in snapping back to zero, and cumulative error is introduced into the study. (This error becomes insignificant with a digital stopwatch, because the delay is only in the display; internally the watch does not lag.)
2. Short elements (0.04 min and less) are difficult to time.
3. No verification of the overall time against the sum of the elemental watch readings is possible.
4. A complete record of the study is not always given; delays and foreign elements may not be recorded.
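The successive-subtraction step of the continuous method can be sketched as follows (illustrative readings, not from the source):

```python
def element_times(watch_readings, start=0.0):
    """Recover individual element times from continuous-method
    watch readings by successive subtraction."""
    times = []
    previous = start
    for reading in watch_readings:
        times.append(round(reading - previous, 2))
        previous = reading
    return times

# Cumulative readings (min) recorded at the end of four elements:
print(element_times([0.09, 0.25, 0.31, 0.54]))  # [0.09, 0.16, 0.06, 0.23]
```

With snapback timing, the values returned here would instead be read directly from the watch, at the cost of the drawbacks listed above.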
2.4.5.
Recording Difficulties Encountered
During the course of the study, the analyst may occasionally encounter variations in the sequence of elements originally established. First, the analyst may miss reading an element. Second, the analyst may observe elements performed out of sequence. A third variation is the introduction of foreign elements during the course of the study.

2.4.5.1. Missed Readings  When a reading is missed, the analyst should under no circumstances approximate or attempt to record the missed value. Instead, put a mark "M" in the W column (see Figure 1) in the space where the time should have been logged. Occasionally, the operator omits an element. This is handled by drawing a horizontal line through the space in the W column. This occurrence should happen infrequently; it usually indicates an inexperienced operator or the lack of a standard method. Should elements be omitted repeatedly, the analyst should stop the study and, in cooperation with the supervisor and the operator, investigate the necessity of performing the omitted elements.
TABLE 4 Percentage Points of the t-Distribution
(The original table gives t values for probabilities P = 0.9 down to 0.001 and for n = 1 to 30, 40, 60, 120, and ∞ degrees of freedom; for example, t = 2.045 at P = 0.05 with 29 DOF. The entries are the standard Student's t percentage points.)
2.4.5.2. Variations from Expected Sequence of Work  This happens frequently when observing new and inexperienced workers or on a long-cycle job made of many elements. If using snapback timing, just note the jump from one element to another with arrows. For continuous timing:
1. Draw a horizontal line in the box where the time is to be logged.
2. Below the line, log the time the operator began the element; above it, record the time it ended.
3. Continue logging in this format until the next element observed is the normal element in the sequence.

2.4.5.3. Foreign Elements  Foreign elements are unavoidable delays (talks with the supervisor, tool breakage, material replenishment, etc.) or personal delays (drinking, resting, etc.). A separate area should be assigned where details of the delay can be recorded (letter code A, B, C, . . . ; start and end time; description). Put the code in the box of the element completed upon resumption.

2.4.5.4. Short Foreign Elements  Occasionally, foreign elements are of such short duration that timing them is impossible. Delays like dropping and retrieving tools, wiping one's brow, or short answers to a supervisor's inquiries are examples of such situations. In these situations, continue timing the elements, but circle the time and note the foreign element encountered in the remarks column.
2.5.
Rating the Operator's Performance
The purpose of performance rating is to determine, from the time actually taken by the operator being observed, the standard time that can be maintained by the average qualified worker and that can be used as a realistic basis for planning, control, and incentive plans. The speed of accomplishment must be related to an idea of a normal speed for the same type of work. This is an important reason for doing a proper method study on a job before attempting to set a time standard: it enables the analyst to gain a clear understanding of the nature of the work and often enables him or her to eliminate excessive effort or judgment and so bring the rating process nearer to a simple assessment of speed. Standard performance is the rate of output that qualified workers will naturally achieve without undue fatigue or overexertion during a typical working day or shift, provided that they know and adhere to the specified method and are motivated to apply themselves to their work. This performance is denoted as 100 on the standard rating and performance scales. Depending on the skill and effort of the operator, it may be necessary to adjust upward to normal the time of a good operator and adjust downward to normal the time of a poor operator. In principle, performance rating adjusts the mean observed time (OT) for each element performed during the study to the normal time (NT) that would be required by a qualified operator to perform the same work:

NT = OT × (R/100)

where R is the rating expressed as a percentage. If the analyst decides that the operation being observed is being performed with less effective speed than the concept of standard, a factor of less than 100 is used (e.g., 75 or 90). If, on the other hand, the effective rate of working is above standard, the factor is greater than 100 (e.g., 115 or 120). Unfortunately, there is no universally accepted method of performance rating, nor is there a universal concept of normal performance. Most rating systems depend largely on the judgment of the time study analyst; it is primarily for this reason that the analyst must have high personal qualifications. To define normal performance, it is desirable for a company to identify benchmark examples so that various time study analysts can develop consistency in performance rating. The benchmark
(1.20 + 1.26) / 2 = 1.23, or 123
This rating factor is then used for all other effort elements of the method.

2.5.1.3. Speed Rating  Today, speed rating is probably the most widely used rating system (Niebel 1992). Speed rating is a performance evaluation that considers only the rate of accomplishment of the work per unit time. In this method, the observer measures the effectiveness of the operator against the concept of a normal operator doing the same work and then assigns a percentage to indicate the ratio of the observed performance to normal performance.
TABLE 5 Westinghouse Performance-Rating Factors

Skill ratings:                 Effort ratings:
+0.15  A1  Superskill          +0.13  A1  Excessive
+0.13  A2  Superskill          +0.12  A2  Excessive
+0.11  B1  Excellent           +0.10  B1  Excellent
+0.08  B2  Excellent           +0.08  B2  Excellent
+0.06  C1  Good                +0.05  C1  Good
+0.03  C2  Good                +0.02  C2  Good
 0.00  D   Average              0.00  D   Average
−0.05  E1  Fair                −0.04  E1  Fair
−0.10  E2  Fair                −0.08  E2  Fair
−0.16  F1  Poor                −0.12  F1  Poor
−0.22  F2  Poor                −0.17  F2  Poor

Conditions:                    Consistency:
+0.06  A  Ideal                +0.04  A  Perfect
+0.04  B  Excellent            +0.03  B  Excellent
+0.02  C  Good                 +0.01  C  Good
 0.00  D  Average               0.00  D  Average
−0.03  E  Fair                 −0.02  E  Fair
−0.07  F  Poor                 −0.04  F  Poor

From B. Niebel and A. Freivalds, Methods, Standards, and Work Design, 10th Ed. Copyright © 1999 McGraw-Hill Companies, Inc. Reprinted by permission.
(5) handling or sensory requirements, and (6) weight handled or resistance encountered. The sum of the numerical values for each of the factors constitutes the difficulty adjustment. Hence the performance rating would be P = (pace rating) × (difficulty adjustment). Tables of percentage values for the effects of various difficulties in the operation performed can be found in Mundel and Danner (1994).
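Combining the pace rating and difficulty adjustment with the NT = OT × R/100 relation from Section 2.5 gives a small calculation (a sketch; we assume here that the difficulty factor is 1 plus the sum of the tabled adjustment values, and the numbers are illustrative):

```python
def normal_time(observed_time, pace_rating, difficulty_sum=0.0):
    """NT = OT * R/100, where the overall rating R is the pace
    rating scaled by the difficulty factor (1 + difficulty_sum)."""
    rating = pace_rating * (1.0 + difficulty_sum)
    return round(observed_time * rating / 100.0, 4)

# A 0.40-min element observed at a 110 pace with a 0.06 (6%)
# total difficulty adjustment:
print(normal_time(0.40, 110, 0.06))  # 0.4664
# With no difficulty adjustment, a 90 pace simply scales OT down:
print(normal_time(0.50, 90))  # 0.45
```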
2.5.2.
Selecting a Rating System
From a practical standpoint, the performance-rating technique that is easiest to understand, apply, and explain is speed rating augmented by standard data benchmarks (see Section 4). As is true of all procedures requiring the exercise of judgment, the simpler and more concise the technique, the easier it is to use and, in general, the more valid the results. Five criteria are used to ensure the success of the speed-rating procedure (Niebel 1992):
1. Experience by the time study analyst in the class of work being performed: The analyst should be sufficiently familiar with the details of the work being observed as well as have experience as an observer.
2. Use of standard data benchmarks on at least two of the elements performed: The accuracy of the analyst's ratings can be validated using standard data elements. Standard data are a helpful guide to establishing performance factors.
3. Regular training in speed rating: It is important that time study
2.6.
Allowances
The fundamental purpose of all allowances is to add enough time to normal production time to enable the average worker to meet the standard when performing at standard performance. Even when the most practical, economical, and effective method has been developed, the job will still require the expenditure of human effort, and some allowance must therefore be made for recovery from fatigue and for unavoidable delays. Allowance must also be made to enable a worker to attend to personal needs. Other special allowances may also have to be added to the basic time in order to arrive at a fair standard.

Fatigue and personal needs allowances are in addition to the basic time and are intended to provide the worker with the opportunity to recover from the physiological and psychological effects of carrying out specified work under specified conditions and to allow attention to personal needs. The amount of allowance will depend on the nature of the job, the work environment, and individual characteristics of the operator (e.g., age, physical condition, and working habits). A 5% allowance for personal delays is appropriate in the majority of work environments today. It is also considered good practice to provide an allowance for fatigue on the effort elements of the time study.

Special allowances include many different factors related to the process, equipment, materials, and so on, and are further classified into unavoidable delays and policy allowances. Unavoidable delays are a small allowance of time that may be included in a standard time to meet legitimate and expected items of work or delays, such as those due to power outages, defective materials, waiting lines, late deliveries, and other events beyond the control of the operator. Precise measurement of such occurrences is uneconomical because of their infrequent or irregular occurrence. Policy allowance (ILO 1979) is an increment, other than a bonus increment, applied to standard time (or to some constituent part of it) to provide a satisfactory level of earnings for a specified level of performance under exceptional circumstances (e.g., new employees, differently abled workers, workers on light duty, the elderly). Special allowances may also be given for any activities that are not normally part of the operation cycle but that are essential to the satisfactory performance of the work. Such allowances may be permanent or temporary and are typically decided by management, with possible union negotiations (Niebel and Freivalds 1999). Policy allowances should be used with utmost caution and only in clearly defined circumstances. The usual reason for making a policy allowance is to line up standard times with the requirements of wage agreements between employers and trade unions. Whenever possible, these allowances should be determined by a time study.

Two methods are frequently used for developing standard allowance data. A production study, which requires analysts to study two to three operations over a long period, records the duration of and reason for each idle interval. After a reasonably representative sample is established, the percent allowance for each applicable characteristic is determined. The other method involves work sampling studies, which requires taking a large number of random observations. (See Chapter 53 for more information on allowances.)

The allowance is typically given as a percentage and is used as a multiplier, so that normal time (NT) can be readily adjusted to the standard time (ST):

ST = NT + NT(allowance) = NT(1 + allowance)

The total allowance is determined by adding the individual allowance percentages applicable to a job. For example, the total allowance might be computed as the sum of personal needs (5%), fatigue (4%), and unavoidable delays (1%), equal to 10%. Normal time would then be multiplied by 1.1 to determine standard time.
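The calculation above can be sketched in a few lines. The observed time, rating, and allowance percentages below follow the chapter's example figures (personal 5%, fatigue 4%, unavoidable delays 1%); the observed time and rating are illustrative.

```python
def normal_time(observed_time: float, rating: float) -> float:
    """Adjust observed time to normal (100%-pace) time: NT = OT * rating."""
    return observed_time * rating

def standard_time(nt: float, allowances: dict[str, float]) -> float:
    """Apply the total allowance as a multiplier: ST = NT(1 + allowance)."""
    total_allowance = sum(allowances.values())
    return nt * (1.0 + total_allowance)

# Illustrative values: 0.50 min observed per piece, operator rated at 110%.
nt = normal_time(observed_time=0.50, rating=1.10)  # 0.55 min normal time
st = standard_time(nt, {"personal": 0.05, "fatigue": 0.04, "delays": 0.01})
print(st)  # 0.55 min * 1.10 total allowance, approximately 0.605 min/piece
```

Keeping the allowances as a named dictionary makes it easy to audit which components (personal, fatigue, delays, policy) entered the 10% total.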
2.7.
Calculating the Standard Time
The standard time for the operation under study is obtained as the sum of the elemental standard times. Standard times may be expressed in minutes per piece or hours per hundred pieces for operations.

[Figure 3 Breakdown of Standard Time: observed time adjusted by the performance rating yields normal time; adding allowances for fatigue, personal needs, and unavoidable delays yields the standard time.]

For example, an operator who earns 9.58 standard hours of output during an 8 hr shift achieves an efficiency of (9.58 / 8) x 100 = 119.7%.
2.7.1.
Temporary Standards
In many cases, time study analysts must establish a standard on a relatively new operation for which there is insufficient volume for the operator to reach his or her long-term efficiency. Employees require time to become proficient in any new or different operation. If the analyst rates the operator based on the usual concept of output (i.e., rating the operator below 100), the resulting standard may be unduly tight and make it difficult to achieve any incentive earnings. On the other hand, if a liberal standard is established, this may cause an increase in labor expense and other related production costs. The most satisfactory solution in such situations is the establishment of temporary standards. The analyst establishes the standard based on the difficulty of the job and the number of pieces to be produced. Then, by using a learning curve for the work (see Chapter 53 on learning curves), as well as existing standard data, an equitable temporary standard for the work can be established. When released to the production floor, the standard should be clearly identified as temporary and applicable only to a fixed quantity or fixed duration. Upon expiration, temporary standards should be immediately replaced by permanent standards.
2.7.2.
Setup Standards
Setup standards cover those elements that involve all events taking place between completion of the previous job and the start of the present job. Setup standards also include teardown or put-away elements, such as clocking in on the job, getting tools from the tool crib, getting drawings from the dispatcher, setting up the machine, removing tools from the machine, returning tools to the tool crib, and clocking out from the job. Analysts need to use the same care and precision in studying setup time because often only one cycle, or at most a few, can be observed and recorded in one day. Because setup elements are of long duration, there is a reasonable amount of time to break the job down, record the time, and evaluate the performance as the operator proceeds from one work element to the next.
3.
PREDETERMINED TIME STANDARDS
Predetermined time standards (PTS) are techniques that aim at defining the time needed for the performance of various operations by derivation from preset standards of time for various motions rather than by direct observation and measurement. PTS is a work measurement technique whereby times established for basic human motions (classified according to the nature of the motion and the conditions under which it is made) are used to build up the time for a job at defined levels of performance. They are derived from studying a large sample of diversified operations. There are a number of different PTS systems, and it is useful to present the main ways in which the systems vary. The differences are with respect to the levels and scope of application of data, motion classification, and time units. Essentially, these predetermined time systems are sets of motion-time tables with explanatory rules and instructions on the use of the motion-time values. Most companies require considerable training in the practical application of these techniques to earn certification before analysts are allowed to apply the Work-Factor, MTM, or MOST systems. Figure 4 illustrates the derivation of all of these predetermined time systems. No two PTS systems have the same set of time values. This is partly because different systems have different motion classes, and the time data therefore refer to different things. Variations
[Figure 4 Family Tree of Predetermined Times. (From Sellie 1992) The chart traces direct effects and influences from F. W. Taylor's time study, F. B. Gilbreth's therbligs, H. L. Gantt's human factors work, and Harrington Emerson's management ideas (1920s), through Motion-Time Analysis (A. B. Segur, MTA), Work-Factor (RCA), Elemental Time Standards (Western Electric), Motion-Time Standards (Philco MTS), and Methods-Time Measurement (Westinghouse MTM) in the 1940s, to Basic Motion Times (BMT), Dimensional Motion Times (GE DMT), General Purpose Data (GPD), Master Standard Data (MSD), MTM-2, and MTM-3 in the 1960s, and to MODAPTS, the Universal Analyzing System (UAS), and the Maynard Operation Sequence Technique (MOST) by the 1980s.]
in time units are due to differences in the methods adopted for standardizing the motion times, the choice of the basic unit (seconds, minutes, hours, or fractions of a minute), and the practice of adding contingency allowances or not. The scope of application of a PTS system can be universal or generic, functional, or specific. A universal system is one that is designed for body members in general. Its application is not restricted
sustained at steady state for the working life of a healthy employee. In using this system, first all left-hand and right-hand motions required to perform a job properly are summarized and tabulated. Then the rated times in TMU for each motion are determined from the MTM-1 tables. The time required for a normal performance of the task is obtained by adding only the limiting motions (the longer time of two simultaneous motions predominates). The nonlimiting motion values are then either circled or deleted. Figure 5 illustrates the use of MTM-1 in analyzing a simple operation.
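The limiting-motion rule described above can be sketched as follows. The TMU values for the left- and right-hand motions are illustrative placeholders, not values taken from the actual MTM-1 tables.

```python
def limiting_time(pairs: list[tuple[float, float]]) -> float:
    """Sum the limiting TMU of each simultaneous left/right-hand motion pair.

    For each pair, only the longer motion counts; the shorter (nonlimiting)
    motion is discarded, as when the analyst circles or deletes it.
    """
    return sum(max(left, right) for left, right in pairs)

# (left-hand TMU, right-hand TMU) for each step of a hypothetical task;
# the motion names and values are illustrative only.
steps = [
    (12.9, 10.5),  # Reach vs. Reach -- left hand limits
    (2.0, 5.6),    # Grasp vs. Grasp -- right hand limits
    (13.4, 13.4),  # Move vs. Move   -- equal, either limits
]
tmu = limiting_time(steps)
print(tmu)           # 12.9 + 5.6 + 13.4 = 31.9 TMU
print(tmu * 0.036)   # 1 TMU = 0.00001 hr = 0.036 sec, so about 1.15 sec
```

Taking the maximum of each simultaneous pair, rather than the sum, is what distinguishes the normal-time buildup from a naive total of all motions.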
3.1.2.
MTM-2
MTM-2 is a system of synthesized MTM data and is the second general level of MTM
data. It consists of single basic MTM motions and certain combinations of basic
MTM motions. MTM-2
TABLE 7 Summary of MTM-1 Data (Courtesy: MTM Association). [Table values not reproduced.]
TABLE 8 Summary of MTM-2 Data (TMU)

Range                   Code   GA   GB   GC   PA   PB   PC
Up to 2 in. (5 cm)        2     3    7   14    3   10   21
Over 2-6 in. (15 cm)      6     6   10   19    6   15   26
Over 6-12 in. (30 cm)    12     9   14   23   11   19   30
Over 12-18 in. (45 cm)   18    13   18   27   15   24   36
Over 18 in. (45 cm)      32    17   23   32   20   30   41

GW: 1 per 2 lb (1 kg)    PW: 1 per 10 lb (4.5 kg)
A 14   B 61   C 15   E 7   F 9   R 6   S 18

Courtesy: MTM Association.
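As a usage sketch, the tabulated GET/PUT values can be stored as a lookup keyed by case and distance code. The values below are transcribed from Table 8; the motion sequence analyzed is hypothetical.

```python
# MTM-2 GET (GA/GB/GC) and PUT (PA/PB/PC) values in TMU, keyed by the
# distance code from Table 8 (2, 6, 12, 18, or 32).
MTM2 = {
    "GA": {2: 3, 6: 6, 12: 9, 18: 13, 32: 17},
    "GB": {2: 7, 6: 10, 12: 14, 18: 18, 32: 23},
    "GC": {2: 14, 6: 19, 12: 23, 18: 27, 32: 32},
    "PA": {2: 3, 6: 6, 12: 11, 18: 15, 32: 20},
    "PB": {2: 10, 6: 15, 12: 19, 18: 24, 32: 30},
    "PC": {2: 21, 6: 26, 12: 30, 18: 36, 32: 41},
}

def cycle_tmu(motions: list[tuple[str, int]]) -> int:
    """Sum the tabulated TMU for a sequence of (case, distance-code) motions."""
    return sum(MTM2[case][code] for case, code in motions)

# Hypothetical one-hand cycle: GET case B over 2-6 in., then PUT case A
# over 6-12 in.
print(cycle_tmu([("GB", 6), ("PA", 12)]))  # 10 + 11 = 21 TMU
```

GET WEIGHT and PUT WEIGHT additions (1 TMU per 1 kg and per 4.5 kg of effective weight, respectively, per the table) would be added on top of these case values.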
ends with the object still under control at the intended place. Similarly, the time required to perform a PUT motion is affected by distance and weight variables. Three cases of PUT are likewise distinguished based on the number of correcting motions required. A correcting motion is an unintentional stop, hesitation, or change in direction at the terminal point. Identification of the cases of PUT can be made using the decision model shown in Figure 7. When there is engagement of parts following a correction, an additional PUT motion is allowed when the distance exceeds 2.5 cm (1 in.). A further consideration is that a PUT motion can be accomplished either as an insertion or an alignment. An insertion involves placing one object into another, while an alignment involves orienting a part on a surface. Table 9 assists the analyst in identifying the appropriate case.

GET WEIGHT (GW) is the action required for the hand and arm to take up the weight of the object. It occurs after the fingers have closed on the object in the preceding GET motion and is accomplished before any actual movement takes place. The time value for GW is 1 TMU per kilogram (2.2 lb). For instance, a 4 kg (8.8 lb) load handled by both hands will be assigned a time value of 2 TMU since the effective weight per