IEEE TRANSACTIONS ON ENGINEERING MANAGEMENT, VOL. 53, NO. 3, AUGUST 2006

Evaluating the Relative Performance of Engineering Design Projects: A Case Study Using Data Envelopment Analysis

Jennifer A. Farris, Richard L. Groesbeck, Eileen M. Van Aken, and Geert Letens

Abstract—This paper presents a case study of how Data Envelopment Analysis (DEA) was applied to generate objective cross-project comparisons of project duration within an engineering department of the Belgian Armed Forces. To date, DEA has been applied to study projects within certain domains (e.g., software and R&D); however, DEA has not been proposed as a general project evaluation tool within the project management literature. In this case study, we demonstrate how DEA fills a gap not addressed by commonly applied project evaluation methods (such as earned value management) by allowing the objective comparison of projects on actual measures, such as duration and cost, by explicitly considering differences in key input characteristics across these projects. Thus, DEA can overcome the paradigm of project uniqueness and facilitate cross-project learning. We describe how DEA allowed the department to gain new insight about the impact of changes to its engineering design process (redesigned based on ISO 15288), creating a performance index that simultaneously considers project duration and key input variables that determine project duration. We conclude with directions for future research on the application of DEA as a project evaluation tool for project managers, program office managers, and other decision-makers in project-based organizations.

Index Terms—Concurrent engineering, ISO 15288, military organizations, performance measurement, project management.

Manuscript received March 1, 2005; revised July 1, 2005 and September 1, 2005. Review of this manuscript was arranged by Department Editor J. K. Pinto. The authors are with Virginia Polytechnic Institute and State University, Blacksburg, VA 24061 USA (e-mail: evanaken@vt.edu). Digital Object Identifier 10.1109/TEM.2006.878100.

I. INTRODUCTION

Cross-project learning is vital for any organization seeking to continuously improve its project management practices and identify competitive strengths and weaknesses [1]. The Project Management Institute (PMI) [2] describes a knowledge base that integrates performance information and lessons learned from previous projects as a key asset of an organization. However, cross-project learning is difficult to achieve [3], in particular due to the reigning paradigm of project uniqueness [4]. As described in [1, p. 17], "the more a project is perceived as unique the less likely are teams to try and learn from others." Cross-project learning, thus, requires a method of identifying sets of similar projects for which the observed performance information and lessons learned may be "fairly" aggregated [5].

Yet, to date, the project management literature has proposed few tools for enabling comparisons of performance across projects which explicitly consider differences in key project input characteristics. Commonly applied project evaluation methods, such as earned value management (EVM) [6], provide organizations with a method of systematically comparing actual project performance to project goals, encouraging organizations to document causes of variance, as well as the reasoning behind the corrective actions taken.
However, these methods do not take into account differences in many project input characteristics, such as scope, technical complexity, and staffing, which impact project performance and the suitability of making cross-project comparisons [7].

One modeling methodology that provides a flexible and powerful approach for overcoming the paradigm of project uniqueness and facilitating cross-project learning is Data Envelopment Analysis (DEA), a linear programming-based performance evaluation methodology. DEA can simultaneously consider project performance on outcome dimensions, as well as differences in key project input characteristics. While DEA has been widely applied in the operations research and decision sciences literature, DEA has not been proposed as a general project evaluation tool within the project management literature. This paper demonstrates how DEA can be readily adopted as a project evaluation tool using a case study of an engineering department in the Belgian Armed Forces. A key feature of this case study was the close collaboration between the researchers and the case study site (one of the authors was also a manager within the case study organization), in order to ensure that the performance model and results were readily understood by organizational personnel, as well as reflective of actual practices within the organization. We describe how the application of DEA allowed the department to gain new insights about the impact of changes to its engineering design process on project duration by creating a performance index that simultaneously considered project duration and input variables.

The paper is organized as follows. Section II provides a brief review of the relevant literature. The case study organization is described in Section III, while the DEA model and results are described in Sections IV and V, respectively. Finally, Section VI concludes with directions for future research.

II. LITERATURE REVIEW

A. Project Evaluation

Extensive project management research has identified a wide variety of measures that describe the outcomes of a project and the input characteristics that impact outcomes. The most commonly cited project outcome measures include cost, schedule, technical performance outcomes [8], and client satisfaction [9], although a universal definition of project success remains elusive [10]. In addition, a wealth of research has studied factors which impact project outcomes (e.g., [3] and [11]–[16]). Based on a review of several key studies, Belassi and Tukel [17] identified four overall groups of project success factors: factors related to the project (e.g., size, urgency), factors related to the project manager and team members (e.g., competence, technical background), factors related to the organization (e.g., top management support), and factors related to the external environment (e.g., client, market). Recently, the PMI [5] identified ten dimensions of project performance measures for study in benchmarking efforts (e.g., cost, schedule performance, staffing, alignment to strategic business goals, and customer satisfaction).

Yet, despite the multidimensional nature of project performance, traditionally, most organizations have evaluated project performance primarily through cost and schedule performance measures, such as EVM, which is cited by the PMI as a commonly used method of project performance evaluation [2].
These methods center on measuring ongoing and final project performance against project goals. While these approaches provide some basis for evaluating the extent of success across projects, they do not explicitly take into account differences in project characteristics which may impact cost and schedule performance, and they depend upon the appropriateness of project goals [18]. Other evaluation methods may include additional outcome measures (e.g., [19]), but still do not explicitly take into account many key project input characteristics.

Only a few project evaluation tools have been proposed to allow project managers to explicitly consider differences in input characteristics across projects when evaluating project outcomes [20]. Slevin and Pinto [21] developed the Project Implementation Profile (PIP), a questionnaire-based instrument, which managers can use to assess the relative presence of ten critical success factors [15]. While assessing the presence of key factors that are within management control, the PIP does not appear to be intended to measure cross-project differences in many less controllable project characteristics (e.g., technical complexity). Andersen and Jessen [22] present a similar questionnaire-based project evaluation tool which incorporates a different set of success factors, and directly includes some success measures. Pillai et al. [20] present an integrated framework for measuring the performance of R&D projects, which calls for the measurement of both controllable and uncontrollable factors that influence project outcomes. However, the discussion in [20] is largely theoretical, primarily identifying the high-level factors that should be considered in determining the integrated performance index and an overall method of combining these factors (i.e., an index using a weighted sum), rather than a detailed implementation scheme. Furthermore, all of the above methods require the a priori specification of weights for variables, assuming that the weights assigned to each variable should be the same for all projects. As will be described in the next section, DEA overcomes this difficulty by removing the requirement for managers to specify weights for variables a priori.

B. Data Envelopment Analysis

DEA, developed by Charnes et al. [23], is a linear programming-based technique for determining the relative efficiency of decision making units (DMUs), based on the performance of all other DMUs in the data set. Using a linear programming model, the best performing ("best practice") DMUs in the data set are used to define an efficient frontier, against which all other DMUs are benchmarked. The "best practice" DMUs are assigned an efficiency score of "1" or 100%. The efficiency score for each inefficient DMU is calculated based on its distance from the efficient frontier. The method of calculating distance from the frontier depends on the type of DEA model used (e.g., input minimizing, output maximizing, additive, etc.); however, each inefficient DMU is measured against the portion of the efficient frontier defined by the "best practice" DMUs with the most similar input/output mix, thus allowing each inefficient DMU to achieve its maximum efficiency score. For brevity, the mathematical formulations, as well as detailed descriptions of the different types of DEA models, are not presented here. Instead, the reader is referred to several excellent, comprehensive texts (e.g., [24] and [25]).
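For readers who want a concrete picture of the optimization behind these scores, the following is a minimal sketch of the CCR ratio (multiplier) formulation introduced in [23], written here in generic notation that is not taken from this paper: each DMU k selects the input and output weights that maximize its own efficiency ratio, subject to the constraint that no DMU in the data set exceeds an efficiency of one under those same weights.

```latex
% Sketch of the CCR ratio (multiplier) model for DMU k, in generic notation.
% x_{ij}: amount of input i consumed by DMU j; y_{rj}: amount of output r produced by DMU j.
% v_i, u_r: input and output weights chosen to favor DMU k.
\begin{align*}
\max_{u,\,v}\quad & E_k = \frac{\sum_{r} u_r\, y_{rk}}{\sum_{i} v_i\, x_{ik}} \\
\text{s.t.}\quad & \frac{\sum_{r} u_r\, y_{rj}}{\sum_{i} v_i\, x_{ij}} \le 1 \quad \text{for every DMU } j, \\
& u_r \ge 0, \quad v_i \ge 0 .
\end{align*}
```

The ratio form is linearized and solved once per DMU; the DMUs attaining E_k = 1 define the efficient frontier described above, and the freedom to choose weights separately for each DMU is what later allows each project to be shown in its best possible light.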
Currently, several DEA software packages exist to allow managers and researchers to implement DEA models without directly solving a linear program for each DMU [26], [27]. The analysis in this study used Banxia Frontier Analyst®, a commercially available DEA software package [28].

DEA has many strengths as a performance analysis method (e.g., as described in [29] and [30]). Two primary strengths are of particular importance to this research. First, DEA is a multidimensional measurement method that can incorporate multiple input and output variables, including variables with different units of measure and levels of managerial control, such as exogenous variables, as well as categorical variables [24]. Second, as a nonparametric benchmarking methodology, DEA does not require that all units weigh inputs and outputs the same way. Instead, DEA utilizes the weight for each input and output that will give each DMU its maximum possible efficiency score [24]. Therefore, DEA is appropriate for analyzing complex activities that require considerable flexibility in the detailed "production" approach [31]. Engineering design projects are clearly one such activity in which the detailed approach of design activities can vary from project to project. The only caveat to this point is that DMUs must be similar enough to form a relatively homogeneous set [32]. That is, they must complete similar types of activities (although the detailed approach to completing these activities may differ), produce similar products and services, consume similar types of resources, and perform under similar environmental constraints (although differences in environment can be accounted for through nondiscretionary or categorical variables).

In the nearly 30 years since its introduction, DEA has been widely applied to a multitude of problems in a variety of domains [33]. Several application areas related to this research have been explored in previous research. For instance, DEA has been applied to study the performance of military vehicle maintenance units [34]–[37]. These prior applications are relevant to the present research since the case study organization is also a military support unit. However, the case study organization differs from those studied in [34]–[37] in that it is an engineering department that designs new communications equipment for military vehicles, rather than a maintenance department. In addition, while [34]–[37] compared departments within the organization, the present research compares the performance of projects within a given department. In another related application, Paradi et al. [38] used DEA to analyze the performance of engineering design teams at Bell Canada. This work is related to the present research since both use DEA to examine the performance of engineering activities, although the unit of analysis in [38] was the team, while the unit of analysis in the present research is the project.

To date, the application of DEA to projects appears to be almost solely limited to software and R&D projects. Examples of software project applications include [29], [30], and [39]–[47]. R&D project applications have focused primarily on selecting the best set of R&D projects to receive funding (e.g., [31] and [48]–[55]). However, there have been some applications that used DEA to measure the efficiency of completed or ongoing R&D projects (e.g., [56]–[58]).
Our literature review only revealed four project-level application areas that did not involve either software projects or R&D projects. These related studies have focused on the selection of projects [59], [60] or on assessing performance of completed projects [61], [62]. In the application most similar to the present research, Busby and Williamson [62] applied DEA to the project and "work package" (subassembly) level of engineering design projects in the aerospace industry. Their research was focused on studying methods of measuring performance for engineering activities, rather than investigation of the usefulness of DEA as a general project evaluation tool for organizational decision makers in project-based organizations.

III. BACKGROUND ON CASE STUDY ORGANIZATION

The Department of Technical Studies and Installations (TSI) designs and installs communication and information systems (CISs) for military vehicles in the Belgian Armed Forces (BAF). The work of the Department is entirely project-based and comprises engineering design projects to develop CISs, addressing specific design problems such as vibrations and electromagnetic compatibility. Historically, the TSI Department used a sequential engineering (SE) process based on "over the wall" hand-offs between functions, with little or no interaction between these functions. This way of working appeared to lead to rework and, ultimately, increased project duration, due to conceptual misunderstandings between functions, lack of information sharing and formal documentation of project knowledge, and bottlenecks of projects waiting for action between functions.

To address these issues, the TSI Department had completed the first phase of redesigning its engineering design process prior to the start of the current research (see [63] for more information on the TSI Department and the redesign of the engineering design process). The new process incorporated concurrent engineering (CE) concepts and a stage-gate system based on the ISO 15288 standard for the systems engineering lifecycle [64]. Key CE practices included in the new design process were a dedicated, cross-functional project team and overlapping of project activities. The dedicated project team cooperatively planned project work from the conceptual stages until completion. This involvement of all project personnel from conceptual stages enabled the overlapping of many design activities. In addition to the use of a dedicated team and overlapping activities, the stage-gate system structured decision-making throughout the project and improved documentation and communication of project knowledge, reducing the potential for misunderstandings leading to costly delays and/or rework. The TSI Department chose to pilot this new approach in its most technically complex projects. ("Technical complexity" describes the technical difficulty and uncertainty of a project.)

To empirically evaluate the effectiveness of the new engineering design process, the TSI Department needed a way to assess the relative performance of projects before and after the implementation of the new design process. Prior to the introduction of the new process, the Department rarely attempted to compare performance across projects, due to the paradigm of project uniqueness.
Therefore, soon after the introduction of the new design process, in addition to continuing to evaluate the performance of each project compared to its targets, the Department began to use two measures to compare performance across multiple projects. The two measures selected were project duration (in working days) and a measure of efficiency called the project workflow index, which was defined as effort (work content of the project, measured in person-days) divided by project duration. While the measures used by the TSI Department are less sophisticated than other existing project evaluation approaches (e.g., EVM or measures of perceived success), they share the same major weaknesses. They fail to adequately account for the differences in all the key project input characteristics, which would be necessary to enable "fair" comparison of outcomes across projects.

The Department's internal evaluation of the first four projects completed using the new CE design process – using project duration and the project workflow index – did not unequivocally support the continued use of the new process. When the duration of the first four projects completed under the new development method was compared to the eleven projects of similar technical complexity completed under the old SE design process, it appeared that project duration had decreased significantly under the new method (see Fig. 1). However, there was no evidence that project workflow index performance had changed (see Fig. 2), indicating that there was no strong evidence that projects under the new process had proportionally fewer delays and nonproductive waiting time than projects under the old SE process. Furthermore, variation in staffing (personnel, officers) and priority was evident even for these fifteen projects in the same technical complexity category. Thus, many personnel within the TSI Department were not convinced that the new engineering design process was effective in reducing project duration. These personnel argued that the shorter project duration for the projects completed using the new engineering design method may be due to differences in other project characteristics compared to projects completed under the old SE approach.

Fig. 1. Project duration versus engineering design process (n = 15 projects).

Fig. 2. Project workflow index versus engineering design process (n = 15 projects).

Findings from the CE literature provided even more evidence for questioning the expected impact of the new engineering design process, particularly given the characteristics of the TSI Department's projects. Although CE practices have become common in the last twenty years and many studies suggest that CE practices are successful in reducing project duration [65], some research has found that the success of CE practices has varied across organizations [66]–[68], and other research has suggested that outcomes may depend upon project characteristics. In particular, the impact of the two key CE practices included in the new design process, overlapping activities and dedicated cross-functional project teams, may be affected by project uncertainty (the degree to which the information needed to complete a project is incomplete or unstable), which is one aspect of technical complexity. For instance, [69] found that overlapping activities actually increased project duration in high uncertainty projects due to increased rework.
Similarly, [70] found that overlapping reduced project duration for low uncertainty projects, but not for high uncertainty projects. The use of cross-functional project teams, on the other hand, has been found to reduce project duration more for high uncertainty versus low uncertainty projects (e.g., [69], [71], and [72]), but this finding has been questioned [65]. Further, most studies of CE practices only focus on the effects of the practices on project duration and do not consider other project characteristics, such as effort or quality [65], [69].

Thus, it became clear that the key question for analysis posed by the TSI Department was: Does the new CE design process appear to result in shorter project duration than the old SE design process, given differences in characteristics across projects? To address this question, the TSI Department required a multidimensional index of project performance, capable of measuring relative performance across similar projects. DEA was identified as the modeling methodology to be applied to generate this index. Other modeling methodologies that were considered were performance indexes (ratio measures) and regression. These modeling methodologies contain certain assumptions that were not desirable in the present case. For instance, the development of performance indexes requires the a priori specification of variable weights, assuming that all projects combine inputs in exactly the same way to produce outputs. For complex tasks, such an assumption may be too limiting [31]. Although not requiring the a priori specification of weights, regression similarly assumes that all projects in a data set use the same set of input and output weights. Further, regression identifies the mean performance for a set of inputs, while DEA seeks to identify best performing units given the set of inputs and outputs. For additional comparison of DEA to ratio and regression modeling methodologies, see [73]. The development of the DEA model is discussed in the next section.

IV. MODEL SPECIFICATION

The TSI Department's data set contained 15 projects with comparable technical complexity completed over a period of eight years, with 23 input and output variables recorded for each project. As previously indicated, eleven of these projects were conducted under the old engineering design process, and four were conducted under the new process. The authors made the decision to build the model using only the performance measures extant in the data set, due to difficulty in collecting accurate post-hoc data for completed projects, as well as precedent in the DEA literature (e.g., [41]). Therefore, the data set employed does not capture all potential project characteristics (input variables) that could impact project duration or other project outcomes (such as measures of customer satisfaction). In addition, since the TSI Department's database did not preserve information such as project cost and schedule goals, it is impossible to compare the performance of the DEA model to cost and schedule variance measures. Table I defines the variables referenced in the model specification discussion described in this section.

The first step in modeling project performance using DEA was to work with Department managers to identify the input and output variables of interest. TSI Department management agreed that project duration (Variable 1) was the key output of interest to the case study application. The driving force behind the process redesign was the need to reduce project duration.
TABLE I: VARIABLE DESCRIPTIONS

In the project management literature, "time" (along with cost, scope, technical performance, and satisfaction) represents a key category of project performance measures [74], [75], and minimizing project duration is one objective the project-based organization can pursue [76]. Project duration has frequently been included in DEA models of software projects (e.g., [30], [41], [45], and [61]). Other potentially relevant output measures, such as quality and customer satisfaction, were not currently tracked by the TSI Department. Project cost was included indirectly through the input variable effort (person-days), which reflects the major component of project cost. The decision to incorporate project cost in the model as an input rather than as an output is based on the fact that cost represents the consumption of organizational resources used to create project outcomes. This decision is aligned with previous DEA applications to project work (e.g., [41], [47], [54], and [55]), which have also modeled project cost as an input variable. Thus, a single output DEA model using project duration was used in the case study application.

After identifying the output variable of interest, the next step in specifying the DEA model was to identify the input variables necessary to capture important differences between projects. Input variables were identified through consultation with the TSI Department management (to accurately describe their practices) and through a review of the DEA and project management literature. While it is important that the input variables that most impact project duration are identified, if too many variables are included, a DEA model loses discriminatory power – that is, all or most units become efficient due to their unique levels of inputs and outputs. The recommended maximum number of input and output variables is equal to one-half the number of DMUs in any given category or analysis [32]. Because this analysis concerns the 15 projects of greatest technical complexity, the maximum number of input and output variables that could be included in the DEA model for this analysis was seven. In addition to the single output variable, four input variables were identified for inclusion in the model, for a total of five variables. The four input variables identified were: effort, project staffing, priority, and number of officers. In addition, because all 15 projects had the same level of technical complexity, this was a control variable in the present analysis.

Effort (Variable 2) describes the total amount of person-days consumed by the project (i.e., the work content of the project). This variable is under the influence of the project manager, but is fixed beyond a certain minimum point. While inefficient project management practices can increase effort, through rework, there is a certain minimum amount of work that must be completed to meet the objectives of the project – that is, there is a minimum level of effort. Therefore, effort can be viewed as a cost measure, and also as a measure related to project scope or size. Effort, measured as labor hours, has been studied as an input in DEA applications to software projects (e.g., [29], [40], [43], and [46]) and R&D projects (e.g., [31]). In the project management literature, project size or scope is considered a dimension of project performance, as is cost [8].
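As an illustration of how such a data set might be organized before it is passed to a DEA model, the following is a minimal, hypothetical sketch in Python; the field names and example values are ours rather than the TSI Department's actual records, and only the variable roles (one output, four inputs, technical complexity as a category) follow the description above.

```python
from dataclasses import dataclass

@dataclass
class ProjectRecord:
    """One DMU (project) as it might be fed to a DEA model (illustrative only)."""
    name: str
    duration_days: float        # output of interest (transformed later in the paper)
    effort_person_days: float   # input: work content, also a proxy for project cost
    staffing: float             # input: average people scheduled per project day
    priority: int               # input: urgency on a 1 (lowest) to 9 (highest) scale
    officers: int               # input: officers available to support the project
    complexity: int             # category: 1 = most technically complex (control here)

# Hypothetical example records (values invented for illustration).
projects = [
    ProjectRecord("A", duration_days=900, effort_person_days=400,
                  staffing=1.2, priority=6, officers=3, complexity=1),
    ProjectRecord("B", duration_days=450, effort_person_days=300,
                  staffing=1.8, priority=8, officers=4, complexity=1),
]

# Rule of thumb cited above: inputs + outputs should not exceed half the DMUs.
n_inputs, n_outputs, n_dmus = 4, 1, 15
assert n_inputs + n_outputs <= n_dmus // 2
```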
Project staffing (Variable 3) describes the concentration of labor resources on the project. Specifically, project staffing describes the average number of people scheduled to work on a project each project day, thus capturing resource assignment decisions within the TSI Department. Obtaining and scheduling labor resources is a significant portion of any project manager's job, and is also a concern of top management. All else being equal, scheduling more people to concurrently work on a project – that is, increasing overlapping – could decrease project duration, although, as previously indicated, this has been debated in the CE literature. Project staffing – in the form of average team size during the life of the project – has been studied in past DEA applications to project work (e.g., [45]). In addition, the project management literature has studied the relationship between average number of employees assigned to a project during its lifespan and project outcomes [77]. Finally, project staffing relates to top management support and project personnel, which have been identified as critical project success factors [15].

Priority (Variable 4) indicates the importance (urgency) assigned to a project by top management. The TSI Department rated project priority on a nine-point scale, with "1" representing the lowest level of priority and "9" representing the highest level of priority. Thus, while priority is actually an interval variable, the relatively large number of intervals suggests that it can be treated like a continuous variable. All else being equal, a higher-urgency project would be expected to achieve shorter project duration than a lower-urgency project, because higher urgency projects would receive more attention and experience shorter turnaround times in resource requests and other administrative tasks. Priority is, therefore, a constraint reflecting the satisfaction of top management objectives (i.e., stakeholder satisfaction). Aspects of project priority have been considered in past applications of DEA to project work (e.g., [47]). In the project management literature, priority relates to both top management support [15] and project urgency [16].

Number of officers (Variable 5) indicates the number of officers available in the TSI Department to support a project, not the actual number of officers directly assigned to a project. All else being equal, increasing the number of officers should allow officers to give more attention to individual projects, thereby reducing the turnaround time for administrative tasks and, ultimately, reducing project duration. Increasing the number of officers could also allow officers to specialize in a particular type of project, thereby increasing efficiency of project oversight. Clarke [36] included number of officers as an input variable in his application of DEA to military maintenance units. Number of officers can be related to both top management support and project personnel (since officers are a personnel resource available to the team), which have long been identified as key project success factors [15].

Finally, technical complexity (Variable 6) describes the technical difficulty and uncertainty of a project.
Although related to effort (i.e., more technically complex projects tend to involve more work content), technical complexity captures additional elements that affect project duration — such as the extent of risk, need for testing, need for increased coordination between functions, and degree of technological uncertainty. Uncertainty, in particular, is likely to increase the need for rework under overlapping of project tasks [69], [78]. The TSI Department categorized projects according to general level of technical complexity, with “1” representing the most technically complex projects, “2” representing projects of medium technical complexity, and “3” representing the least technically complex projects. For categorical inputs under DEA, projects in a disadvantaged category (e.g., greater technical complexity) are not directly compared to projects in a more advantaged category (e.g., lower technical complexity). However, projects in a more advantaged category can be benchmarked against “best practice” projects in a disadvantaged category [33]. That is, technical complexity category 2 or 3 projects would not form part of the efficient frontier for category 1 projects, since these projects are more difficult. This is the reason why projects in category 2 and 3 were excluded from the analysis on the impact of the new engineering design process, since the new process had only been piloted with projects in technical complexity category 1. Technical complexity has been considered in some past applications of DEA to project work (e.g., [30], [40], [56], and [61]). Technical complexity and related variables (uncertainty, technical difficulty) have also emerged as a key performance dimension in the project management literature (e.g., [14] and [79]) and the CE literature (e.g., [69], [70], and [78]). It should be noted here that the Department’s project workflow index (Variable 7) is equal to effort divided by project duration. Therefore, the project workflow index was not used as a separate input variable, as the variables used to calculate the index are already in the model. After identifying the output and input variables to be studied in the case study application, the final steps before executing the DEA model included specifying the particular DEA model to be applied and transforming two of the model variables. Two basic DEA model orientations are output maximizing and input minimizing [see [25] for a more detailed discussion of the differences summarized here]. The model employed for a particular case depends upon the objectives and questions of interest, as well as the relative controllability of inputs versus outputs. An output maximizing model determines the maximum proportional increase in outputs possible for the given level of inputs. Output maximizing models are generally used when output levels are discretionary, but input levels are relatively fixed, or when it is desirable to set targets for maximum output levels. Input minimizing models determine the decrease in inputs that should be possible while producing the current level of outputs. Input minimizing models are generally used when input levels are relatively controllable, but output levels are pre-specified, or when it is desirable to evaluate internal process efficiency. Because the key analysis question here involved estimating maximum project duration, an output maximizing model was used. 
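To make the orientation concrete, the following is a sketch, in generic notation not drawn from this paper, of an output-maximizing, constant returns-to-scale envelopment program for a DMU k: it finds the largest proportional expansion of DMU k's outputs that some nonnegative combination of the observed DMUs could deliver without using more of any input.

```latex
% Output-oriented CRS envelopment model for DMU k (generic notation).
% x_{ij}, y_{rj}: observed inputs and outputs; \lambda_j: intensity weights.
\begin{align*}
\max_{\varphi,\,\lambda}\quad & \varphi \\
\text{s.t.}\quad & \sum_{j} \lambda_j\, x_{ij} \le x_{ik} \quad \text{for each input } i, \\
& \sum_{j} \lambda_j\, y_{rj} \ge \varphi\, y_{rk} \quad \text{for each output } r, \\
& \lambda_j \ge 0 .
\end{align*}
```

A DMU with optimal value of one lies on the frontier; otherwise the reciprocal of the optimal expansion factor can be read as its efficiency score, and the expanded outputs are the kind of targets from which duration targets such as those reported later in Table IV can be derived.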
An output maximizing orientation was also appropriate since many of the input variables were not under the direct control of the project manager: priority and number of officers were determined by top management, technical complexity is an exogenous variable, and effort, the work content of a project, is only partially controlled by the project manager. Only project staffing is, arguably, primarily controlled by the project manager.

DEA models can also assume a variety of returns to scale. Two basic models are Charnes, Cooper, Rhodes (CCR) [23], which assumes constant returns to scale (CRS), and Banker, Charnes, Cooper (BCC) [80], which assumes variable returns to scale (VRS). CRS models provide the most conservative measure of efficiency (i.e., the most aggressive DEA project duration targets). Under CRS, all units are compared against a frontier defined by units operating under the most productive scale size. Units operating under any diseconomies of scale, therefore, cannot be 100% efficient. On the other hand, VRS models allow units operating under diseconomies of scale to form part of the frontier, as long as they perform better than their most similar peers (e.g., those operating under similar diseconomies of scale). Choosing which model to use depends on both the characteristics of the data set and the question being analyzed.

In the case study application, the TSI Department sought to determine to what extent the use of the new engineering design process had succeeded in improving the overall performance of projects (e.g., reducing project duration, given project inputs). This analysis focuses on a small group of projects (15 in total), all in technical complexity category 1. In the case study application, diseconomies of scale could exist for many input variables. For instance, increasing project staffing beyond a certain level may yield diminishing returns in project duration, due to congestion. Similarly, projects with large amounts of effort could also experience diminishing returns of scale. This could be due either to inherent project characteristics (project scope) or poor project management practices (coordination difficulties or rework). However, for this analysis, the researchers and the TSI Department did not want units operating under diseconomies of scale (e.g., over-staffed projects or projects with increased effort due to rework or other inefficient practices) to be considered 100% efficient. Instead, they wanted to draw aggressive comparisons based on the performance of best practice units. For these reasons, the CRS model output was used, since the CRS model identifies inefficiency due to diseconomies of scale and benchmarks performance against units that are operating under the most productive scale size.

Finally, executing the DEA model required transforming two variables: project duration and effort. Because increasing project duration is undesirable, project duration is an "undesirable output" in DEA terminology. There are several different methods for modeling undesirable outputs in DEA. The method used for the case study application was a translation of the undesirable output [81], a common practice in the DEA literature (e.g., [32] and [41]). In this transformation, the undesirable output y is subtracted from a sufficiently large scalar w, such that all resulting (transformed) values w - y are positive and increasing values are desirable. The chosen scalar w is generally a value just slightly larger than the maximum value of the undesirable output observed in the data set, since choosing a value that is much greater than this maximum can distort model results [32]. In this case application, the maximum project duration was 1930 days; thus, w = 2000 was chosen, and all project durations were subtracted from this value to create the variable quickness. Similarly, the input effort had to be transformed because an increase in an input should contribute to increased output, in this case quickness. Thus, the same transformation was applied to effort. The maximum effort value in the data set was 815 days, and the selected scalar was 850 days.
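Although the analysis reported here was run in the Banxia Frontier Analyst package, the underlying computation can be sketched in a few lines of Python. The following is an illustrative reconstruction, with entirely hypothetical project values, of an output-maximizing CRS envelopment model that applies the translations just described; it is our sketch, not the authors' code and not the package's implementation.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical raw data: one value per project (four projects shown).
duration = np.array([1930.0, 900.0, 450.0, 1200.0])   # working days
effort   = np.array([815.0, 400.0, 300.0, 500.0])     # person-days
staffing = np.array([1.0, 1.2, 1.8, 1.1])             # average people per day
priority = np.array([3.0, 6.0, 8.0, 5.0])             # 1-9 urgency scale
officers = np.array([2.0, 3.0, 4.0, 3.0])             # officers available

quickness = 2000.0 - duration                # undesirable output translated
inputs = np.vstack([850.0 - effort,          # translated effort, per the text above
                    staffing, priority, officers])     # shape (m, n)
outputs = quickness[np.newaxis, :]                     # shape (1, n)
m, n = inputs.shape

def output_oriented_crs_score(k: int) -> float:
    """CRS efficiency score for project k (1.0 = on the efficient frontier)."""
    # Decision variables z = [phi, lambda_1, ..., lambda_n]; linprog minimizes,
    # so we minimize -phi to maximize the output expansion factor phi.
    c = np.concatenate(([-1.0], np.zeros(n)))
    # Input constraints: sum_j lambda_j * x_ij <= x_ik
    A_in = np.hstack([np.zeros((m, 1)), inputs])
    b_in = inputs[:, k]
    # Output constraints: phi * y_rk - sum_j lambda_j * y_rj <= 0
    A_out = np.hstack([outputs[:, [k]], -outputs])
    b_out = np.zeros(outputs.shape[0])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n),
                  method="highs")
    phi_star = -res.fun
    return 1.0 / phi_star

for k in range(n):
    print(f"project {k}: CRS efficiency = {output_oriented_crs_score(k):.2f}")
```

In a full analysis one would also record, for each inefficient project, which peers carry nonzero lambda weights; counting those appearances is essentially how the reference frequencies discussed in Section V arise.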
Following full specification, the DEA model was executed with the DEA software.

V. RESULTS

Table II presents the project data, as well as DEA results, for the 15 projects in technical complexity category 1. Table III presents summary statistics and statistical test results for projects completed under the new CE design process versus the old SE design process.

TABLE II: PROJECT DATA FOR 15 PROJECTS IN TECHNICAL COMPLEXITY CATEGORY 1

TABLE III: MEAN VALUES AND STATISTICAL SIGNIFICANCE TEST RESULTS FOR ENGINEERING DESIGN PROCESS (n = 15 projects)

Kolmogorov–Smirnov tests of normality suggested that several variables, including the distribution of DEA scores, significantly departed from the normal distribution. Thus, the nonparametric Mann-Whitney U test was used to test for differences across groups in the current analysis. In addition, nonparametric tests have been specifically recommended for analysis of DEA results [82], [83].

A Mann-Whitney U test revealed that there was a significant difference in project duration between projects completed under the new CE design process and the old SE design process. Differences among input variables were not statistically significant. However, for number of officers and effort, the p-value was fairly low. Given the small sample size, the statistical power of the U test to detect significant effects was low. Thus, one could argue that the number of projects was too small to be very confident that effort and the number of officers did not have an effect on project duration. Therefore, when individual study variables were compared for the new CE process versus the old SE process, it was not clear initially how much of the noted reduction in project duration was due to the new CE design process and how much was due to subtle differences in project characteristics.

Examination of DEA results, however, clearly revealed the impact of the new design process. DEA efficiency scores for projects completed under the new design process were significantly higher than the efficiency scores for projects completed under the old process (see Fig. 3 and Table III). That is, when differences in input characteristics between projects are explicitly taken into account, and each project is compared only to its most similar peers in the data set, projects completed under the new engineering design process are superior to projects completed under the old engineering design process in terms of project duration.

Fig. 3. DEA efficiency versus engineering design process (n = 15 projects).
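As an illustration of the group comparison described above, a minimal sketch using scipy is shown below; the score vectors are hypothetical placeholders, not the study's data, and the normality check shown is only a rough stand-in for the tests the authors report.

```python
import numpy as np
from scipy import stats

# Hypothetical DEA efficiency scores for the two process groups (placeholders).
old_process_scores = [0.35, 0.42, 0.51, 0.48, 0.60, 0.72, 0.44, 0.55, 0.38, 0.65, 1.00]
new_process_scores = [1.00, 1.00, 0.92, 1.00]

# Normality check that motivates a nonparametric test: compare the pooled
# scores against a normal distribution fitted to them.
pooled = np.array(old_process_scores + new_process_scores)
print(stats.kstest(pooled, "norm", args=(pooled.mean(), pooled.std(ddof=1))))

# Mann-Whitney U test for a difference in scores between the two groups.
print(stats.mannwhitneyu(old_process_scores, new_process_scores,
                         alternative="two-sided"))
```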
This result indicates how decision-makers in project-based organizations can use DEA to overcome the paradigm of project uniqueness, ensuring that variations in project input characteristics are explicitly taken into account when drawing cross-project comparisons of project outcomes. In fact, a closer examination of DEA scores (Table II) reveals that three out of four projects completed under the new engineering design process form part of the efficient frontier (i.e., are 100% efficient). Two technical complexity category 1 projects completed under the old engineering design process are also 100% efficient.

Reference frequencies and cross-efficiencies for technical complexity category 1 projects also provide support for the effectiveness of the new engineering design process in reducing project duration. Reference frequency indicates the number of inefficient units that calculate their efficiency scores at least in part from a particular efficient unit. An efficient unit with a high reference frequency is, therefore, similar to many other less efficient DMUs and is more likely to have achieved its efficiency score through best practices, rather than simply by virtue of uniqueness. DEA reference frequencies (Table II) indicated that, out of the five efficiency leaders in the current analysis, two projects completed under the new process accounted for 41% (nine) and 27% (six), respectively, of all references by their less efficient peers. The three remaining "best practice" units are more unique in their input mix and, therefore, had lower reference frequencies.

A cross-efficiency score is the efficiency score that would result for a particular unit if its score were calculated using the input and output weightings of another unit; it can, thus, be viewed as a form of sensitivity analysis of a unit's DEA efficiency score. Examination of cross-efficiency scores (Table II) reveals that the two projects with the highest reference frequencies also had the highest average cross-efficiency scores: 97% and 91%, respectively. That is, these projects are highly efficient regardless of the weighting of inputs used to evaluate the project. Thus, their efficiency scores are robust. The mean cross-efficiency score for the 11 projects completed under the old process was 43% (with a low of 11% and a high of 82%), while the mean cross-efficiency score for the 4 projects completed under the new process was 88% (with a low of 78% and a high of 97%). The reference frequencies and cross-efficiencies provide powerful tools for organizational decision-makers to identify appropriate projects to benchmark, as well as indicating the types of other projects to which the lessons learned from these projects will apply. Thus, DEA provides additional project performance evaluation information not attained through the application of traditional project evaluation techniques.
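In generic DEA notation (again not taken from this paper), the cross-efficiency idea can be written compactly: using the optimal multiplier weights chosen by an evaluating DMU d, the efficiency attributed to DMU k, and its average across all N evaluators, are

```latex
% Cross-efficiency of DMU k under the optimal weights of DMU d, and its mean.
\[
E_{dk} = \frac{\sum_{r} u_{rd}^{*}\, y_{rk}}{\sum_{i} v_{id}^{*}\, x_{ik}},
\qquad
\bar{E}_{k} = \frac{1}{N}\sum_{d=1}^{N} E_{dk}.
\]
```

A project whose average cross-efficiency stays high, such as the 97% and 91% figures above, remains efficient even under other projects' weightings, which is the sense in which its score is described as robust.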
Finally, evidence suggests that the new engineering design process shifted the efficient frontier itself, indicating enhanced potential for shorter project durations. In Table IV, Column 2 presents the DEA project duration targets obtained when the DEA model was executed on the data set containing all 15 projects, while Column 3 presents the targets obtained when the model was executed on a data set containing only the 11 projects completed under the old engineering design process. Thus, as Table IV indicates, executing the DEA model with (Column 2) versus without (Column 3) the four projects completed under the new process shows that the new-process projects shifted the frontier by an average of 22%. That is, individual project duration targets for projects completed under the old process were, on average, 22% shorter when projects completed under the new engineering design process were included in the analysis; the maximum observed change was 48%. In terms of actual project duration, this shift indicates that the “average” project completed under the old process would be expected to be 206 days shorter if performed under the new process, and the maximum expected reduction in project duration for an inefficient project under the old design process, had it been conducted under the new design process, was 533 days.

TABLE IV. EFFECT OF NEW ENGINEERING DESIGN PROCESS ON DEA TARGET DURATIONS
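The with-versus-without comparison summarized in Table IV can be reproduced by running an envelopment model against two different reference sets. The sketch below is illustrative only: it assumes an output-oriented, variable-returns-to-scale envelopment formulation solved with SciPy, uses hypothetical project data, and converts the quickness target back into days with the same scalar (2000) used in the transformation; the case study's DEA package and exact model options may differ.

```python
# Illustrative frontier-shift sketch: duration targets with and without the
# new-process projects in the reference set (hypothetical data).
import numpy as np
from scipy.optimize import linprog

K_DURATION = 2000  # scalar used to transform duration into "quickness"

def duration_target(x_o, q_o, X_ref, Q_ref):
    """Output-oriented VRS envelopment LP: expand the evaluated project's
    quickness as far as the reference set allows, then convert back to days."""
    r, m = X_ref.shape
    c = np.zeros(1 + r)
    c[0] = -1.0                                                 # maximize phi
    A_ub, b_ub = [], []
    for i in range(m):                                          # sum(lambda * x_i) <= x_o_i
        A_ub.append(np.concatenate([[0.0], X_ref[:, i]]))
        b_ub.append(x_o[i])
    A_ub.append(np.concatenate([[q_o], -Q_ref]))                # phi * q_o <= sum(lambda * q)
    b_ub.append(0.0)
    A_eq = np.concatenate([[0.0], np.ones(r)]).reshape(1, -1)   # sum(lambda) = 1 (VRS)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (1 + r), method="highs")
    phi = -res.fun
    return K_DURATION - phi * q_o                               # duration target in days

# Hypothetical inputs (reversed effort, officers) and quickness values; project 4
# plays the role of a new-process project that pushes out the frontier.
X = np.array([[35.0, 2.0], [120.0, 3.0], [60.0, 2.0], [90.0, 4.0], [300.0, 5.0]])
Q = np.array([1800.0, 1400.0, 1650.0, 1750.0, 1900.0])
old = [0, 1, 2, 3]
evaluated = 1                                                   # an inefficient old-process project

full_frontier = duration_target(X[evaluated], Q[evaluated], X, Q)
old_frontier = duration_target(X[evaluated], Q[evaluated], X[old], Q[old])
shift = 100 * (old_frontier - full_frontier) / old_frontier
print(f"old frontier: {old_frontier:.0f} days, full frontier: {full_frontier:.0f} days, "
      f"shift: {shift:.0f}%")
```

Averaging this percentage shift across the inefficient old-process projects yields the kind of figure reported in Table IV (22% on average for the case data); with the hypothetical numbers above, the single-project shift is naturally different.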
VI. CONCLUSIONS AND FUTURE RESEARCH

This case study demonstrated how DEA allowed the TSI Department to gain new insight into the impact of changes to its engineering design process by creating a performance index that simultaneously considered project duration and the key input variables that determine project duration. Analysis of the DEA output clearly demonstrated that the changes to the engineering design process were successful in reducing project duration.

Although DEA has been widely applied in the operations research and decision sciences literature, it has not been proposed as a general project evaluation tool within the project management literature. Yet DEA fills a gap not addressed by the project evaluation methods commonly applied by organizations. By explicitly considering differences in project input characteristics when evaluating performance, DEA allowed the case study organization to draw more definitive and robust conclusions regarding the effectiveness of its new engineering design process than it could have achieved through its existing project evaluation practices or other methods. In particular, DEA created comparison groups by identifying projects with similar input characteristics and comparing the performance of each project to its most similar peers. Thus, DEA can overcome the paradigm of project uniqueness by providing organizations with a method of accounting for differences in project input characteristics when measuring performance across projects. Unlike ratio and regression methods, DEA does not require that all projects combine inputs in exactly the same way to produce outputs, thus allowing the flexibility necessary for the analysis of a complex task.

A final outcome of the DEA analysis reported in this case study, as well as of additional analyses not reported here, is that the TSI Department became convinced that DEA is a useful project evaluation tool and has purchased DEA software to perform model computations on an ongoing basis. Additional work includes further analysis of the effectiveness of the engineering design process within the TSI Department, particularly as projects are completed in technical complexity categories 2 and 3. A more general research area is the application of DEA to project portfolio management, a challenging competency area still emerging in the project management domain [3]. Project portfolio management has been explored to some extent for R&D projects (e.g., [84]), but additional research is needed for other types of projects.

Another general research area where DEA can be particularly useful is the identification of best practices and problem areas in project management, both within and across organizations. For instance, DEA efficiency scores can be used to guide case study research by identifying “best performing,” “worst performing,” and “average” projects, which can then be analyzed in detail to identify the specific project management practices that contributed to their relative efficiency or inefficiency (e.g., [56]). For example, the TSI Department is currently using DEA to develop a profile of the types of projects that are good candidates for outsourcing. By examining the worst-performing DMUs in the data set, the TSI Department can identify the distinguishing characteristics of these projects and develop an outsourcing profile. From this analysis, the TSI Department found that projects requiring the purchase of certain types of components with long supply lead-times are one category of projects that is a good candidate for outsourcing. Legal requirements for supplier selection in the military result in substantially longer supply lead-times when these projects are completed by the TSI Department, whereas industry can purchase the needed components more quickly, or may even have them in inventory, thus enabling faster completion of these projects. While the TSI Department can complete these projects at significantly lower cost than industry, this cost efficiency had to be weighed against the tradeoff in project duration, since responsiveness (e.g., reduced project duration) is critical for the Belgian military.

An additional promising avenue for future research is the application of DEA to project planning. The target-setting capability of DEA is a powerful tool that project managers can apply to determine appropriate, objective targets for project duration or other project variables. Accurate and appropriate estimates are vital both to project planning and to goal-based project evaluation methods such as EVM; however, obtaining accurate estimates has often proven problematic [18]. DEA could be used to obtain estimates (targets) for the project duration of new projects by including estimates of the input variables and a dummy variable (i.e., a very large value) for project duration.

Finally, research can further investigate how DEA can be applied to evaluate the performance of ongoing projects by generating an efficiency score using estimated final input and output levels (based on current project performance). If the efficiency score is lower than a desired threshold, and/or indicates that the estimated final levels of some input(s) or output(s) differ from acceptable levels, action can be taken to bring project performance more in line with management targets.

ACKNOWLEDGMENT

The authors wish to thank Lt. Col. I.M.M. Duhamel and Col. I.M.M. Rotsaert for their support of this research.

REFERENCES

[1] S. Newell, “Enhancing cross-project learning,” Eng. Manag. J., vol. 16, no. 1, pp. 12–20, Mar. 2004.
[2] A Guide to the Project Management Body of Knowledge, 3rd ed., Project Management Institute, Newtown Square, PA, 2004, ANSI/PMI 99-001-2004.
[3] T. Cooke-Davies, “The ‘real’ success factors on projects,” Int. J. Project Manag., vol. 20, no. 3, pp. 185–190, Apr. 2002.
[4] J. P. Lewis, The Project Manager’s Desk Reference. New York: McGraw-Hill, 2000.
[5] “Effective Benchmarking for Project Management,” White Paper, Project Management Institute, Newtown Square, PA, 2004.
[6] A Practice Standard for Earned Value Management, Project Management Institute, Newtown Square, PA, 2005.
[7] A. J. Shenhar, A. Tishler, D. Dvir, S. Lipovetsky, and T. Lechler, “Refining the search for project success factors: A multivariate, typological approach,” R&D Manag., vol. 32, no. 2, pp. 111–126, Mar. 2002.
[8] R. J. Might and W. A. Fischer, “The role of structural factors in determining project management success,” IEEE Trans. Eng. Manag., vol. EM-32, pp. 71–77, May 1985.
[9] J. K. Pinto and D. P. Slevin, “Project success: Definitions and measurement techniques,” Project Manag. J., vol. 19, no. 1, pp. 67–73, Feb. 1988.
[10] A. M. M. Liu and A. Walker, “Evaluation of project outcomes,” Construction Manag. Econ., vol. 16, no. 2, pp. 209–219, Mar. 1998.
[11] D. Dvir, S. Lipovetsky, A. Shenhar, and A. Tishler, “In search of project classification: A non-universal approach to project success factors,” Res. Policy, vol. 27, pp. 915–935, 1998.
[12] H. Kerzner, “In search of excellence in project management,” J. Syst. Manag., vol. 38, no. 2, pp. 30–39, Feb. 1987.
[13] M. E. Pate-Cornell and R. L. Dillon, “Success factors and future challenges in the management of faster-better-cheaper projects: Lessons learned from NASA,” IEEE Trans. Eng. Manag., vol. 48, no. 1, pp. 25–35, Feb. 2001.
[14] J. K. Pinto and S. J. Mantel Jr., “The causes of project failure,” IEEE Trans. Eng. Manag., vol. 37, no. 4, pp. 269–275, Nov. 1990.
[15] J. K. Pinto and D. P. Slevin, “Critical factors in successful project implementation,” IEEE Trans. Eng. Manag., vol. EM-34, no. 1, pp. 22–27, Feb. 1987.
[16] ——, “Critical success factors in R&D projects,” Res. Technol. Manag., vol. 32, no. 1, pp. 31–35, Jan./Feb. 1989.
[17] W. Belassi and O. I. Tukel, “A new framework for determining critical success/failure factors in projects,” Int. J. Project Manag., vol. 14, no. 3, pp. 141–151, Jun. 1996.
[18] M. Freeman and P. Beale, “Measuring project success,” Project Manag. J., vol. 23, no. 1, pp. 8–17, Mar. 1992.
[19] S. W. Hughes, D. D. Tippett, and W. K. Thomas, “Measuring project success in the construction industry,” Eng. Manag. J., vol. 16, no. 3, pp. 31–37, Sep. 2004.
[20] A. S. Pillai, A. Joshi, and K. S. Rao, “Performance measurement of R&D projects in a multi-project, concurrent engineering environment,” Int. J. Project Manag., vol. 20, no. 2, pp. 165–177, Feb. 2002.
[21] D. P. Slevin and J. K. Pinto, “The project implementation profile: New tool for project managers,” Project Manag. J., vol. 17, no. 4, pp. 57–70, Sep. 1986.
[22] E. S. Andersen and S. A. Jessen, “Project evaluation scheme: A tool for evaluating project status and predicting results,” Project Manag., vol. 6, no. 1, pp. 61–69, 2000.
[23] A. Charnes, W. W. Cooper, and E. Rhodes, “Measuring the efficiency of decision making units,” Eur. J. Oper. Res., vol. 2, no. 6, pp. 429–444, Nov. 1978.
[24] A. Charnes, W. W. Cooper, A. Y. Lewin, and L. M. Seiford, Eds., Data Envelopment Analysis: Theory, Methodology, and Application. Boston, MA: Kluwer Academic, 1994.
[25] E. Thanassoulis, Introduction to the Theory and Application of Data Envelopment Analysis. Boston, MA: Kluwer Academic, 2001.
[26] B. Hollingsworth, “A review of data envelopment analysis software,” Econ. J., vol. 107, no. 443, pp. 1268–1270, Jul. 1997.
[27] I. Herrero and S. Pascoe, “Estimation of technical efficiency: A review of some of the stochastic frontier and DEA software,” CHEER, vol. 15, no. 1, 2002. [Online]. Available: http://www.economicsnetwork.ac.uk/cheer/ch15_1/dea.htm
[28] Frontier Analyst Professional, ver. 3.0.3, Banxia Software Ltd., Kendal, Cumbria, U.K., 2001.
[29] M. A. Mahmood, K. J. Pettingell, and A. I. Shaskevich, “Measuring productivity of software projects: A data envelopment analysis approach,” Decision Sci., vol. 27, no. 1, pp. 57–80, Winter 1996.
[30] P. D. Chatzoglou and A. C. Soteriou, “A DEA framework to assess the efficiency of the software requirements capture and analysis process,” Decision Sci., vol. 30, no. 2, pp. 503–531, Spring 1999.
[31] P. Kauffmann, R. Unal, A. Fernandez, and C. Keating, “A model for allocating resources to research programs by evaluating technical importance and research productivity,” Eng. Manag. J., vol. 12, no. 1, pp. 5–8, Mar. 2000.
[32] R. G. Dyson, R. Allen, A. S. Camanho, V. V. Podinovski, C. S. Sarrico, and E. A. Shale, “Pitfalls and protocols in DEA,” Eur. J. Oper. Res., vol. 132, no. 2, pp. 245–259, Jul. 2001.
[33] W. W. Cooper, L. M. Seiford, and K. Tone, Data Envelopment Analysis: A Comprehensive Text with Models, Applications, References and DEA-Solver Software. Boston, MA: Kluwer Academic, 2000.
[34] A. Charnes, C. T. Clark, W. W. Cooper, and B. Golany, “A developmental study of data envelopment analysis in measuring the efficiency of maintenance units in the US air forces,” Ann. Oper. Res., vol. 2, pp. 95–112, 1985.
[35] Y. Roll, B. Golany, and D. Seroussy, “Measuring the efficiency of maintenance units in the Israeli air force,” Eur. J. Oper. Res., vol. 27, no. 2, pp. 136–142, Nov. 1989.
[36] R. L. Clarke, “Evaluating USAF vehicle maintenance productivity over time: An application of data envelopment analysis,” Decision Sci., vol. 23, no. 2, pp. 376–384, Mar./Apr. 1992.
[37] S. Sun, “Assessing joint maintenance shops in the Taiwanese army using data envelopment analysis,” J. Oper. Manag., vol. 22, no. 3, pp. 233–245, Jun. 2004.
[38] J. C. Paradi, Smith, and Schaffnit-Chatterjee, “Knowledge worker performance analysis using DEA: An application to engineering design teams at Bell Canada,” IEEE Trans. Eng. Manag., vol. 49, no. 2, pp. 161–172, May 2002.
[39] R. D. Banker, S. M. Datar, and C. F. Kemerer, “Factors affecting software maintenance productivity: An exploratory study,” in Proc. 8th Int. Conf. Information Systems, 1987, pp. 160–175.
[40] ——, “A model to evaluate variables impacting the productivity of software maintenance projects,” Manag. Sci., vol. 37, no. 1, pp. 1–18, Jan. 1991.
[41] J. C. Paradi, D. N. Reese, and D. Rosen, “Applications of DEA to measure the efficiency of software production at two large Canadian banks,” Ann. Oper. Res., vol. 73, pp. 91–115, 1997.
[42] C. Parkan, K. Lam, and G. Hang, “Operational competitiveness analysis on software development,” J. Oper. Res. Soc., vol. 48, no. 9, pp. 892–905, Sep. 1997.
[43] R. D. Banker and C. F. Kemerer, “Scale economies in new software development,” IEEE Trans. Softw. Eng., vol. 15, pp. 1199–1205, Oct. 1989.
[44] R. D. Banker, H. Chang, and C. F. Kemerer, “Evidence on economies of scale in software development,” Inf. Softw. Technol., vol. 36, no. 15, pp. 275–282, May 1994.
[45] R. D. Banker and S. A. Slaughter, “A field study of scale economies in software maintenance,” Manag. Sci., vol. 43, no. 12, pp. 1709–1725, Dec. 1997.
[46] E. Stensrud and I. Myrtveit, “Identifying high performance ERP projects,” IEEE Trans. Softw. Eng., vol. 29, no. 5, pp. 398–416, May 2003.
[47] Z. Yang and J. C. Paradi, “DEA evaluation of a Y2K software retrofit program,” IEEE Trans. Eng. Manag., vol. 51, no. 3, pp. 279–287, Aug. 2004.
[48] M. Oral, O. Kettani, and P. Lang, “A methodology for collective evaluation and selection of industrial R&D projects,” Manag. Sci., vol. 37, no. 7, pp. 871–885, Jul. 1991.
[49] W. D. Cook, M. Kress, and L. M. Seiford, “Data envelopment analysis in the presence of both quantitative and qualitative factors,” J. Oper. Res. Soc., vol. 47, no. 7, pp. 945–953, Jul. 1996.
[50] R. H. Green, J. R. Doyle, and W. D. Cook, “Preference voting and project ranking using DEA and cross-evaluation,” Eur. J. Oper. Res., vol. 90, no. 3, pp. 461–472, May 1996.
[51] J. D. Linton, S. T. Walsh, and J. Morabito, “Analysis, ranking and selection of R&D projects in a portfolio,” R&D Manag., vol. 32, no. 2, pp. 139–147, Winter 2002.
[52] S. A. Thore and L. Lapao, “Prioritizing R&D projects in the face of technological and market uncertainty: Combining scenario analysis and DEA,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 87–110.
[53] S. A. Thore and G. Rich, “Prioritizing a portfolio of R&D activities, employing data envelopment analysis,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 53–74.
[54] C. C. Liu and C. Y. Chen, “A two-dimensional model for allocating resources to R&D programs,” J. Amer. Acad. Bus., vol. 5, no. 1/2, pp. 469–473, Sep. 2004.
[55] H. Eilat, B. Golany, and A. Shtub, “Constructing and evaluating balanced portfolios of R&D projects with interactions: A DEA based methodology,” Eur. J. Oper. Res., 2005, to be published.
[56] D. Verma and K. K. Sinha, “Toward a theory of project interdependencies in high tech R&D environments,” J. Oper. Manag., vol. 20, no. 1, pp. 451–468, Sep. 2002.
[57] B. Yuan and J. N. Huang, “Applying data envelopment analysis to evaluate the efficiency of R&D projects – A case study of R&D energy technology,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 111–134.
[58] E. Revilla, J. Sarkis, and A. Modrego, “Evaluating performance of public-private research collaborations,” J. Oper. Res. Soc., vol. 54, no. 2, pp. 165–174, Feb. 2003.
[59] D. K. Chai and D. C. Ho, “Multiple criteria decision model for resource allocation: A case study in an electric utility,” INFOR, vol. 36, no. 3, pp. 151–160, Aug. 1998.
[60] S. A. Thore and F. Pimentel, “Evaluating a portfolio of proposed projects, ranking them relative to a list of existing projects,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 233–246.
[61] J. D. Linton and W. D. Cook, “Technology implementation: A comparative study of Canadian and U.S. factories,” INFOR, vol. 36, no. 3, pp. 142–150, Aug. 1998.
[62] J. S. Busby and A. Williamson, “The appropriate use of performance measurement in non-production activities: The case of engineering design,” Int. J. Oper. Production Manag., vol. 20, no. 3, pp. 336–358, Mar. 2000.
[63] E. Van Aken, D. Van Goubergen, and G. Letens, “Integrated enterprise transformation: Case application in a project organization in the Belgian armed forces,” Eng. Manag. J., vol. 15, no. 2, pp. 3–16, Jun. 2003.
[64] Systems Engineering – System Life Cycle Processes, ISO/IEC 15288:2002, International Organization for Standardization, Geneva, 2002.
[65] D. Gerwin and N. J. Barrowman, “An evaluation of research on integrated product development,” Manag. Sci., vol. 48, no. 7, pp. 938–953, Jul. 2002.
[66] M. Ainscough and B. Yazdani, “Concurrent engineering within British industry,” Concurrent Eng.: Res. Applicat., vol. 8, no. 1, pp. 2–11, Mar. 2000.
[67] H. Maylor and R. Gosling, “The reality of concurrent new product development,” Integrated Manuf. Syst., vol. 9, no. 2, pp. 69–76, 1998.
[68] B. Haque, “Problems in concurrent product development: An in-depth comparative study of three companies,” Integrated Manuf. Syst., vol. 14, no. 3, pp. 191–207, May 2003.
[69] N. Bhuiyan, D. Gerwin, and V. Thomson, “Simulation of the new product development process for performance improvement,” Manag. Sci., vol. 50, no. 12, pp. 1690–1703, Dec. 2004.
[70] C. Terwiesch and C. H. Loch, “Measuring the effectiveness of overlapping development activities,” Manag. Sci., vol. 45, no. 4, pp. 455–465, Apr. 1999.
[71] K. Eisenhardt and B. Tabrizi, “Accelerating adaptive processes: Product innovation in the global computer industry,” Admin. Sci. Quart., vol. 40, no. 1, pp. 84–110, Mar. 1995.
[72] A. Griffin, “The effect of project and process characteristics on product development cycle time,” J. Marketing Res., vol. 34, no. 1, pp. 24–35, Feb. 1997.
[73] H. D. Sherman, “Managing productivity of health care organizations,” in Measuring Efficiency: An Assessment of Data Envelopment Analysis, R. H. Silkman, Ed. San Francisco, CA: Jossey-Bass, 1986, pp. 31–46.
[74] H. Kerzner, In Search of Excellence in Project Management. New York: Van Nostrand Reinhold, 1998.
[75] A. P. C. Chan and A. P. L. Chan, “Key performance indicators for measuring construction success,” Benchmarking, vol. 11, no. 2, pp. 203–221, 2004.
[76] O. I. Tukel and W. O. Rom, “An empirical investigation of project evaluation criteria,” Int. J. Oper. Production Manag., vol. 21, no. 3, pp. 400–416, Mar. 2001.
[77] A. J. Shenhar and D. Dvir, “Toward a typological theory of project management,” Res. Policy, vol. 25, no. 4, pp. 607–632, Jun. 1996.
[78] V. S. D. Krishnan, “Managing the simultaneous execution of coupled phases in concurrent product development,” IEEE Trans. Eng. Manag., vol. 43, no. 3, pp. 210–217, May 1996.
[79] T. Raz, A. J. Shenhar, and D. Dvir, “Risk management, project success, and technological uncertainty,” R&D Manag., vol. 32, no. 2, pp. 101–109, Mar. 2002.
[80] R. D. Banker, A. Charnes, and W. W. Cooper, “Some models for estimating technical and scale inefficiencies in data envelopment analysis,” Manag. Sci., vol. 30, no. 9, pp. 1078–1092, Sep. 1984.
[81] H. Scheel, “Undesirable outputs in efficiency valuations,” Eur. J. Oper. Res., vol. 132, no. 2, pp. 400–410, Jul. 2001.
[82] P. L. Brockett and B. Golany, “Using rank statistics for determining programmatic efficiency differences in data envelopment analysis,” Manag. Sci., vol. 42, no. 3, pp. 466–472, Mar. 1996.
[83] T. Sueyoshi and S. Aoki, “A use of a nonparametric statistic for DEA frontier shift: The Kruskal and Wallis rank test,” Omega, vol. 29, no. 1, pp. 1–18, Feb. 2001.
[84] B. Golany and S. A. Thore, “On the ranking of R&D projects in a hierarchical organizational structure subject to global resource constraints,” in Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation, S. A. Thore, Ed. Boston, MA: Kluwer Academic, 2002, pp. 253–274.

Jennifer A. Farris received the B.S. degree in industrial engineering from the University of Arkansas, Fayetteville, and the M.S. degree in industrial and systems engineering from Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg. She is currently working toward the Ph.D. degree in the Grado Department of Industrial and Systems Engineering, Virginia Tech, where she is a Graduate Research Assistant in the Enterprise Engineering Research Laboratory. Her research interests are in performance measurement, lean production, kaizen event processes, product development, and project management. She is a member of IIE and Alpha Pi Mu.

Richard L. Groesbeck received the B.S. degree in civil engineering from Brigham Young University, Provo, UT, the M.B.A. degree from Case Western Reserve University, Cleveland, OH, and the Ph.D. degree in industrial engineering from Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg. He is currently a Research Assistant Professor in the Grado Department of Industrial and Systems Engineering at Virginia Tech. His research interests are performance measurement and team-based work systems. Prior to receiving the Ph.D. degree, he spent more than 20 years in industry, working for U.S. Steel, Ore-Ida, and Clorox in a variety of engineering and manufacturing management positions, including field engineer, design engineer, production supervisor, plant manager, and Division Technology Manager. Dr. Groesbeck is a member of IIE, a senior member of ASQ, a Certified Quality Engineer, and an Examiner for the U.S. Senate Productivity and Quality Award for Virginia.

Eileen M. Van Aken received the B.S., M.S., and Ph.D. degrees in industrial engineering from Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg. She is currently an Associate Professor and Assistant Department Head in the Grado Department of Industrial and Systems Engineering, Virginia Tech. She also serves as Director of the Enterprise Engineering Research Laboratory, where she conducts research and teaches in the areas of performance measurement, organizational transformation, lean production, and team-based work systems. Prior to joining the faculty at Virginia Tech, she was a Process Engineer with AT&T Microelectronics, Richmond, VA. Dr. Van Aken is a member of ASEM, ASQ, and ASEE, a senior member of IIE, and a fellow of the World Academy of Productivity Science.

Geert Letens received the M.S. degree in telecommunications engineering from the Royal Military Academy, Brussels, Belgium, the M.S. degree in mechatronics from Katholieke Universiteit Leuven, Leuven, Belgium, and the M.S. degree in total quality management from Limburg University Center, Diepenbeek, Belgium. He is currently the TQM Coordinator of the Competence Center on Communications and Information Systems of the Belgian Armed Forces. He also serves as an external military Professor in the Department of Management and Leadership at the Royal Higher Defense Institute, Laken, Belgium. His research interests are performance measurement, organizational transformation, lean six sigma for service, product development, and project management. He has several years of consulting experience in the areas of organizational change and management systems as President of ChI Consulting. Dr. Letens is a member of ASEM, IIE, SAVE, and PMI.