Managing Outsourced Software Projects:
An Analysis of Project Performance and Customer Satisfaction
Revised Version
Sriram Narayanan*
Sridhar Balasubramanian**
Jayashankar M. Swaminathan***
January 17, 2009
THIS IS A DRAFT VERSION. PLEASE DO NOT QUOTE OR CITE WITHOUT PERMISSION
* Department of Supply Chain Management, Eli Broad School of Business, Michigan State University,
East Lansing, MI 48834. Tel: (517) 432-6432; Email: sriram@msu.edu
**Department of Marketing, The Kenan-Flagler Business School, The University of North Carolina at Chapel
Hill, Campus Box 3490, McColl Building, Chapel Hill NC 27599-3490. Phone: (919) 962-3194; Fax: (919)
962-7186; Email: sridhar_balasubramanian@unc.edu
***Department of Operations, Technology, and Innovation Management, Kenan-Flagler Business School,
University of North Carolina at Chapel Hill, Campus Box 3490, McColl Building, Chapel Hill NC 27599-3490. Tel: 919-843-8341; Fax: 919-843-4897; Email: msj@unc.edu
Electronic copy available at: http://ssrn.com/abstract=1548244
Managing Outsourced Software Projects:
An Analysis of Project Performance and Customer Satisfaction
We examine the drivers of project performance and customer satisfaction in outsourced software projects
using a proprietary panel dataset. The data cover 822 customer observations related to 182 unique projects
executed by an India-based software services vendor. Adopting a multidisciplinary perspective, we
investigate how project planning, team stability, and communication effectiveness (antecedent variables)
impact project performance and customer satisfaction. We delineate the direct and interactive influences
of the antecedent variables. We also examine how these influences are moderated by two important
project contexts: (a) the nature of software work (Maintenance & Development versus Testing projects)
and (b) project maturity (New versus Mature projects). Among other results, we demonstrate that when
project planning capabilities are high, the positive impact of team stability and communication
effectiveness on project performance is even higher. In addition, our results suggest that the impact of
communication on project performance is muted when team stability is high. Finally, we also demonstrate
that the impact of the antecedent variables on project performance varies with the nature of software
work. Our findings offer specific and actionable insights to managers that can help them manage
outsourced projects better, and open up new research perspectives in the context of outsourced project
management.
Keywords: Outsourcing, Software Projects, Project planning, Customer Satisfaction, Operations
Management.
1. INTRODUCTION
Modern firms frequently leverage workforce skill sets, time differences, and differential costs to
outsource business activities (Kulkarni 2009). The total global spending on information technology,
including business process outsourcing (IT-BPO), was about $967 billion in 2008 (NASSCOM 2009).
The annual revenues of the India-based IT-BPO services industry were estimated at $71.7 billion during
2009 (NASSCOM 2009). Despite the extensive growth of technology outsourcing, reports of customer
dissatisfaction with the delivery of outsourced services have often surfaced in the popular press.
According to McEachern (2005), “Fifty-one percent of respondents reported terminating an outsourcing
contract. On the satisfaction side, 62 percent of respondents said they were satisfied with their
outsourcing relationships, down from 79 percent a year ago.”
Creating and maintaining satisfied customers can help a company increase profits, and ultimately,
shareholder value (Fornell et al. 2009; Fornell et al. 2006). The existing research on software outsourcing
does not address issues related to customer satisfaction and project performance in outsourced projects
(for a detailed literature review of IT outsourcing, see Dibbern et al. 2004). Against this backdrop, our
research examines the following questions. How do antecedent variables such as project planning,
communication effectiveness, and team stability influence project performance and customer satisfaction
in outsourced software projects? How do task contexts related to the nature of work and project maturity
moderate these influences?
Our work differs from the current literature on software projects in the following ways. First,
there are no existing studies on the drivers of customer satisfaction in outsourced software projects (see
Ramasubbu et al. 2008a for domains of existing customer satisfaction studies). Two related studies
examine customer satisfaction in the software domain, but neither focuses on outsourced projects.
Specifically, Ramasubbu et al. (2008a) examine how technical or behavioral skills of support personnel
drive customer satisfaction in the context of enterprise software systems (ESS). Kekre et al. (1995) study
user satisfaction with attributes of a software product. Further, outsourced software service providers have
gradually moved beyond developing technical skills and have pursued a broader set of capabilities
covering project management, communication, and turnover management (Ethiraj et al. 2005; Swink et
al. 2006; Gopal et al. 2002). However, little is known about the pathways through which these capabilities
ultimately drive customer satisfaction. Our work addresses these gaps in knowledge.
Second, the nature of work and the levels and kinds of uncertainties involved can vary across
software project types (Cusumano et al. 2003). While some studies have examined the drivers of software
project performance (e.g., Deephouse et al. 1996), evidence on how these drivers vary across project types
is limited. Nidumolu (1995) examines how residual risk mediates the impact of communication on project
performance. Barki et al. (2001) suggest that project performance is contingent on adequate fit between
software development practices and project uncertainty levels. MacCormack and Verganti (2003)
examine how requirement uncertainties in web development projects moderate the impact of development
practices on performance. We add to existing insights by examining how the drivers of project
performance and customer satisfaction vary by: (a) the nature of software task (Maintenance &
Development versus Testing projects) and (b) project maturity (New versus Mature projects).
Third, in addition to these external contexts, the antecedent variables that affect project
performance and customer satisfaction may also mutually moderate their impacts on these outcomes. To
examine such influences, we examine the interactive effects of these variables on project outcomes.
Fourth, managing turnover is important for software projects in general, and for offshore software
projects in particular (Gopal et al. 2003; Narayanan and Swaminathan 2007). Despite frequent discussions
in the trade press about the negative effects of turnover, there is little empirical evidence about how team
stability impacts software project performance and customer satisfaction. We explore this issue.
Finally, to the best of our knowledge, the related literature in the software domain has
consistently employed cross sectional data to examine project performance as a dependent variable. We
employ an estimation approach based on longitudinal data with multiple observations for each project.
In §2, we develop the theoretical perspectives that link key antecedent variables to customer
evaluations of project performance and overall satisfaction. Research design and measurement issues are
described in §3. The empirical findings and contextual effects are detailed in §4. In §5, we present the
managerial implications and limitations of our study, and describe some ideas for future research.
2. THEORY
Figure 1 outlines our conceptual model. We first consider the antecedents that influence customer
satisfaction. Next, we consider the antecedents that influence project performance. In developing the
hypotheses, we frequently draw on the structural contingency perspective (Miller 1992; Nidumolu 1995;
Barki et al. 2001) and the risk-based perspective of software development (Nidumolu 1995). In addition,
consistent with the interdisciplinary nature of our work, we also draw on the marketing and service
operations literatures to motivate our arguments.
---------------- INSERT FIGURE 1 ABOUT HERE ----------------
2.1 Customer Satisfaction
Customer satisfaction is the evaluative response of the client to the services rendered by the
provider (Wirtz and Bateson 1999). While customer satisfaction can be modeled as a function of the gap
between client expectations and the service provider’s performance (Berry et al. 1985), expectations may
be unclear in the context of complex software services and satisfaction may instead be based on
subjective client experiences with delivered services. Direct, customer experience-based measures of
satisfaction are appropriate in such scenarios (Rust et al. 1999). Such measures have been adopted in
other studies in technology-intensive service settings (e.g., Balasubramanian et al. 2003; Krishnan and
Ramaswamy 1999; Ramasubbu et al. 2008a). Accordingly, we measure customer satisfaction as the
overall satisfaction of the client with the service provider team.
Achieving high customer satisfaction is particularly challenging when services involve intensive
customization, complex tasks, remote operations, and contract workers who can be influenced only to a
limited extent (Stewart 2003). The outsourced software projects we study possess these characteristics.
Against this backdrop, we can expect that, first, output timeliness and quality will comprise important
components of overall delivery expectations and can positively impact customer satisfaction.
Second, when customers evaluate services, the interactions that lead to the final outcome may
drive satisfaction in parallel with the outcome itself (Brown and Swartz 1989; Danaher and Mattsson
1994). These interactions are crucial in software projects because the priorities and emerging issues are
often unclear to both customer and service provider. Project planning helps resolve such uncertainty
through frequent exchanges of information that keep the customer continuously involved (Stewart 2003;
Bendoly and Swink 2007). Further, because such planning allows the customer to be involved throughout,
it can increase satisfaction through self-attribution effects (Konana and Balasubramanian 2005).
Third, communication effectiveness can enhance customer perceptions of delivered service
quality (Berry et al. 1985). Further, effective communication can create customer confidence about the
deliverables and provide reassurance that customer needs are being addressed. This is particularly
important in a context where the development activity takes place in a distant, unfamiliar location.
Finally, customers often build strong relationships with individual service-provider employees in
knowledge-intensive service domains. These relationships, which can develop customer confidence in the
competence of specific individuals and lower perceptions of project risk, may even be stronger than the
institutional relationship between the parties (Czepiel 1990). When personnel turnover does occur,
customers are less dissatisfied if the replacement process is properly managed and if they have some input
into the replacement process (Bendapudi and Leone 2002). Therefore:
H1: Project performance, project planning, communication effectiveness, and team stability have a
positive effect on customer satisfaction.
2.2 Project Performance
Project performance is measured by the quality and timeliness of the team in meeting the
expected deliverables. The literature on software project performance has been primarily anchored in the
structural contingency perspective (Nidumolu 1995) and the risk-based perspective of software
development (Barki et al. 2001; Nidumolu 1995). Underlying the former is an information processing
view of the organization (Galbraith 1977). Intensive information processing and exchange can reduce the
negative effects of uncertainty (Barki et al. 2001; Nidumolu 1995) and enhance project performance (i.e.
the quality and timeliness of software delivery). In offshore outsourcing, relationships are characterized
by the lack of frequent face-to-face encounters, time zone differences, and cultural divides – this enhances
the role of effective communication (Wright 2005). Effective communication in the offshore outsourcing
context involves two distinct facets. First, the service provider has to provide frequent, timely, and
complete reports of project progress to the customer. Second, the service provider has to effectively
articulate issues, many of which are technically complex and prone to multiple interpretations, through
oral and written communication. Accordingly, communication effectiveness is determined by (i)
communication intensity – as measured by the frequency and quality of reporting and (ii) communication
ability – as measured by the ability of the offshore counterparts to articulate issues when interacting
through conference calls or emails. Such communication skills are pivotal inputs into effectively
managing the project and reducing customer’s perception of project uncertainty (Apte et al. 1997;
Nidumolu 1995).
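The two-facet construct described above can be made concrete with a small, purely illustrative sketch. The item scores and the equal weighting of the facets below are assumptions for exposition only; the paper measures these constructs with Likert-scale survey items (see Table 1), not with this code.

```python
# Illustrative sketch (not the paper's instrument): communication effectiveness
# as a composite of (i) communication intensity -- frequency and quality of
# reporting -- and (ii) communication ability -- articulation of issues.
# All item scores below are invented 1-5 Likert ratings.

def composite_score(item_ratings):
    """Average the Likert item ratings (1-5) belonging to one facet."""
    return sum(item_ratings) / len(item_ratings)

def communication_effectiveness(intensity_items, ability_items):
    """Equal-weight composite of the two facets (an illustrative assumption,
    not the paper's stated weighting)."""
    return 0.5 * composite_score(intensity_items) + 0.5 * composite_score(ability_items)

score = communication_effectiveness(intensity_items=[4, 5, 4], ability_items=[3, 4])
```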
Effective communications can reduce uncertainty through multiple means. First, by providing a
clear idea about customer needs and about the level, kind, and temporal scheduling of resources that
would be required to meet those needs, such communications can induce superior customer perceptions of
project planning (Gopal et al. 2002). Second, frequent and effective communications help the developer
keep track of the shifting priorities of customers, and help align development resources to be responsive
to the requirements of customers. Third, frequent and effective communications can help reduce the gap
between the current state of the project and the customer’s desired trajectory.
Further, we expect that the positive influence of communication effectiveness will be accentuated
when the team possesses strong project planning capabilities. Project planning can help manage
uncertainties through effective planning, estimation, and prioritization of project activities, and the
proactive management of potential problems within the project environment (Nidumolu 1995). These
uncertainties include: (a) changes in project requirements that can call for substantial and unanticipated
rework; (b) the implementation of promised changes in software specifications and functionality that call
for effort and time allocations that far exceed initial estimates; and (c) the implementation of features and
functionality that require expertise that may take longer to learn than expected (see Barki et al. 1993 for a
summary of risks). Effective communications enable timely and relevant knowledge sharing between the
parties and a sound understanding of mutual expectations. However, to ultimately affect project outcomes, this information must be leveraged within a strong project planning process. Otherwise,
information that is generated through robust communications will not be effectively translated into
project-level decisions and actions that will ultimately impact the quality and timeliness of service
delivery. Therefore:
H2: Communication effectiveness positively impacts project performance, and this impact is higher when
project planning capabilities are high.
In knowledge-intensive work environments, productivity and work quality are impaired when
employees exit the team (Napoleon and Gaimon 2004; Narayanan et al. 2009). When teams are stable, the
service provider can adhere to planned estimates, execute work without disruption, and better manage the
changes in priorities – this enables superior project planning (Abdel-Hamed 1989). When turnover is
high, there is limited information about the new entrants’ capabilities (Höffler and Sliwka 2002) – this
leads to non-optimal work allocation and incorrect expectations. Second, the management of project
priorities is affected because experienced team members need to guide the new entrants up the learning
curve (Chapman 1998; Huckman and Staats 2009). Further, the knowledge resident in the departing
members, which is often of a tacit nature, has to be documented and absorbed by others within the team.
Finally, reduced stability decreases the team’s ability to recognize and manage sources of risk. New team
members are less likely to identify problems at an early stage, and are less capable of taking quick
remedial actions.
Team stability is likely to be particularly useful when project planning capabilities are strong.
Good project planning involves the proactive estimation and planning of work schedules. When project
planning capability is high, the presence of stable teams enables even better project performance. When
teams are stable, there is less uncertainty around the plans committed to the customer, owing to the
reliable pool of knowledge and capabilities available within the team. This enables the team to meet the
planned deadlines, enhancing its credibility in the eyes of the customer. Finally, the presence of superior
planning capabilities enables managers to better leverage the stable set of knowledge and capabilities
embedded within the team to respond to disruptions and schedule slippages. Therefore:
H3: Team stability positively impacts project performance, and this impact is higher when project
planning capabilities are high.
Cultural differences between service providers and customers can lead to communication
problems in the context of offshore outsourcing (Kobayashi-Hillary 2005; Wright 2005). Engineers in
software service firms often undergo intensive training to help them understand the customer’s native
culture and associated social etiquettes (Abdel-Hamed 1989). However, communication skills are seldom
well developed during a short training course or in a formal learning environment. Learning about the
client culture and developing a communication style that fits the environment take time (Torbiron
1982). Further, the engineers must develop the ability to communicate in a mutually understandable
technical language that reflects knowledge about the customer’s software application domain and the way
that domain maps into the software design (Curtis et al. 1988).
When teams are stable, the customer can form a close relationship with individual team members
and better understand their communication patterns. Based on this knowledge of the service provider’s
team, the customer can arrive at more confident estimates of the team’s ability to meet project goals.
Correspondingly, frequent and explicit communication of mutual expectations and progress reports
between parties may be substituted by an enhanced mutual understanding of requirements and
commitments, and a shared trust that the parties will fulfill responsibilities at each end. In contrast, when
teams are not stable, the parties have to engage in frequent and detailed communication to ensure that the
correct expectations are set and that progress is continually tracked. In addition, when there is some team
turnover, the service provider has to assure the customer counterpart that the task in hand is not affected
due to the reallocation of activities, describe procedures implemented to minimize work disruption, and
may sometimes even need to vet potential new team members with the customer. Therefore:
H4: The impact of communication effectiveness on project performance is lower when team stability is
high.
3. RESEARCH DESIGN
3.1 Research Setting and Data
Our data are sourced from a large, export-oriented, India-based software services company with
over 20,000 employees and over $500 million in annual revenues at the time of the study. The data were
based on software projects that were executed by the firm for a single, large global client over multiple
years. The service provider’s operations were consistent with the concept of a “dedicated” offshore center
– all resources in the center served a single client. The center was certified at SEI-CMM Level 4 at the
time of the study. Despite a common process framework, there was some variation in the implementation
of quality processes across projects – this is consistent with other studies that have examined software
engineering practices implemented under the SEI-CMM framework (e.g., Ramasubbu et al. 2008b).
As a first step, we spent two months at the service provider’s sites in India to interview managers,
project leads, and quality supervisors. This facilitated a first-hand understanding of the key issues involved
in project execution. In parallel, we collected data on customer satisfaction and productivity-related issues
across multiple projects. We also learned about the nature of recruiting and training within the
organization. Software engineers were recruited mainly straight out of undergraduate engineering
programs. A small fraction of the engineers did not have a formal engineering background – instead, they
held post-graduate qualifications in software programming or other technical areas. Some engineers were
hired directly from competitors. Every engineer underwent introductory training that comprised a mix of
technical, quality, and cultural training. In addition, the field work allowed us to examine the variation in
project types managed by the firm and the operational measures in place to improve engineer productivity
and manage turnover (new member entry and exit) in the project team.
Consistent with Kraut and Streeter (1995), a project in our context is defined as a group of
members engaged in a specific software activity who report to a supervisor and have an associated project
sponsor from the client side. Projects could focus on software development or maintenance and testing
tasks, and could be located within the application or system software domains. Our definition of a
software project is consistent with that of Wright (2005), who notes: “… (a) project could be anything
from new development to the conversion of an existing application, support or maintenance… The
outsourcing partner can contribute with any of the roles except the project sponsor role.” Our unit of
analysis is a single instance of an evaluation of an offshore project team by a client manager (sponsor).
Each evaluation consists – among other variables – of the client manager’s ratings of project performance,
project planning, communication, team stability, and overall satisfaction. The data were collected at six-month intervals. Importantly, only the primary project sponsor who directly interfaced with the service
provider team evaluated the team. If the initial project sponsor was not associated with the project during
the survey administration, all the other client managers associated with the project during the time period
responded to the survey and their responses were averaged.
Given the nature of the survey, we could collect longitudinal data for each project across multiple
time periods. Our overall data comprised surveys related to 182 unique projects – these yielded
822 usable observations. Projects in the dataset had a minimum of two observations each. Ten projects
had the maximum number of 10 observations each. The average number of observations per project was
4.51. The surveys were designed to capture perceptual ratings of the constructs discussed in §2 using 5-point Likert scales (1 = strongly disagree, 5 = strongly agree). Non-response bias was not an issue
because feedback was submitted for over 95% of the projects. Further, the examination of multiple
projects within a single client-service provider dyad naturally controls for numerous variations that would
occur if the observations were spread across companies.
3.2 Scale Validity and Reliability
Given the archival nature of the survey, we rigorously validated the scales used. First, content
validation was done by mapping the scales to those used in the software project management literature
(see Table 1). Next, we performed a Confirmatory Factor Analysis (CFA) using LISREL 8.72 for the
entire set of 822 observations. The RMSEA for the CFA was 0.048 (chi-square (44) = 125.4). The GFI
and AGFI were 0.974 and 0.955 respectively.1 The CFA revealed that the average variance explained (AVE or R-square) for each item was greater than 50%, except for managing interim goals and quality of delivery; for these two items, the AVE was 43%.
Second, as a robustness check, we performed a CFA on only the first round of data from the 182
unique projects. The RMSEA for the new dataset was 0.052 (chi-square (44) = 66.08). The GFI and AGFI
1 We also performed an Exploratory Factor Analysis (EFA) with varimax rotation on two random subsets of 300 and 200 observations from the data. In both cases, the factor loadings for each of the constructs were above 0.5, and about 77% of the overall variance in each sample was explained.
were 0.94 and 0.90 respectively. Each construct had a Cronbach’s alpha well over the suggested 0.70
threshold (Nunnally 1978) in both the overall sample and unique projects – see Table 1. Further, all t-values in both the CFAs were greater than 10, suggesting good convergent validity.
Third, we tested the constructs for discriminant validity by comparing an unconstrained model
with each pair of constructs grouped together and a constrained model with the covariance between the
two constructs set to unity (Venkatraman 1989). A significant difference in chi-square between the
models indicated that the constructs were distinct. Our tests revealed that each construct was distinct
(Table 2). Overall, these findings suggest that the scales are robust.
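The pairwise test described above can be sketched as follows: fixing the covariance between two constructs to unity removes one free parameter, so the chi-square difference between the constrained and unconstrained models is evaluated against one degree of freedom. The fit statistics in the example are fabricated, not the paper's.

```python
# Sketch of the discriminant-validity test described above: a model with the
# covariance between two constructs constrained to unity is compared against
# the unconstrained model. Fixing one parameter yields a 1-df chi-square
# difference test. The fit statistics below are invented for illustration.

CHI2_CRIT_DF1_P05 = 3.841  # 95th percentile of the chi-square(1) distribution

def constructs_distinct(chi2_constrained, chi2_unconstrained):
    """A significant chi-square difference implies the constructs are distinct."""
    return (chi2_constrained - chi2_unconstrained) > CHI2_CRIT_DF1_P05

# Fabricated example pair:
distinct = constructs_distinct(chi2_constrained=141.9, chi2_unconstrained=125.4)
```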
Finally, given that the service provider collected survey data from a single respondent for each
project, one could potentially be concerned about common method bias. We address this issue in § 4.3.1.
---------------- INSERT TABLES 1 AND 2 HERE ----------------
3.3 Control variables
First, team size can influence project management abilities and project performance (Kraut and
Streeter 1995). We measure size as the total person-months of effort expended on the project (e.g., Kraut
and Streeter 1995). This measure accounts for team additions and attrition over time, as well as the total
expended effort (Ethiraj et al. 2005). To address team turnover, managers at the studied service provider
attempted to cross train employees on multiple tasks. However, team turnover in smaller projects is
particularly detrimental because managers have limited flexibility in cross training the limited staff on
such teams (Narayanan et al. 2009). Therefore, we introduce team size as a control and also allow for an
interaction between team size and team stability (see Appendix for scale items).
Second, we control for: (a) the type of project (M&D versus Testing); and (b) relative project
maturity (New versus Mature).
Third, we control for the fraction of total time available to the client manager that he or she
expends in interfacing with the service provider’s offshore team (management overhead). During our field
work, managers at the offshore service provider suggested that their services reduced the required
attention of the client managers on the software management front and freed them up to focus on other
important tasks within the client firm – the management overhead variable controls for this effect.
Fourth, we control for the perception of the productivity of the service provider team in relation
to their local team based within the customer’s organization. This control is designed to accommodate the
effects of any relative productivity comparisons that the client manager may make in evaluating the
offshore service provider. Finally, to control for structural changes over time, we include dummy
variables to represent the time period during which the data was collected.
4. ANALYSIS AND FINDINGS
4.1 Analysis
Descriptive statistics are in Table 3. We estimated the models in equations (1) and (2) below, in the
sequence described in Table 4. The variables outlined in equations (1) and (2) are enumerated in Table 1.
SAT_{it} = \beta_{1,0} + \beta_{1,1} LN(Size)_{it} + \beta_{1,2} DT + \beta_{1,3} NM + \beta_{1,4} PP_{it} + \beta_{1,5} PPLAN_{it} + \beta_{1,6} COMM_{it} + \beta_{1,7} STAB_{it} + \sum_{k=8}^{16} \beta_{1,k} TD_k + \eta_i + \epsilon_{it}    (1)

PP_{it} = \beta_{2,0} + \beta_{2,1} LN(Size)_{it} + \beta_{2,2} DT + \beta_{2,3} NM + \beta_{2,4} PROD + \beta_{2,5} MGTO + \beta_{2,6} PPLAN_{it} + \beta_{2,7} COMM_{it} + \beta_{2,8} STAB_{it} + \beta_{2,9} STAB_{it} \times COMM_{it} + \beta_{2,10} PM_{it} \times COMM_{it} + \beta_{2,11} STAB_{it} \times PM_{it} + \beta_{2,12} STAB_{it} \times LN(Size)_{it} + \sum_{k=13}^{21} \beta_{2,k} TD_k + \eta_i + \epsilon_{it}    (2)
We used scale averages for each latent variable in equations (1) and (2). To avoid problems with
multicollinearity, we grand-mean centered the continuous variables (Kreft et al. 1995).2 Our model
contained both fixed effects (corresponding to the project types and time) and random effects (to capture
between-project variability). Therefore, we used a mixed effects estimation approach using the
XTMIXED procedure in STATA. This approach also allows us to (a) incorporate repeated project
measures; (b) examine higher level random effects related to individual projects nested within project
types; and (c) perform likelihood ratio tests that facilitate the comparison of nested models.
We performed other specification checks. First, we treated the projects as nested within project
type (primarily M&D and Testing) and examined the variance at the project type level. Further, we
checked for the presence of cross random effects between projects and their maturity classification, given
that each project matures over time. None of the higher level variances, including the cross random effect
2 Kreft et al. (1995) note that, when interaction terms are used, the coefficients of grand-mean centered measures in hierarchical data are equivalent to the coefficients of the raw score measures.
parameters, were significant. Finally, we examined alternative estimation methods including Restricted
Maximum Likelihood and Maximum Likelihood. Our findings were robust across these methods.
------------------- INSERT TABLES 3 AND 4 HERE -------------------
4.2 Findings
Our estimates are detailed in Table 4. All the antecedent variables have a positive influence on
customer satisfaction (Model OS2 – Table 4) – this supports H1. Further, the likelihood ratio test suggests
that the model with the antecedent variables is superior to one with only control variables (Likelihood
Ratio chi-square (4): 686.21, p <0.01, Model OS2, Table 4). As posited in H2-H4, the direct effects of
Project planning (PPLAN), Communication effectiveness (COMM) and Team Stability (TS) positively
influence Project performance (PP) (Model PP2, Table 4).
Focusing on the interaction effects posited in H2-H4, the positive effect of communication on
project performance is strengthened in the presence of superior project planning capabilities (β=0.160,
p<0.01, Model PP3, Table 4) – this validates H2. Next, the positive effect of team stability on project
performance is strengthened in the presence of superior project planning (β=0.080, p<0.05, Model PP3,
Table 4), i.e., H3 is supported. Further, the positive effect of communication effectiveness on project
performance is weakened when team stability is high, supporting H4 (β = -0.203, p<0.01, Model PP3,
Table 4).
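How the H4 interaction is read can be sketched numerically: with a negative stability-by-communication coefficient, the marginal effect of communication on project performance shrinks as team stability rises. Both coefficients in the sketch are hypothetical placeholders; only the sign of the interaction mirrors the estimate reported above.

```python
# Reading the H4 interaction: the marginal effect of communication declines as
# team stability rises. Coefficients are hypothetical placeholders.

B_COMM = 0.40          # hypothetical direct effect of communication
B_STAB_X_COMM = -0.20  # hypothetical negative STAB x COMM interaction (H4)

def marginal_effect_of_comm(stab_centered):
    """d(PP)/d(COMM), evaluated at a grand-mean-centered stability level."""
    return B_COMM + B_STAB_X_COMM * stab_centered

effect_low_stability = marginal_effect_of_comm(-1.0)   # larger effect
effect_high_stability = marginal_effect_of_comm(+1.0)  # smaller effect
```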
Finally, as an overall test for the interactive influences of the antecedent variables on project
performance, we examined the likelihood ratio test comparing models PP2 (which includes only direct
effects and no interactions) and PP3 (which also includes all posited interactions) in Table 4. The test
suggests that the model PP3 is superior (Likelihood Ratio chi-square (4): 40.66, p<0.01).
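The nested-model comparison above follows the standard likelihood ratio construction, LR = 2(loglik_full - loglik_reduced), referred to a chi-square distribution with degrees of freedom equal to the number of added parameters (here, four interactions). The log-likelihood values in the sketch are invented, not the paper's.

```python
# Sketch of the nested-model likelihood ratio test used above. The
# log-likelihood values below are invented for illustration.

CHI2_CRIT_DF4_P01 = 13.277  # 99th percentile of the chi-square(4) distribution

def lr_statistic(loglik_reduced, loglik_full):
    return 2.0 * (loglik_full - loglik_reduced)

def prefer_full_model(loglik_reduced, loglik_full, crit=CHI2_CRIT_DF4_P01):
    """True when the added interactions jointly improve fit at the 1% level."""
    return lr_statistic(loglik_reduced, loglik_full) > crit

# Fabricated PP2-style (direct effects only) vs PP3-style (with interactions):
lr = lr_statistic(loglik_reduced=-820.5, loglik_full=-800.2)
```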
4.2.1 Contextual Effects: An Exploratory Analysis
We employed a two-fold classification of projects: (a) Maintenance and Development (M&D)
versus Testing projects and (b) New versus Mature projects. M&D projects focused on new code
development, or on improving, rectifying, and modifying software in response to maintenance requests.
In contrast, testing projects focused on verification testing and test automation, including the
implementation of well-defined, customer-specified test routines, followed by clear reporting of the
test findings. This also involved the development and execution of standard test scripts to automate
verification testing. Managers in charge of M&D projects had to cope with the random arrival of requests
for new software features and for defect resolutions and enhancements, and with the frequent involvement
of the customer. Therefore, M&D projects were characterized by higher uncertainty than testing projects.
The age of the project influences the extent to which the team may generate, institutionalize, and
share project-related knowledge. During the initial stages of a project, uncertainty is high because the
project scope may not be fully defined and the capabilities of the project team are untested. Such
uncertainty typically reduces with time.
The fit between the degree of uncertainty and organizational process characteristics influences
organizational performance (Drazin and Van de Ven 1985). Likewise, software project performance may
be enhanced when there is a good fit between the level and kind of project risks involved and the
deployment of specific risk management practices such as project planning and proactive human resource
management (Barki et al. 2001). Accordingly, the impact of project planning, communication
effectiveness and team stability on project performance may differ across project types and project
maturity. In addition, practices that mitigate risk may also influence customer satisfaction. Accordingly,
we examined whether the impact of antecedent variables varied by project type and maturity.
To explore contextual effects, we introduced interaction terms between the antecedent
variables and the corresponding group dummies in equations (1) and (2). To examine whether the impact
of project performance varied across groups in equation 2, we additionally included an interaction of the
group dummy with project performance. Finally, we performed an overall likelihood ratio test to examine
whether the impact of the antecedent variables varied by group (see Table 4, PP4, PP5, OS3 and OS4).
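The construction described above, in which each antecedent is interacted with a group dummy, can be sketched generically as follows (an illustrative NumPy sketch; the variable and function names are ours, not from the paper's estimation code):

```python
import numpy as np

def add_group_interactions(X, names, group):
    """Augment a design matrix with group-dummy interaction columns.

    X: (n, k) array of antecedent variables.
    names: list of k column names.
    group: (n,) array of 0/1 group indicators (e.g., M&D vs. testing).
    Returns the augmented matrix and the matching column names.
    """
    Xc = X - X.mean(axis=0)              # mean-center to ease interpretation
    interactions = Xc * group[:, None]   # elementwise: antecedent x dummy
    X_aug = np.column_stack([Xc, group, interactions])
    aug_names = names + ["group"] + [f"{n}_x_group" for n in names]
    return X_aug, aug_names
```

Each interaction coefficient then captures how the corresponding antecedent's effect differs in the group coded 1, which is exactly what the group-wise likelihood ratio tests evaluate jointly.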
4.2.2 Contextual Effects: Findings
The effects of the antecedent variables on customer satisfaction do not vary across the considered
contexts (see Table 4 – Models OS3 and OS4). However, as described below, the effects of the
antecedents on project performance do vary across the contexts. Importantly, given that project
performance impacts customer satisfaction, this does not rule out the possibility that the total impact of
the antecedent variables on customer satisfaction – the sum of their direct effects and their effects
mediated via project performance – varies across contexts. Sobel tests for indirect effects of project
planning (t-value=9.93, p<0.01), communication effectiveness (t-value=3.15, p<0.01) and team stability
(t-value=6.17, p<0.01) on customer satisfaction mediated by project performance using models PP2 and
OS2 in Table 4 revealed that project performance partially mediates the effects of the antecedent variables on overall satisfaction.3 We now summarize our results for the context variables.
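The Sobel statistic used for these indirect-effect tests divides the product of the two path coefficients by its approximate standard error. A sketch under assumed coefficients (the inputs below are hypothetical; the actual path estimates are in Table 4):

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for an indirect (mediated) effect.

    a, se_a: coefficient and s.e. of the antecedent -> mediator path.
    b, se_b: coefficient and s.e. of the mediator -> outcome path.
    Returns the z statistic and a two-sided normal p-value.
    """
    z = (a * b) / math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical path estimates, for illustration only:
z, p = sobel_test(a=0.50, se_a=0.05, b=0.40, se_b=0.08)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```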
Maintenance & Development [M&D] projects versus Testing projects. We find that the
positive effect of communication effectiveness on project performance is stronger in M&D projects than
in testing projects (β=0.206, p<0.01, Model PP4, Table 4). Effective communication provides the
customer with confidence that the project is on track and that the evolving customer specifications and
interim project goals are being met. Such communication can more strongly influence project
performance and customer satisfaction when uncertainty is high, as is the case in M&D projects.
Second, we find that the impact of project planning on project performance is lower in M&D than
in testing projects (β=(-)0.142, p<0.05, Model PP4, Table 4). This counters arguments in the software
engineering literature that project planning procedures – including task prioritization, work planning, and resource allocation – more strongly impact project performance when project uncertainty is
high (Barki et al. 2001). However, our finding is consistent with arguments from organizational theory,
where high environmental uncertainty is best managed with less formal planning and stronger liaison
mechanisms (Miller 1992; Miller and Friesen 1984). In the presence of external uncertainty, software
projects could benefit from practices that support flexibility and responsiveness, rather than practices that
pre-commit the project to a rigid plan of schedules and tasks (MacCormack and Verganti 2003;
MacCormack et al. 2001).
Finally, we find that the impact of team stability on project performance is higher in testing
projects than in M&D projects (β=(-)0.109, p<0.01, Model PP4, Table 4). Similar to the argument
advanced above, flexibility in the team can help manage uncertainty. Further, as Katz and Allen (1982)
suggested in their seminal work on R&D management, teams may cut down on communicating with
external agencies and become more internally focused when uncertainty is high. In this context, some
instability can help team members avoid the Not-Invented-Here (NIH) syndrome.
3 Specifically, we ran an additional regression similar to OS2 without the project performance variable. In this regression, the coefficients of PPLAN and STAB were significantly higher than in Model OS2. The coefficient for COMM was higher, but not significantly so. Overall, these results provide evidence of partial mediation.
As seen in Table 4, the likelihood ratio test reveals that the overall impact of the antecedent variables (project planning, communication effectiveness, and team stability) on project performance is significantly different across the two classifications, i.e., model PP4 is significantly different from model PP3 (Likelihood Ratio chi-square (3): 16.46, p<0.01).
New versus Mature projects. Our findings suggest that the effects of the antecedent variables on project performance do not vary by project maturity. However, in the context of customer satisfaction (Model OS4 in Table 4), the impact of team stability on satisfaction is stronger (β=(-)0.094; p<0.1) in mature projects than in new ones. Intuitively, managers we interviewed suggested
that, as projects stabilized, teams were often expected to go beyond the basic deliverables in terms of
adding value to the customer. For example, a team may proactively initiate a process to improve testing
or rewrite software code to ease future maintenance. In addition, as the team engages with a client
manager over time, it obtains a better idea of the key drivers of satisfaction that are important to that
manager and can also better manage his or her expectations.
As seen in Table 4, likelihood ratio tests suggest that adding the interaction terms (Models OS3
and OS4 in Table 4) to the main model (OS2 – Table 4) does not significantly improve model fit
(Likelihood Ratio chi-square (4): OS2 to OS3 = 2.43; OS2 to OS4 = 5.34).
4.2.3 Control variables
Consistent with our expectations, the customer’s perceptions of the project team’s productivity
positively influence perceptions of project performance (β=0.034, p<0.01, Model PP3, Table 4). Once the
main effect variables are introduced, perceptions of management overhead do not have a significant influence on project performance. Finally, the coefficient of the interaction term between team stability and size is negative and significant, indicating that, consistent with expectations, team stability has a weaker influence on project performance in larger projects (β=(-)0.04, p<0.05, Model PP3, Table 4).
4.3 Robustness Checks
4.3.1 Common Method Bias
Studies using data from single respondents can raise concerns about common method variance (CMV). However, as argued below, such bias may not be a major concern in our context.
First, our hypotheses frequently involve interaction effects. Contrary to the belief that CMV
uniformly deflates standard errors, recent research indicates it inflates standard errors when interaction
terms are studied. As Siemsen et al. (2009, p. 17) note: “A finding of significant quadratic or interaction
effects connotes that researchers can be confident that they are not the result of CMV.” Because CMV
reduces the likelihood of finding significant effects in an interaction model, this suggests that our findings
are robust to any CMV.
Second, longitudinal designs of the type we employ are less vulnerable to CMV than cross-sectional ones (Sanchez and Viswesvaran 2002; Rindfleisch et al. 2008; Podsakoff and Organ 1986).
Further, on occasions where the team reported to more than one client manager during the time frame of
the project that was pertinent to a data collection event, all those managers were surveyed and their
response scores were averaged. Of the 822 total observations, 50 involved multiple respondents.
Third, we performed the Harman one-factor test (McFarlin and Sweeney 1992) by comparing a model in which all items load on a single factor with the hypothesized measurement model. We found that the overall RMSEA increased from 0.048 (chi-square (44) = 125.4) to 0.14 (chi-square (54) = 916.8). This suggests that common method bias may not be a serious issue.
Fourth, CMV is lowered when respondents possess credible and deep knowledge of the subject matter of the survey (Miller and Roth 1994; Phillips 1981). Our survey respondent was the client manager to whom
the offshore project team directly reported on the client side. The client manager was actively involved in
the project, interacted frequently with the offshore team and was very familiar with the project work
content. As such, the client manager was the person who could provide an objective and credible
evaluation of the team’s performance.
Fifth, a concern is that positive (or negative) halo effects that the respondent associates with the
subject of the study can systematically influence their responses on scale items, thereby leading to CMV.
To adjust for such halo effects, we include control variables that account for the total fraction of the
respondent’s time that is expended on interfacing with the service provider (management overhead
/MGTO), and the respondent’s overall productivity perception of the offshore team (productivity/PROD).
Finally, concerns related to CMV are particularly relevant when survey questions induce social
desirability biases. For example, questions related to the respondent’s managerial style or organizational
climate would likely induce such bias. In our case, respondents had little incentive to provide inaccurate
responses. If the responses were downward-biased – towards lower performance appraisals – that would
discourage the offshore team and ultimately affect work performance. Likewise, if the responses were
upward-biased, the client manager could be held responsible for poor team feedback and management if
the project work was later judged to be shoddy or inefficiently performed. This would ultimately impact
the client manager’s own evaluations within the client organization.
4.3.2 Other Checks
We examined the residuals from our estimated models for outliers. Of the 822 observations, we detected only four error terms that could be considered outliers. Our findings were not sensitive to the removal of these outliers. We also conducted Shapiro-Wilk normality tests to check whether
the residuals conformed to a normal distribution. The computed residuals were normally distributed for
both the project performance and customer satisfaction models. Next, given that some of our independent variables had high correlations, we examined our analysis for multicollinearity. On an ex post basis, using ordinary least squares, we computed the variance inflation factors (VIFs) and the condition number. The VIF for each independent variable was below the prescribed limit of 4 (overall mean VIF 2.25), and the condition number (25.19) was below 30 (Cohen et al. 2003). These tests suggest that our estimates are stable. Following that, we checked whether the findings related to the new versus mature project classification were sensitive to the time cutoff employed. We
employed a two-year cutoff as suggested by Kaka and Sinha (2005) – altering this to a one-year cutoff did
not substantially impact the findings.
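The multicollinearity screen described above can be reproduced generically. One common way to compute both diagnostics uses the identity that VIF_j equals the j-th diagonal element of the inverse predictor correlation matrix, and takes the condition number as the ratio of extreme singular values of the standardized design matrix (an illustrative sketch, not the paper's code; conventions for the condition number vary):

```python
import numpy as np

def collinearity_diagnostics(X):
    """Variance inflation factors and condition number for predictors.

    X: (n, k) matrix of predictors (no intercept column).
    VIF_j is the j-th diagonal element of the inverse correlation matrix;
    the condition number is the ratio of the largest to smallest singular
    value of the column-standardized design matrix.
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    R = np.corrcoef(Xs, rowvar=False)
    vifs = np.diag(np.linalg.inv(R))
    singular_values = np.linalg.svd(Xs, compute_uv=False)
    condition_number = singular_values.max() / singular_values.min()
    return vifs, condition_number
```

VIFs near 1 indicate little shared variance among predictors; values above the chosen limit (4 in the analysis above; 10 in other conventions) flag problematic collinearity.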
Given that similar independent variables are included in both equation (1) and equation (2), we
used the Breusch-Pagan test to examine whether the errors in equations (1) and (2) were significantly
correlated. A significant correlation would imply that running individual regressions may yield biased standard errors. However, the null hypothesis that the correlation is zero was not rejected (chi-square (1) = 0.003; p = 0.953), suggesting that individual regressions were appropriate. Finally, to examine whether
measures of perceived team stability were correlated with actual turnover data, we sourced turnover data for a sub-sample of 117 projects (data for the remaining projects were not available). The
correlation between team stability and turnover was -0.288 (p<0.01), suggesting that perceived team stability and actual turnover were significantly related.
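The Breusch-Pagan (Lagrange multiplier) statistic for cross-equation error independence is n·r², where r is the correlation between the two residual series, compared against chi-square(1). A pure-Python sketch with made-up residuals (not the study's data):

```python
import math

def bp_independence_test(resid1, resid2):
    """Breusch-Pagan LM test of independence between two equations' errors.

    LM = n * r^2, where r is the correlation of the residual series;
    under the null of zero correlation, LM ~ chi-square(1).
    """
    n = len(resid1)
    m1 = sum(resid1) / n
    m2 = sum(resid2) / n
    cov = sum((u - m1) * (v - m2) for u, v in zip(resid1, resid2)) / n
    var1 = sum((u - m1) ** 2 for u in resid1) / n
    var2 = sum((v - m2) ** 2 for v in resid2) / n
    r = cov / math.sqrt(var1 * var2)
    lm = n * r * r
    p_value = math.erfc(math.sqrt(lm / 2.0))  # chi-square(1) upper tail
    return lm, p_value
```

A large LM (small p) would argue for joint estimation, for example via seemingly unrelated regressions; the near-zero statistic reported above supports equation-by-equation estimation.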
5 DISCUSSION AND CONCLUSIONS
Research Implications
We demonstrate that the key antecedents in the project management context – project planning, team stability, and effective communication – directly influence project performance and customer satisfaction.
Beyond these direct effects, we demonstrate that these antecedents interactively influence project
performance. For example, whereas project planning in itself is an important capability, its role in driving
project performance is enhanced when it operates in conjunction with effective communication and team
stability. From a research perspective, this suggests that attention must be paid to not just establishing the
direct effects of capabilities, but also to the notion of configuring a portfolio of capabilities so that their
total effect – both direct and interactive – on performance outcomes of interest is maximized.
Further, while the importance of team stability has been discussed in the literature, our study
documents the specific impact of team stability on project performance and customer satisfaction in
outsourced software projects. We find that team stability positively influences both project performance and customer satisfaction, and also interacts with project size, planning capabilities, and communication
effectiveness in influencing those outcomes. Team stability can substitute for communication frequency and quality in long-term projects because, with a stable team, there is greater mutual understanding and shared knowledge between the team and the client manager. These findings complement recent
research on how team stability supports knowledge management and learning in outsourced software
projects (Narayanan et al. 2009), and open up new perspectives for research related to team stability.
From a research perspective, our analysis highlights the role of mediating variables in influencing
customer satisfaction. In general, the antecedent variables – project planning, communication
effectiveness and team stability – directly affect customer satisfaction and project performance. The
interaction effects involving these variables influence project performance, but not customer satisfaction.
However, these interactive influences may drive customer satisfaction indirectly through increased project
performance. And, as noted earlier, the impact of communication effectiveness on project performance is
muted when team stability is high. From a research perspective, this highlights the importance of careful
theorizing to construct pathways of influence that link the variables of interest in study settings similar to ours. A sole focus on direct effects may yield, at best, an incomplete story.
Our findings also reveal an important tension between the need to manage uncertainty through
structured approaches such as detailed project planning, and the need to build flexible and agile
operations that quickly respond to changes. The planning approach has been highlighted in research that
adopts the software risk management perspective (e.g., Barki et al. 2001). In particular, planning is a trait of the traditional waterfall model of software development, where the emphasis is on the early prediction of
challenges and the proactive design of approaches to tackle them. In some contexts, our findings are more
consistent with work in organizational theory that emphasizes the role of flexibility and agility in enabling
quick responses to rapidly changing market and competitive environments (e.g., MacCormack and
Verganti 2003). Such an approach is characteristic of more recent software development methodologies
such as agile software development and “scrum” in which collaboration and constant adaptation are
emphasized to a greater extent than rigid planning (Highsmith and Cockburn 2001). Future research must
be sensitive to these alternative theoretical perspectives.
Finally, our findings reveal that it is important for future research to consider the role of
contextual influences.
Managerial Implications
We demonstrate that project planning, team stability, and communication significantly impact
project performance and customer satisfaction. This suggests that, while achieving benchmark ratings that
reflect technical capabilities (e.g., SEI-CMM) is important for outsourced software service providers, they
must also emphasize “softer” skills related to communication, and to managing human resources towards
ensuring project team stability. Further, stable teams can also enhance knowledge-sharing and learning
(Narayanan et al. 2009). But, managers must implement the appropriate incentives and build a supportive
team culture for knowledge-sharing to occur (Siemsen et al. 2007).
Our findings further suggest that managers must pay careful attention to the project
characteristics that influence how the antecedent variables impact project performance and customer
satisfaction. For example, managers who are focused on highly structured project planning approaches
may benefit from stepping back and demarcating project contexts where such approaches are most useful,
as opposed to contexts where flexibility and agility are more relevant. Building flexibility is particularly
important in software project settings, which are often characterized by drifting environments caused by
changing customer requirements and expectations (Kreiner 1995). However, managers are often not
comfortable working with less-structured processes (Olsson 2006). Managers may need to be trained in
approaches that allow flexibility, including late locking of requirements, incremental commitment to
decisions as evidenced in stage gate models, dynamic resource allocation, and contingency planning.
Finally, managers must broaden their focus from achieving excellence in just one or two competencies related to project planning, effective communication, and human resource management at the team level. Rather, they should view these, and possibly other, competencies as a portfolio of
capabilities that must be jointly strengthened. As demonstrated by our findings, the effect of a single
competency on the outcomes of importance could depend on the levels of other competencies.
Limitations
Issues related to the cultural fit between the service provider and the client are broadly relevant in
the offshore outsourcing context (Kalainagam et al. 2009). Future research that examines multiple service
provider-customer dyads can focus on how issues related to cultural fit drive project performance and
customer satisfaction, after controlling for other variances across the dyads. Future research can examine
the robustness of our findings in other outsourced software service provider contexts using multi-method,
multi-respondent data. Some of our measures could also be more robust. For example, we were
constrained by the data to use a single item measure of customer satisfaction. Finally, ethnographic and
other qualitative studies may provide valuable insights related to the relationships between software
service providers and clients that are difficult to obtain through statistically-oriented research approaches.
References
Abdel-Hamid, T. K. 1989. The Economics of Software Quality Assurance: A Simulation-Based Case Study. MIS Quarterly. 12(3) 395-411.
Apte, U. M., M. G. Sobol, S. Hanaoka, T. Shimada, T. Saarinen, T. Salmela, A. P. J. Vepsalainen. 1997.
IS Outsourcing Practices in the USA, Japan and Finland: A Comparative Study. Journal of Information
Technology. 12(4) 289-304.
Balasubramanian, S., P. Konana, N. M. Menon. 2003. Customer Satisfaction in Virtual Environments: A
Study of Online Investing. Management Science. 49(7) 871-889.
Barki, H., S. Rivard, J. Talbot. 1993. Toward an Assessment of Software Development Risk. Journal of
Management Information Systems. 10(2) 203-225.
Barki, H., S. Rivard, J. Talbot. 2001. An Integrative Contingency Model of Software Project Risk
Management. Journal of Management Information Systems. 17(4) 37-69.
Bendapudi, N., R. P. Leone. 2002. Managing Business-to-Business Customer Relationships Following
Key Contact Employee Turnover in a Vendor Firm. Journal of Marketing. 66(2) 83-101.
Bendoly, E., M. Swink. 2007. Moderating Effects of Information Access on Project Management
Behavior, Performance and Perceptions. Journal of Operations Management. 25(3) 604-622.
Berry, L. L., A. Parasuraman, V. A. Zeithaml. 1985. Quality Counts in Services, Too. Business Horizons.
May-June 44-52.
Boehm, B. W. 1989. Software Risk Management. IEEE Computer Society Press.
Brown, S. W., T. A. Swartz. 1989. A Gap Analysis of Professional Service Quality. Journal of
Marketing. 53(2) 92-98.
Chapman, R. J. 1998. The Role of System Dynamics in Understanding the Impact of Changes to Key Project Personnel on Design Production within Construction Projects. International Journal of Project Management. 16(4) 235-247.
Cohen, J., P. Cohen, L. S. Aiken, S. G. West. 2003. Applied Multiple Regression - Correlation Analysis
for the Behavioral Sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum.
Curtis, B., H. Krasner, N. Iscoe. 1988. A Field Study of the Software Design Process for Large Systems.
Communications of the ACM. 31(11) 1268-1287.
Cusumano, M., A. MacCormack, C.F. Kemerer, B. Crandall. 2003. Software Development Worldwide:
The State of Practice. IEEE Software. November/December 28-34.
Czepiel, J. A. 1990. Service Encounters and Service Relationships: Implications for Research. Journal of
Business Research. 20(1) 13-21.
Danaher, P. J., J. Mattsson. 1994. Customer Satisfaction During the Service Delivery Process. European Journal of Marketing. 28(5) 5-16.
Deephouse, C., T. Mukhopadhyay, D. R. Goldenson, M. I. Kellner. 1996. Software Processes and Project
Performance. Journal of Management Information Systems. 12(3) 185-203.
Dibbern, J., T. Goles, R. Hirschheim, B. Jayatilaka. 2004. Information Systems Outsourcing: A Survey
and Analysis of the Literature. ACM SIGMIS Database. 35(4) 6-102.
Drazin, R., A. H. Van de Ven. 1985. Alternative Forms of Fit in Contingency Theory. Administrative
Science Quarterly. 30(4) 514–539.
Ethiraj, S. K., K. Prashant, M. S. Krishnan, J. V. Singh. 2005. Where Do Capabilities Come From and
How Do They Matter? A Study in the Software Services Industry. Strategic Management Journal. 26(1)
25-45.
Fornell, C., S. Mithas, F. V. Morgeson. 2009. The Economic and Statistical Significance of Stock Returns on Customer Satisfaction. Marketing Science. 28(5) 820-825.
Fornell, C., S. Mithas, F.V. Morgeson, M.S. Krishnan. 2006. Customer Satisfaction and Stock Prices:
High Returns, Low Risk. Journal of Marketing 70(1) 3-14.
Galbraith, J.R. 1977. Organization Design. Reading, MA: Addison Wesley.
Gopal, A., T. Mukhopadhyay, M. S. Krishnan. 2002. Virtual extension: The Role of Software Processes
and Communication in Offshore Software Development. Communications of the ACM. 45(4) 193-200.
Gopal, A., K. Sivaramakrishnan, M. S. Krishnan, T. Mukhopadhyay. 2003. Contracts in Offshore
Software Development: An Empirical Analysis. Management Science. 49(12) 1671-1683.
Highsmith, J., A. Cockburn. 2001. Agile Software Development: The Business of Innovation. Computer. 34(9) 120-127.
Höffler, F., D. Sliwka. 2002. Do New Brooms Sweep Clean? When and Why Dismissing a Manager
Increases the Subordinates’ Performance. European Economic Review. 47(5) 877-890.
Huckman, R. S., B. R. Staats. 2009. Fluid Teams and Fluid Tasks: The Impact of Team Familiarity and Variation in Experience. HBS Working Paper 09-145.
Katz, R., T.J. Allen. 1982. Investigating the Not Invented Here (NIH) Syndrome: a look at the
performance, tenure and communication patterns of 50 R&D project groups. R&D Management.
12(1), 7-19.
Kaka, N., J. Sinha. 2005. An Upgrade for the Indian IT Services Industry. McKinsey Quarterly (2005 Special Edition) 85-89.
Kalainagam, K., T. Kushwaha, Jan-Benedict E.M. Steenkamp, K. Tuli. 2009. Outsourcing of
Customer-Facing CRM Processes: When and How Does it Impact Shareholder Value?, Working
Paper, UNC Kenan-Flagler Business School.
Kekre, S., M. S. Krishnan, K. Srinivasan. 1995. Drivers of Customer Satisfaction for Software Products:
Implications for design. Management Science. 41(9) 1456-1461.
Kobayashi-Hillary, M. 2005. A passage to India, ACM Queue. 3(1) 54-60.
Konana, P., S. Balasubramanian. 2005. The Social-Economic-Psychological Model of Technology
Adoption and Usage: An Application to Online Investing. Decision Support Systems. 39(3) 505-524.
Kraut, R. E., L. A. Streeter. 1995. Coordination in Software Development. Communications of the ACM.
38(3) 69 – 81.
Kreft, I. G. G., J. de Leeuw, L. S. Aiken. 1995. The Effect of Different Forms of Centering in Hierarchical Linear Models. Multivariate Behavioral Research. 30(1) 1-21.
Kreiner, K. 1995. In Search of Relevance: Project Management in Drifting Environments. Scandinavian
Journal of Management. 11(4) 335-346.
Krishnan, M. S., V. Ramaswamy. 1999. Customer Satisfaction for Financial Services: The Role of
Products, Services, and Information. Management Science. 45(9) 1194-1210.
Kulkarni, V. 2009. Offshore to Win Not Shrink. In J. M. Swaminathan (Ed.), Indian Economic
Superpower Fiction or Future? World Scientific Publishing Company, Singapore.
MacCormack, A., R. Verganti, M. Iansiti. 2001. Developing Products on Internet Time: The Anatomy of a Flexible Development Process. Management Science. 47(1) 133-150.
MacCormack, A., R. Verganti. 2003. Managing the Sources of Uncertainty: Matching Process and
Context in Software Development. The Journal of Product Innovation Management. 20(3) 217 – 232.
McEachern, C. 2005. A Look Inside Offshoring: Customers are Less Satisfied with Offshore Service
Providers. VARBusiness, from www.varbusiness.com/article/showArticle.jhtml?articleId=166403041
McFarlin, D.B., P.D. Sweeney. 1992. Distributive and Procedural Justice as Predictors of Satisfaction
with Personal and Organizational Outcomes. Academy of Management Journal. 35(3) 626-637.
Miller, D. 1992. Environmental Fit Versus Internal Fit. Organization Science. 3(2) 159– 178.
Miller, D., P. H. Friesen. 1984. Organizations: A Quantum View. Englewood Cliffs: Prentice Hall.
Miller, J. G., A. V. Roth. 1994. A Taxonomy of Manufacturing Strategies. Management Science. 40(3) 285-304.
Napoleon, K., C. Gaimon. 2004. The Creation of Output and Quality in Services: A Framework to Analyze Information Technology Worker Systems. Production and Operations Management. 13(3) 245-259.
Narayanan, S., S. Balasubramanian, J. M. Swaminathan. 2009. A Matter of Balance: Specialization, Task
Variety, and Individual Learning in a Software Maintenance Environment. Management Science, 55(11),
1861-1876.
Narayanan, S., J. M. Swaminathan. 2007. Information Technology Offshoring to India: Pitfalls, Opportunities and Trends. New Models of Firms' Restructuring after Globalization, edited by Janez Prasnikar and Andreja Cirman, 327-345. (Translated into Slovenian.)
NASSCOM. 2009. Strategic Review 2009: The IT Industry in India. National Association of Software and Service Companies: New Delhi, India.
Nidumolu, S. R. 1995. The effect of Coordination and Uncertainty on Software Project Performance:
Residual Performance Risk as an Intervening Variable. Information Systems Research. 6(3) 191-217.
Nunnally, J. C. 1978. Psychometric Theory (2nd ed.). McGraw-Hill, New York.
Olsson, N. O. E. 2006. Management of Flexibility in Projects. International Journal of Project Management. 24(1) 66-74.
Phillips, L. W. 1981. Assessing Measurement Error in Key Informant Reports: A Methodological Note on Organizational Analysis in Marketing. Journal of Marketing Research. 18(4) 395-415.
Podsakoff, P. M., D. W. Organ. 1986. Self-Reports in Organizational Research: Problems and Prospects.
Journal of Management. 12(4) 531-544.
Ramasubbu, N., S. Mithas, M. S. Krishnan. 2008a. High Tech, High Touch: The Effect of Employee Skills and Customer Heterogeneity on Customer Satisfaction with Enterprise System Support Services. Decision Support Systems. 44(2) 509-523.
Ramasubbu, N., S. Mithas, M. S. Krishnan, C. F. Kemerer. 2008b. Work Dispersion, Process-Based Learning and Offshore Software Development Performance. MIS Quarterly. 32(2) 437-458.
Rindfleisch, A., A. J. Malter, S. Ganesan, C. Moorman. 2008. Cross-Sectional Versus Longitudinal
Survey Research: Concepts, Findings, and Guidelines. Journal of Marketing Research. 45(3) 261–279.
Rust, R. T., J. J. Inman, J. Jia, A. Zahorik. 1999. What You Don't Know About Customer-Perceived
Quality: The Role of Customer Expectation Distribution. Marketing Science. 18(1) 77-93.
Sanchez, J. I., C. Viswesvaran. 2002. The Effects of Temporal Separation on the Relations Between Self-Reported Work Stressors and Strains. Organizational Research Methods. 5(2) 173-183.
Siemsen, E., S. Balasubramanian, A. Roth. 2007. Incentives that Induce Task-Related Effort,
Helping, and Knowledge Sharing in Workgroups. Management Science. 53(10) 1533-1550.
Siemsen, E., A. Roth, P. Oliveira. 2009. Common Method Bias in Regression Models with Linear, Quadratic, and Interaction Effects. Organizational Research Methods, Online First.
Stewart, D. M. 2003. Piecing Together Service Quality: A Framework for Robust Service. Production and Operations Management. 12(2) 246-266.
Swink, M., S. Talluri, T. Pandejpong. 2006. Faster, Better, Cheaper: A Study of NPD Project Efficiency
and Performance Tradeoffs. Journal of Operations Management. 24(5) 542-562.
Torbiörn, I. 1982. Living Abroad. Wiley, New York.
Venkatraman, N. 1989. Strategic Orientation of Business Enterprises: The Construct, Dimensionality, and
Measurement. Management Science. 35(8) 942-962.
Wallace, L., M. Keil, A. Rai. 2004. Understanding Software Project Risk: A Cluster Analysis.
Information and Management 42(1) 115-125.
Wirtz, J., J. E. G. Bateson. 1999. Introducing Uncertain Performance Expectations in Satisfaction Models
for Services. International Journal of Service Industry Management. 10(1) 82-99.
Wright, R. L. 2005. Successful IT Outsourcing: Dividing Labor for Success. www.Softwaremag.com.
From http://www.softwaremag.com/L.cfm?Doc=2005-04/2005-04outsourcing
FIGURE AND TABLES
Figure 1: Conceptual framework
[Figure: Project planning, communication effectiveness, and team stability influence project performance, with pairwise interactions between project planning and communication effectiveness (H2), project planning and team stability (H3), and communication effectiveness and team stability (H4); the three antecedents and project performance each influence customer satisfaction (H1).]
Table 1: Scales and reliability measures

Project Planning (PPLAN)
  Items: a) Work planning and estimation; b) Managing changes in project schedules/priorities; c) Risk identification and management
  References: Ethiraj et al. (2005), Wallace et al. (2004), Boehm (1989)
  Alpha†: 0.783 (0.755)

Project Performance (PP)
  Items: d) Overall quality of delivery; e) Overall timeliness of delivery; f) Interim goals
  Reference: Deephouse et al. (1996)
  Alpha†: 0.801 (0.791)

Communication Intensity (CI)
  Items: g) Quality of status reports; h) Timeliness of reports
  References: Deephouse et al. (1996), Nidumolu (1995)
  Alpha†: 0.760 (0.738)

Communication Ability (CA)
  Items: i) Oral communication ability; j) Written communication ability
  Reference: Nidumolu (1995)
  Alpha†: 0.798 (0.780)

Team Stability (STAB)
  Items: k) Duration of stay of engineers in team; l) Management of transitions within team
  Reference: Nidumolu (1995)
  Alpha†: 0.716 (0.742)

Other variables in equations (1) and (2) are as follows: SAT: overall satisfaction; COMM: communication effectiveness; SIZE: project size in person-months; DT: M&D versus Testing groups (1 indicates M&D, 0 indicates Testing); NM: New versus Mature project groups (1 indicates New projects, 0 indicates Mature projects); TD: time dummies; MGTO: management overhead; PROD: relative productivity perception.
† The number in parentheses corresponds to reliabilities computed for the 182 unique projects; the number not in parentheses is based on the entire sample.
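The Alpha† column reports Cronbach's alpha for each multi-item scale. As an illustration only (this is not the authors' code, and the function name is our own), the coefficient can be computed from a respondents-by-items score matrix in plain Python:

```python
def variance(xs):
    """Sample variance with ddof = 1."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondent rows (one score per item)."""
    k = len(responses[0])                 # number of items in the scale
    items = list(zip(*responses))         # transpose: one tuple of scores per item
    item_var_sum = sum(variance(item) for item in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Perfectly parallel items yield alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]])
```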
Table 2: Difference in chi-square between a constrained model (covariance fixed to 1) and the free model

Covariance        Constrained   Unconstrained   DF   Difference    Difference
                  model         model                chi-square    p-value
PPLAN with PP     235.51        220.27          1    15.24         <.001
PPLAN with STAB   207.27        204.38          1     2.89         <.001
PPLAN with CI     253.27        249.42          1     3.85         <.001
PPLAN with CA     243.79        236.21          1     7.58         <.001
PP with STAB      217.47        212.02          1     5.45         <.001
PP with CI        285.45        277.32          1     8.13         <.001
PP with CA        273.05        257.91          1    15.14         <.001
STAB with CI      244.51        243.47          1     1.04         <.001
STAB with CA      239.04        234.47          1     4.57         <.001
CI with CA        245.14        243.94          1     1.20         <.001
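Each row of Table 2 compares a model in which the covariance between two constructs is fixed at 1 against the freely estimated model, so the chi-square difference is referred to a chi-square distribution with 1 degree of freedom. As a sketch of that calculation (illustration only, using just the tabled values; for df = 1 the survival function has a closed form via the complementary error function):

```python
import math

def chi2_sf_df1(x):
    """P(X > x) for a chi-square variable with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

# "PPLAN with PP" row: constrained minus unconstrained chi-square
diff = 235.51 - 220.27        # = 15.24, as reported
p = chi2_sf_df1(diff)         # well below .001
```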
Table 3: Descriptive statistics

          Mean     S.D.     COMM     PPLAN    STAB     PP       SAT      LNsize    NM
COMM      4.596    0.438
PPLAN     4.476    0.530    0.661*
STAB      4.222    0.726    0.358*   0.398*
PP        4.463    0.521    0.546*   0.721*   0.453*
SAT       4.558    0.570    0.548*   0.688*   0.479*   0.735*
LNsize    3.274    0.824    0.036    0.070*   -0.065   0.028    0.052
NM        0.217    0.412    -0.066   -0.047   -0.019   -0.019   -0.007   -0.121*
DT        0.595    0.491    0.055    0.019    0.099*   0.044    0.004    -0.081*   0.007
* p<0.05
TABLE 4: Regression results for project performance and overall satisfaction

                 PROJECT PERFORMANCE (PP)                               OVERALL SATISFACTION (SAT)
Model            PP1        PP2        PP3        PP4        PP5        OS1        OS2        OS3        OS4
LNsize           -0.003     -0.017     -0.018     -0.015     -0.017     0.037      0.023      0.022      0.025
NM               0.060      0.069*     0.077**    0.081**    -0.109     -0.038     -0.038     -0.039     -0.288
DT               0.009      -0.002     -0.002     0.149      -0.001     -0.014     -0.041     0.238      -0.038
PROD             0.119***   0.033***   0.034***   0.034***   0.034***
MGTO             0.033**    -0.009     -0.003     -0.003     -0.002
PP               N/A        N/A        N/A        N/A        N/A                   0.462***   0.461***   0.441***
PPLAN                       0.503***   0.556***   0.660***   0.549***              0.272***   0.228***   0.289***
STAB                        0.129***   0.125***   0.200***   0.120***              0.129***   0.135***   0.146***
COMM                        0.119***   0.108***   -0.039     0.109***              0.116***   0.192***   0.091**
PPLAN*COMM                             0.160***   0.172***   0.158***
PPLAN*STAB                             0.080**    0.085**    0.079**
COMM*STAB                              -0.203***  -0.202***  -0.202***
STAB*LNsize                            -0.040**   -0.041**   -0.039*
DT*PP                                             N/A                                         0.003
DT*PPLAN                                          -0.142**                                    0.066
DT*STAB                                           -0.109***                                   -0.011
DT*COMM                                           0.206***                                    -0.118
NM*PP                                                        N/A                                         0.090
NM*PPLAN                                                     0.033                                       -0.070
NM*STAB                                                      0.019                                       -0.094*
NM*COMM                                                      -0.009                                      0.121
CONSTANT         3.953***   1.134***   0.928***   0.806***   0.969***   4.489***   0.146      -0.018     0.206
AIC              1032       570        538        527        543        1308       630        635        632
LL               -499       -265       -245       -237       -244       -639       -296       -295       -293
Residual         0.380***   0.310***   0.303***   0.302***   0.303***   0.463***   0.334***   0.333***   0.332***
LR chi-square               467.42***  40.66***   16.46***   0.78                  686.21***  2.43       5.34

Notes: (1) For the LR test, we compare (a) PP2 with PP1, (b) PP3 with PP2, (c) PP4 and PP5 with PP3, (d) OS2 with OS1, and (e) OS3 and OS4 with OS2; (2) *p < 0.1, **p < 0.05, ***p < 0.01; (3) N = 822; (4) In addition to the variables shown in the table, nine time dummies were added to both the project performance and overall satisfaction equations to account for common effects of the period in which the data were collected; (5) N/A indicates that the variable was not applicable in the regression.
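The LR chi-square row equals twice the log-likelihood gain between the nested models listed in note (1). As a quick consistency check against the (rounded) LL row of the table, written as an illustration rather than the authors' procedure:

```python
# Rounded log-likelihoods as reported in the LL row of Table 4
ll = {"PP1": -499, "PP2": -265, "PP3": -245, "PP4": -237}

def lr_stat(ll_restricted, ll_full):
    """Likelihood-ratio test statistic: twice the log-likelihood gain."""
    return 2.0 * (ll_full - ll_restricted)

# Approximately reproduce the reported statistics (LLs in the table are rounded):
lr_pp2_vs_pp1 = lr_stat(ll["PP1"], ll["PP2"])   # ~468 vs. the reported 467.42
lr_pp3_vs_pp2 = lr_stat(ll["PP2"], ll["PP3"])   # ~40 vs. the reported 40.66
lr_pp4_vs_pp3 = lr_stat(ll["PP3"], ll["PP4"])   # ~16 vs. the reported 16.46
```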
APPENDIX – LIST OF ITEMS
Agreement with the following statements was rated on a five-point scale from 1 (very low) to 5 (very high),
with respect to the offshore service provider team.
PROJECT PLANNING (PPLAN)
Your team planned and estimated their work well.
Your team managed changes in priorities and schedules well.
Your team identified and assessed project risks early and adopted workarounds to meet the project goals.
PROJECT PERFORMANCE (PP)
You are satisfied with the quality of the deliverables of your team.
You are satisfied with the timeliness of the deliverables of your team.
Your team proactively meets the interim expectations of the project.
COMMUNICATION EFFECTIVENESS (COMM)
Your team has consistently provided the status reports you need to manage your work.
Your team communicates frequently with you through the use of emails, conference calls, etc.
The ability of your team to communicate clearly through oral means is high.
The ability of your team to communicate clearly through writing is high.
TEAM STABILITY (STAB)
You are satisfied with how long engineers remain on your team.
When an engineer leaves your team, you are satisfied with how the transition is managed.
PRODUCTIVITY PERCEPTIONS (PROD)
Compared to your local engineering team, the productivity of the service provider team is:
Equal to Local team
90 - 99% of Local Team
80 - 89% of Local Team
70 - 79% of Local Team
Less than 70% of Local Team
MANAGEMENT OVERHEAD (MGTO)
What percentage of your time is spent in managing the service provider team?
< 10%
10% to 20%
20% to 30%
30% to 40%
40% to 50%
>50%
OVERALL SATISFACTION (SAT)
How would you rate your overall satisfaction in using the team?