Université catholique de Louvain
Final report for the course of research methodology: "Conjoint Analysis Methodology"
Authors: González Calleros Juan Manuel, Stanciulescu Adrian
Supervisor: Christophe Benavent

1. Introduction

When we have to explain the preferences of a consumer among different objects with the help of the attributes and characteristics that describe those objects [6], the appropriate tool is conjoint analysis, also called experimental choice analysis. Conjoint analysis is a widely applied methodology for measuring and analyzing consumer preferences. It was born from psychometric work that tries to reveal the behavior of the consumer along several dimensions: as a social being, as an individual and as an economic agent [6]. The term "conjoint analysis" was coined by Green and Srinivasan to refer to a number of paradigms in psychology, economics and marketing that are concerned with the quantitative description of consumer preferences or value trade-offs [2]. Several authors classify conjoint analysis as an explicative method because it explains an ordinal variable (preference) through several nominal independent variables (attributes) [6]. The ambition of the method is to explain and describe the behavior of the consumer [6]. It is useful for understanding why consumers prefer or choose certain products or services, possibly under new conditions [3].

2. Objectives

Conjoint analysis has two major objectives: (i) to determine the contribution of predictor variables (attribute levels) to overall consumer preferences, and (ii) to establish a valid model of consumer judgments that can predict consumer acceptance of any combination of attributes, even combinations not originally evaluated by consumers [4], [6]. To achieve these objectives, coefficients called utilities (or part-worths) are estimated for the attribute levels that make up the alternatives of interest. The measured overall preferences for product profiles are decomposed into these part-worth utilities according to an a priori combination rule, which specifies how subjects are assumed to integrate the separate part-worth utilities to arrive at an overall preference or choice. The profile utility is the overall utility or "worth" of a profile, calculated by combining the utilities of all attribute levels defined in that profile according to the assumed combination rule [3].

3. Theoretical presentation

The name "conjoint analysis" refers to the study of joint effects; marketing applications study the joint effects of multiple product attributes on product choice. Several alternative conjoint analysis methodologies exist, and they can be characterized along the following dimensions:

• Stimulus construction: Two Factor at a Time; Full Factorial Design; Fractional Factorial Design
• Data collection: Two Factor at a Time Tradeoff Analysis; Full Profile Concept Evaluation
• Model type: Compensatory and Non-Compensatory Models; Part-Worth Function; Vector Model; Mixed Model; Ideal Point Model
• Measurement scale: Rating Scale; Paired Comparisons; Constant Sum; Rank Order
• Estimation procedure: Metric and Non-Metric Regression; MONANOVA; PREFMAP; LINMAP; Nonmetric Tradeoff; Multiple Regression; LOGIT; PROBIT; Hybrid; TOBIT; Discrete Choice
• Simulation analysis: Maximum Utility; Average Utility (Bradley-Terry-Luce); LOGIT; PROBIT
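As a simple worked illustration of the additive combination rule referred to in the Objectives section (the attributes, levels and part-worth values below are invented and do not come from the report), the overall utility of a profile is simply the sum of the part-worths of the levels that define it:

    U(brand A, price $10, 2-year warranty)
      = u_brand(A) + u_price($10) + u_warranty(2 years)
      = 0.8 + 1.2 + 0.3
      = 2.3

Under this rule, the profile with the highest sum of part-worths is predicted to be the preferred one, even if that particular combination was never shown to respondents.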
3.1. Conjoint measurement

Consumers or decision makers often think in terms of concepts, objects or solutions rather than in terms of relative numerical values. Conjoint measurement (as distinguished from conjoint analysis) permits the use of rank or rating data obtained when evaluating pairs of attributes or attribute profiles (rather than single attributes). Based on this rank or rating input, conjoint measurement procedures identify a mathematical function of the m brand attributes which: (1) is interval scaled (produces a set of interval-scaled output), (2) best corresponds to the set of subjective evaluations (ordinal judgments) of the brand alternatives made by the respondent, and (3) is either a categorical or a polynomial function in the attributes for rank-order data.

The conjoint measurement model assumes that: (1) the set of objects being evaluated is at least weakly ordered (it may contain ties), (2) each object evaluated may be represented by an additive combination of separate utilities for the individual attribute levels, and (3) the derived evaluation model is interval scaled and comes as close as possible to recovering the original rank-order (non-metric) or rating (metric) input data. The power of conjoint measurement to convert non-metric input into interval-scaled output has led to many methodological advancements, including multidimensional scaling and conjoint analysis.

Several different implementations of conjoint measurement appear in conjoint analysis algorithms and computer programs. These implementations reflect both algorithmic differences and alternative approaches to data collection and measurement. The most notable are categorical conjoint measurement, monotone ANOVA models, Ordinary Least Squares (OLS) regression methods and linear programming methods. In the following, only the OLS regression approach is discussed; it offers a simple yet robust way of deriving alternative forms of respondent utilities (part-worth, vector, or ideal point models). The attractiveness of the OLS model is partly due to its ability to scale respondent choices using rating scales rather than rankings. The ability to implement designs with larger numbers of attributes and levels (through fractional factorial designs) has made this methodology the de facto standard for conjoint analysis.

The objective of OLS conjoint analysis is to produce a set of additive part-worth utilities (vector or ideal point utilities may also be estimated) that identify each respondent's preference for each level of a set of product attributes. In application, the OLS model solves for utilities using a dummy matrix of independent variables. Each independent variable indicates the presence or absence of a particular attribute level, and the dependent variable is the respondent's evaluation of one of the profiles described by the independent variables. The model is expressed as

    y = XB + e

where:
• B = the beta weights (part-worths) estimated in the regression,
• X = the matrix of dummy variables identifying the levels of the factorial design,
• y = the ranking or rating evaluations of the respondent, and
• e = the error term.

The first step in the analysis is to develop either a full or a fractional factorial design. The use of fractional factorial designs permits the estimation of a parameter for the main effect of each attribute included in the analysis. The utilities are additive.
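The dummy-coding and OLS estimation step described above can be sketched in a few lines of numpy; the attributes, levels and the respondent's ratings below are invented for illustration and are not part of the original report:

    import numpy as np

    # Hypothetical attributes and levels (illustrative only).
    levels = {"brand": ["A", "B"], "price": ["$10", "$15", "$20"]}

    # Full factorial design: every combination of attribute levels (6 profiles).
    profiles = [(b, p) for b in levels["brand"] for p in levels["price"]]

    # Dummy-code each profile: an intercept plus j-1 columns per attribute
    # with j levels (the omitted level acts as the reference).
    def dummy_row(brand, price):
        return [1.0,                              # intercept
                1.0 if brand == "B" else 0.0,     # brand: 1 dummy (2 levels)
                1.0 if price == "$15" else 0.0,   # price: 2 dummies (3 levels)
                1.0 if price == "$20" else 0.0]

    X = np.array([dummy_row(b, p) for b, p in profiles])

    # One respondent's ratings of the six profiles (invented numbers).
    y = np.array([9.0, 6.0, 3.0, 7.0, 5.0, 2.0])

    # Ordinary least squares: solve y = XB + e for B.
    B, *_ = np.linalg.lstsq(X, y, rcond=None)

    print("intercept:", B[0])
    print("part-worth of brand B (vs. brand A):", B[1])
    print("part-worth of price $15 (vs. $10):", B[2])
    print("part-worth of price $20 (vs. $10):", B[3])

Each estimated coefficient is the part-worth of a level expressed relative to the omitted reference level of its attribute, which is why j-1 dummy columns suffice for an attribute with j levels.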
3.2. The measurement of preference

The measurement of preference is an established part of consumer research, grounded in expectancy-value models of attitude theory and measurement. In conjoint analysis, preference is examined for a set of brands or other choice alternatives that are described by an inventory of attributes. The domain of preference research in conjoint analysis is both broad and multifaceted. It extends to such diverse issues as how many attributes should be measured; the influence of the number of attribute levels; the appropriateness of measuring choice behavior rather than rating or ranking choice alternatives; and the advantages of constructing individualized rather than generic attribute sets. A second line of preference research has focused on the appropriateness of alternative scaling methodologies for measuring preferences. Modeling the decision process itself is a third major area of research; it includes the appropriateness of alternative decision models (compensatory, conjunctive, disjunctive, Elimination by Aspects, etc.) that may be used singly or in combination to predict preference; the form of the utility preference model estimated for a given attribute (part-worth, linear, or ideal point); and the type of simulation model used to estimate choice preferences.

3.3. Preference models

Utility preference models are the mathematical formulations that define the utility levels for each attribute. In practice, an attribute is modeled as either a piecewise linear (part-worth), linear (vector), or curvilinear (ideal point) function.

The Part-Worth Model is the simplest of the utility estimation models. It represents attribute utilities by a piecewise linear curve, formed by a set of straight lines that connect the point estimates of the utilities for the attribute levels. The part-worth function is defined as

    s_j = \sum_{p=1}^{t} f_p(y_{jp})

where:
• s_j = preference for the jth stimulus object,
• f_p = the function giving the part-worth of each level of the pth attribute, and
• y_{jp} = the level of the pth attribute for the jth stimulus object.

The part-worth model therefore defines a different utility (part-worth) value for each of the j levels of a given attribute. Because of design considerations, most conjoint studies constrain the number of levels to fewer than 5, although in practice the number of levels varies from 2 to 9 or more. The implications of specifying a given preference model (part-worth, linear, or ideal point) extend beyond the shape of the preference curve being modeled: each preference model requires that a different number of parameters be estimated. The part-worth model requires that each level of an attribute be defined by a distinct dummy-variable column within the design matrix; as would be expected, a total of j-1 dummy variables are needed to estimate j levels.

The Vector Model is represented by a single linear function that assumes preference increases as the quantity of attribute p increases (preference decreases if the function is negative). Preference for the jth stimulus is defined as

    s_j = \sum_{p=1}^{t} w_p y_{jp}

where:
• w_p = the individual's weight assigned to the pth attribute (one weight is derived for each attribute), and
• y_{jp} = the level of the pth attribute for the jth stimulus.

For an attribute with four levels, the vector model would appear as a straight line with the levels lying on the line. The vector model requires that a single parameter be estimated for each variable treated as a vector.
In contrast to the part-worth model, the vector model defines the attribute not as a series of dummy variables, but as a single linear variable whose values are the measured values or levels associated with the attribute.

The Ideal Point Model is operationalized as a curvilinear function that defines an optimum, or ideal, amount of an attribute. It is appropriate for many qualitative attributes, such as those associated with taste or smell. The ideal point model establishes an inverse relationship between preference and the weighted squared distance d_j^2 between the location of the jth stimulus and the individual's ideal point. It is expressed as

    d_j^2 = \sum_{p=1}^{t} w_p (y_{jp} - x_p)^2

where:
• d_j^2 = the weighted squared distance of the jth stimulus from the individual's ideal point (preference is inversely related to this distance),
• x_p = the individual's ideal point on the pth attribute,
• w_p = the individual's weight assigned to the pth attribute (one weight is derived for each attribute), and
• y_{jp} = the level of the pth attribute for the jth stimulus.

For an attribute with three levels, the ideal-point model would appear as a curve whose center is higher than either end, the highest point being the ideal quantity of the attribute.

Mathematically, the implication of specifying each of the models ultimately comes down to the number of parameters that must be estimated. The vector model treats the variable y_{jp} as a continuous (interval-scaled) variable, so only t parameters (one weight w_p per attribute, p = 1, ..., t) must be estimated. For the ideal point model, 2t parameters must be estimated (w_p and x_p for each attribute), and for the part-worth model, (q-1)t parameters must be estimated, where q is the number of levels of each of the t attributes.
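To make the three preference models concrete, the following small Python sketch (all levels, weights, ideal points and part-worths are invented; none of it comes from the report) evaluates a single stimulus under each model:

    # One stimulus described on t = 2 hypothetical numeric attributes.
    y = [3.0, 1.0]                                  # y_jp for attributes p = 1, 2

    # Part-worth model: s_j = sum_p f_p(y_jp); one utility per observed level,
    # i.e. (q-1)*t free parameters once a reference level is fixed.
    part_worths = [{1.0: 0.2, 2.0: 0.5, 3.0: 0.9},  # f_1
                   {1.0: 0.4, 2.0: 0.1}]            # f_2
    s_partworth = sum(f[level] for f, level in zip(part_worths, y))

    # Vector model: s_j = sum_p w_p * y_jp; t parameters (one weight per attribute).
    w = [0.3, 0.2]
    s_vector = sum(wp * level for wp, level in zip(w, y))

    # Ideal point model: preference falls as the weighted squared distance
    # d_j^2 = sum_p w_p * (y_jp - x_p)^2 from the ideal point grows; 2t parameters.
    x_ideal = [2.0, 1.5]
    d_squared = sum(wp * (level - xp) ** 2 for wp, level, xp in zip(w, y, x_ideal))

    print(s_partworth, s_vector, d_squared)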
4. Analysis Procedure

Developing a conjoint analysis involves the following steps:

1. Select the attributes (characteristics) that are assumed to influence the choice behavior of interest; this is the first task in conjoint analysis. Defining the attributes and the population creates the framework of the study [6].
2. Classify the selected attributes, that is, determine the point of reference (if there is one) for the evaluation of each attribute and its unit of measure [6]. This measure may consist of numerical or categorical levels [3].
3. Choose the form in which the combinations of attributes are to be presented to respondents [7], according to some statistical design (a small sketch of such a design follows this list). The studied products are thus described in terms of profiles, each profile being a combination of attribute levels for the selected attributes [3].
4. Collect the data. Data collection plays an important role, as it defines how the information is gathered. Five methods are described in [6]:
   • Trade-off tables: the respondent has to choose among predefined responses.
   • Full-profile techniques: each respondent is exposed to a group of attribute combinations and, after selecting a combination (concept), is asked to rate that choice.
   • Compositional techniques: a model that measures the importance of the attributes and their evaluation.
   • Hybrid techniques: each respondent receives both a self-explicated evaluation task and a small set of profiles to evaluate.
   • Adaptive conjoint analysis: the respondent is shown profiles and each time chooses between two options.
5. Before searching for a method to reduce the collected data, it is mandatory to analyze the attributes and verify that they constitute:
   • a determinant set of attributes: they must be important and differentiating;
   • independent attributes: the correlation between attributes is measured, and a low value is expected before continuing;
   • manipulable attributes: they must be understandable and identifiable.
6. Finally, select the technique to be used to analyze the collected data. The part-worth model is one of the simpler models used to express the utilities of the various attributes [7], [8].
7. Process the data with statistical software written specifically for conjoint analysis, which comes in a variety of forms. For example, Sawtooth Software offers a suite of conjoint software packages: Adaptive Conjoint Analysis (ACA), Traditional Full-Profile Conjoint Analysis (CVA), Choice-Based Conjoint (CBC) and Partial-Profile CBC [9]. These packages are discussed in the next section.
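As a minimal sketch of the statistical design mentioned in step 3 (the attributes and levels below are hypothetical, not taken from the report), the following Python snippet builds a full factorial for three 2-level attributes and then keeps the classical half-fraction defined by C = A*B, which still allows all three main effects to be estimated:

    from itertools import product

    # Three hypothetical 2-level attributes, coded -1 / +1.
    attributes = {"brand":    {-1: "A", +1: "B"},
                  "price":    {-1: "$10", +1: "$15"},
                  "warranty": {-1: "1 year", +1: "2 years"}}

    # Full factorial: every combination of coded levels (2**3 = 8 runs).
    full_factorial = list(product((-1, +1), repeat=3))

    # Half-fraction with defining relation C = A*B (4 runs).
    half_fraction = [(a, b, c) for (a, b, c) in full_factorial if c == a * b]

    # Translate the coded runs back into readable product profiles.
    for run in half_fraction:
        profile = {name: level_map[code]
                   for (name, level_map), code in zip(attributes.items(), run)}
        print(profile)

With only half of the profiles, each respondent's task stays manageable while the main effect of each attribute remains estimable, which is exactly the trade-off that fractional factorial designs are used for.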
4.1 Selecting a package to process the data

The application of conjoint analysis techniques implies the study of the joint effects of multiple product attributes on product preferences or product choice. In a conjoint study the object of study is utility, that is, the preference of an individual for one object over others because it is more useful to him, having a greater usage value [6]. It is known that the consumer cannot choose an ideal object: the domain is limited in size, so the decision is restricted to a finite set of objects. If it is assumed that the individual chooses on the basis of the attributes of the object, each attribute represents a partial utility of the object [6]; the utility of an object for a consumer (its usage value) is therefore a function of these partial utilities.

It is paradoxical that many new developments in the conjoint analysis field have made the methods better than ever, yet they have also made it more difficult to choose among them. Many differentiating limitations, which earlier caused researchers to reject one conjoint approach in favor of another, have been overcome, thus blurring the lines between the unique capabilities of the approaches. Sawtooth Software is one of the most important developers of analytical and survey interviewing systems, which are among the most widely used in the world. They have designed software packages that bring unique advantages to different research situations [9].

4.1.1 Adaptive Conjoint Analysis (ACA)

ACA's main advantage is its ability to measure more attributes than is advisable with traditional full-profile conjoint. In ACA, respondents do not evaluate all attributes at the same time, which helps solve the problem of "information overload" present in many full-profile studies. The idea is that respondents cannot effectively process more than about 6 attributes at a time in a full-profile context. ACA can include up to 30 attributes, although typical ACA projects involve about 8 to 15 attributes. With 6 or fewer attributes, ACA's results are similar to the full-profile approach, though there is little compelling reason to use ACA in these situations. In terms of limitations, the foremost is that ACA must be computer-administered: the interview adapts to the respondent's previous answers, which cannot be done with paper-and-pencil. Like most traditional conjoint approaches, ACA is a main-effects model. This means that part-worth utilities for attributes are measured in an "all else equal" context, without the inclusion of attribute interactions. ACA also exhibits another limitation with respect to pricing studies: when price is included as just one of many variables, its importance is likely to be understated, and the degree of understatement increases as the number of attributes studied increases.

ACA is a hybrid approach, combining stated evaluations about attributes and levels with conjoint pairwise comparisons. The first part of the interview approximates the self-explicated approach: respondents rank (or rate) attribute levels, and then assign a weight (importance) to each attribute. Using the information from the self-explicated section, ACA then presents trade-off questions. Two or more products are shown, and the respondents indicate which is preferred, using a relative rating scale. The product combinations are tailored to each respondent, to ensure that each is relevant and meaningfully challenging. Each of the products is displayed in partial profile, meaning that only a subset (usually two or three) of the attributes is shown in any given question. Because of the self-explicated introductory section, the adaptive nature of the questionnaire, and the ratings-based conjoint trade-offs, ACA is able to stabilize estimates of respondents' preferences for more attributes using smaller sample sizes than the other conjoint methods. ACA does well for modeling high-involvement purchases, where respondents focus on each of a number of product attributes before making a carefully considered decision. Purchases in low-involvement product categories described by only a few attributes, along with pricing research studies, are better handled using another method.

4.1.2 Traditional Full-Profile Conjoint Analysis (CVA)

Full-profile conjoint has been a mainstay of the conjoint community for a long time. The full-profile approach is useful for measuring up to about six attributes; that number varies from project to project depending on the length of the attribute level text, the respondents' familiarity with the category, and whether attributes are shown as prototypes or pictures. CVA is designed for paper-and-pencil studies, whereas ACA must be administered by computer; CVA can also be used for computerized interviews in a CAPI or Internet survey. CVA calculates a set of part-worths for each individual, using traditional full-profile card-sort (either ratings or rankings) or pairwise ratings. Up to 30 attributes with 15 levels each can be measured. Through the use of compound attributes, CVA can in a limited way measure interactions between attributes such as brand and price. Compound attributes are created by including all combinations of levels from two or more attributes; for example, two attributes each with two levels can be combined into a single four-level attribute (sketched below). However, interactions can only be measured in a limited sense with this approach; interactions between attributes with more than 2 or 3 levels each are better measured using CBC.
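A compound attribute of the kind just described can be sketched in a couple of lines of Python (the brand and price levels are invented for illustration):

    from itertools import product

    # Two hypothetical 2-level attributes merged into one 4-level compound
    # attribute, so that the brand x price interaction is absorbed into the
    # compound attribute's own part-worths.
    brand = ["Brand A", "Brand B"]
    price = ["$10", "$15"]

    compound_levels = [f"{b} at {p}" for b, p in product(brand, price)]
    print(compound_levels)
    # ['Brand A at $10', 'Brand A at $15', 'Brand B at $10', 'Brand B at $15']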
CVA can design pairwise conjoint questionnaires or single-concept (card-sort) designs. Showing one product at a time encourages respondents to evaluate products individually, rather than in direct comparison with a competitive set of products; it focuses more on probing the acceptability of an offering than on the differences between competitive products. If a comparative task is desired, CVA's pairwise approach may be used. Another alternative is to conduct a card-sort exercise. However, although respondents view one product per card, in the process of evaluating the deck they usually compare the cards side-by-side and in sets. Because respondents see the products in full profile (all attributes at once), they tend to use simplification strategies if faced with too much information to process: respondents may key on two or three salient attributes and largely ignore the others.

4.1.3 Choice-based Conjoint (CBC)

CBC interviews closely mimic the purchase process for products in competitive contexts. Instead of rating or ranking product concepts, respondents are shown a set of products on the screen (in full profile) and asked to indicate which one they would purchase:

   If you were shopping for a credit card, and these were your only options, which would you choose?
   • VISA: $40 annual fee, 10% interest rate, $2,000 credit limit
   • Mastercard: $20 annual fee, 18% interest rate, $5,000 credit limit
   • Discover: no annual fee, 14% interest rate, $1,000 credit limit
   • NONE: I would defer my purchase

As in the real world, respondents can decline to purchase in a CBC interview by choosing "None." If the aim of conjoint research is to predict product or service choices, it seems natural to use data resulting from choices. Choice tasks are more immediate and concrete than abstract rating or ranking tasks: they ask respondents how they would choose now, given a set of potential offerings. Choice tasks show sets of products, and therefore mimic buying behavior in competitive contexts. Because choice-based questions show sets of products in full profile, they encourage even more respondent simplification than traditional full-profile questions: attributes that are important will get even greater emphasis (importance), and less important factors will receive less emphasis, relative to CVA or ACA. CBC can measure up to 30 attributes with 15 levels each.

In contrast to ACA and CVA (which automatically provide respondent-level part-worth preference scores), CBC results have traditionally been analyzed at the aggregate, or group, level. But with the availability of latent class and hierarchical Bayes (HB) estimation methods, group-based and individual-level analyses are now accessible and practical. There are a number of ways to analyze choice results:

• Aggregate Choice Analysis was the first, and for a long time the only, way of analyzing CBC results, prior to advances in algorithms and the availability of faster computers. It was argued that aggregate analysis could permit the estimation of subtle interaction effects (for example, between brand and price), thanks to its ability to leverage a great deal of data across respondents. For most commercial applications, respondents cannot provide enough information, even with ratings- or sorting-based approaches, to measure interactions at the individual level. While this advantage seems to favor aggregate analysis of choice data, academics and practitioners have argued that consumers have unique preferences and idiosyncrasies, and that aggregate-level models, which assume homogeneity, cannot be as accurate as individual-level models.
• Latent Class Analysis addresses respondent heterogeneity in choice data. Instead of developing a single set of part-worths to represent all respondents (aggregate analysis), Latent Class simultaneously detects homogeneous respondent segments and calculates segment-level part-worths. If the market is truly segmented, Latent Class can reveal much about market structure (including group membership for respondents) and improve predictability over aggregate choice models. Subtle interactions can also be modeled in Latent Class, which seems to offer a compromise position, leveraging the benefits of aggregate estimation while recognizing market heterogeneity.
• ICE (Individual Choice Estimation) is now largely an interesting historical side note, though some researchers still enthusiastically use this technique for individual-level estimation from CBC data. Before computers became fast enough to permit the estimation of moderate to large CBC datasets under hierarchical Bayes (HB) within a reasonable amount of time, ICE offered a compelling speed advantage. Computers are now fast enough that this is usually not an issue, and in general the industry has embraced HB.
• HB (Hierarchical Bayes) Estimation offers a very powerful way of "borrowing" information from every respondent in the data set to improve the accuracy and stability of each individual's part-worths. It has consistently proven successful in reducing the IIA (Independence from Irrelevant Alternatives) problem and in improving the predictive validity of both individual-level models and market simulation share results. HB estimation can employ either main-effects models or models that additionally include interaction terms; however, researchers are finding that many (if not most) of the interaction effects discovered using aggregate CBC analysis were actually due to unrecognized heterogeneity.

4.1.4 Partial-Profile CBC

Many researchers who favor choice-based conjoint over ratings-based approaches have looked for ways to increase the number of attributes that can effectively be measured using CBC. One solution that has been gaining momentum over the last few years is partial-profile CBC. With partial-profile CBC, each choice question includes a subset of the total number of attributes being studied. These attributes are randomly rotated into the tasks, so across all tasks in the survey each respondent typically considers all attributes and levels. The problem with partial-profile CBC is that the data are spread quite thin, because each task has many attribute omissions and the response is still the less informative (though more natural) 0/1 choice. As a result, partial-profile CBC requires larger sample sizes to stabilize results (relative to ACA), and individual-level estimation under HB does not always produce stable individual-level part-worths. Despite these shortcomings, some researchers who used to use ACA for studying many attributes have shifted to partial-profile choice. The individual-level parameters have less stability than with ACA, but if the main goal is achieving accurate market simulations (and large enough samples are used), some researchers are willing to give up the individual-level stability. Partial-profile CBC results tend to reflect greater discrimination between the most and least important attributes relative to ACA, though it is not a given that this means improved accuracy in predicting real-world choices.
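Several of the approaches above are ultimately judged by the quality of their market simulations, in which estimated part-worths are converted into shares of preference. The following Python sketch (not Sawtooth's implementation; the part-worths and products are invented) shows the logit simulation rule in its simplest, single-respondent form:

    import math

    # Hypothetical part-worths for two attributes (brand, annual fee).
    part_worths = {("brand", "VISA"): 0.6, ("brand", "Mastercard"): 0.2,
                   ("fee", "$40"): -0.8, ("fee", "$20"): -0.3, ("fee", "None"): 0.5}

    # Two competing product profiles built from those levels.
    products = {"Card 1": [("brand", "VISA"), ("fee", "$40")],
                "Card 2": [("brand", "Mastercard"), ("fee", "None")]}

    # Additive utility of each product profile.
    utilities = {name: sum(part_worths[attr] for attr in profile)
                 for name, profile in products.items()}

    # Logit rule: share of preference proportional to exp(utility).
    denom = sum(math.exp(u) for u in utilities.values())
    shares = {name: math.exp(u) / denom for name, u in utilities.items()}
    print(shares)

In practice a simulator would apply this rule respondent by respondent (using individual-level part-worths from ACA, CVA or HB) and average the resulting shares across the sample.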
4.1.5 Choosing a method

The question that arises is which method should be chosen. The answer is that the chosen method should be the one that most adequately reflects how buyers make decisions in the actual marketplace. This includes not only the competitive context, but also the way in which products are described (text), displayed (multimedia or physical prototypes), and considered. If there is a need to study many attributes, ACA or possibly partial-profile CBC should be considered. If there is a need to include attribute interactions in the models, CBC should probably be used. In many cases survey populations do not have access to PCs, and it may be too expensive to bring PCs to them, or vice versa; if the study must be administered paper-and-pencil, consider using CVA or CBC with its paper-and-pencil module. When dealing with relatively small sample sizes, one should be cautious about using CBC, unless respondents are able to answer more than the usual number of choice tasks; ACA and CVA are able to stabilize estimates using relatively smaller samples than CBC.

Many researchers include more than one conjoint method in their surveys. For example, some studies need to measure a dozen or more attributes and also require brand-specific demand curves. ACA followed by CBC can solve this problem within a single questionnaire: ACA would include all the attributes, while brand, price, and a few key performance variables would be studied using CBC. ACA provides the product design and feature importance model, while CBC provides price sensitivity estimates for each brand and a powerful pricing simulator.

5. Application Domain

Conjoint analysis has been used in the following domains:

• virtual reality studies [3], [5];
• analysis of market segmentation, optimization of a product under production constraints, simulation of market strategies, the causes of the poor circulation of French daily newspapers, evaluation of a company's services, extension of product lines, the study of preferences for groups of objects, the study of consumer behavior, and the study of individual decision making [6];
• international comparison of consumers from different countries [1];
• marketing applications ranging from new product/concept identification, pricing, advertising and distribution to family decision making, tourism, tax analysis, time management, direct foreign investment and medicine [8].

Bibliography

[1] Böcker, F., Hausruckinger, G., Herker, A., "Pays d'origine et qualités écologiques comme caractéristique des biens de consommation durables : une analyse comparative du comportement des consommateurs français et allemands", Recherche et Applications en Marketing, Vol. VI, N° 3, pp. 21-30, 1991.
[2] Dijkstra, J., Roelen, W.A.H. and Timmermans, H.J.P., "Conjoint Measurement in Virtual Environments: A Framework", in Timmermans, H.J.P. (ed.), DDSS 96: Proceedings of the 3rd Design and Decision Support Systems in Architecture and Urban Planning Conference, Vol. 1: Architecture Proceedings, Eindhoven University of Technology, Eindhoven, pp. 132-142, 1996.
[3] Dijkstra, J. and Timmermans, H.J.P., "Conjoint Analysis and Virtual Reality - a Review", in Proceedings of the 4th Design & Decision Support Systems in Architecture & Urban Planning Conference, 1998.
[4] Hair, J.F., Anderson, R.E., Tatham, R.L. and Black, W.C., "Conjoint Analysis", in Multivariate Data Analysis, Prentice Hall, Englewood Cliffs, NJ, pp. 556-599, 1995.
[5] Ifedayo A. Adelola, Abdur Rahman, Sara L. Cox, "Conjoint analysis in virtual reality based powered wheelchair rehabilitation of children with disabilities", Center on Disabilities Technology and Persons with Disabilities Conference 2003, available at http://www.csun.edu/cod/conf/2003/proceedings/263.htm.
[6] Jean-Claude Liquet and Christophe Benavent, "L'analyse conjointe et ses applications en marketing", Equipe de recherche en Marketing, IAE de Lille, available at http://christophe.benavent.free.fr/IMG/pdf/conjointe.pdf.
[7] QuickMBA: Knowledge to Power Your Business, Marketing, "Conjoint Analysis", available at http://www.quickmba.com/marketing/research/conjoint/.
[8] Scott M. Smith, "Conjoint Analysis Tutorial", Professor of Marketing, Marriott School of Management, Brigham Young University, available at http://marketing.byu.edu/htmlpages/tutorials/conjoint.htm.
[9] Bryan Orme, "Which Conjoint Method Should I Use?", Sawtooth Software Research Paper, 2003.