Cluster Analysis
Learning Objectives
After reading this chapter you should understand:
The basic concepts of cluster analysis.
How basic cluster algorithms work.
How to compute simple clustering results manually.
The different types of clustering procedures.
The SPSS clustering outputs.
Are there any market segments where Web-enabled mobile telephony is taking
off in different ways? To answer this question, Okazaki (2006) applies a two-
step cluster analysis to identify segments of Internet adopters in Japan. The
findings suggest that there are four clusters exhibiting distinct attitudes towards
Web-enabled mobile telephony adoption. Interestingly, freelance and highly
educated professionals had the most negative perception of mobile Internet
adoption, whereas clerical office workers had the most positive perception.
Furthermore, housewives and company executives also exhibited a positive
attitude toward mobile Internet usage. Marketing managers can now use these
results to better target specific customer segments via mobile Internet services.
Introduction
Whereas it is common to form market segments based on practical grounds, industry practice, and wisdom, cluster analysis allows segments to be formed that are based on data and are therefore less dependent on subjectivity.
The segmentation of customers is a standard application of cluster analysis, but it
can also be used in different, sometimes rather exotic, contexts such as evaluating
typical supermarket shopping paths (Larson et al. 2005) or deriving employer
branding strategies (Moroko and Uncles 2009).
[Fig. 9.1: Scatter plot of seven customers (A–G) on the two clustering variables price consciousness (x) and brand loyalty (y), both measured on 7-point scales]

[Fig. 9.2 (excerpt): Steps of a cluster analysis, including the step "Choose a clustering algorithm"]
business contexts. Thus, we have to ensure that the segments are large enough
to make the targeted marketing programs profitable. Consequently, we have to
cope with a certain degree of within-cluster heterogeneity, which makes targeted
marketing programs less effective.
In the final step, we need to interpret the solution by defining and labeling the
obtained clusters. This can be done by examining the clustering variables' mean
values or by identifying explanatory variables to profile the clusters. Ultimately,
managers should be able to identify customers in each segment on the basis of
easily measurable variables. This final step also requires us to assess the clustering
solution's stability and validity. Figure 9.2 illustrates the steps associated with a
cluster analysis; we will discuss these in more detail in the following sections.
segments and, consequently, to deficient marketing strategies. Thus, great care should
be taken when selecting the clustering variables.
There are several types of clustering variables and these can be classified into
general (independent of products, services or circumstances) and specific (related
to both the customer and the product, service and/or particular circumstance),
on the one hand, and observable (i.e., measured directly) and unobservable
(i.e., inferred) on the other. Table 9.2 provides several types and examples of
clustering variables.
The types of variables used for cluster analysis provide different segments and,
thereby, influence segment-targeting strategies. Over the last decades, attention has
shifted from more traditional general clustering variables towards product-specific
unobservable variables. The latter generally provide better guidance for decisions
on marketing instruments' effective specification. It is generally acknowledged that
segments identified by means of specific unobservable variables are usually more
homogenous and their consumers respond consistently to marketing actions (see
Wedel and Kamakura 2000). However, consumers in these segments are also
frequently hard to identify from variables that are easily measured, such as demo-
graphics. Conversely, segments determined by means of generally observable
variables usually stand out due to their identifiability but often lack a unique
response structure.1 Consequently, researchers often combine different variables
(e.g., multiple lifestyle characteristics combined with demographic variables),
benefiting from each one's strengths.
In some cases, the choice of clustering variables is apparent from the nature of
the task at hand. For example, a managerial problem regarding corporate communi-
cations will have a fairly well defined set of clustering variables, including con-
tenders such as awareness, attitudes, perceptions, and media habits. However, this
is not always the case and researchers have to choose from a set of candidate
variables.
Whichever clustering variables are chosen, it is important to select those that
provide a clear-cut differentiation between the segments regarding a specific
managerial objective.2 More precisely, criterion validity is of special interest; that
is, the extent to which the independent clustering variables are associated with
1 See Wedel and Kamakura (2000).
2 Tonks (2009) provides a discussion of segment design and the choice of clustering variables in consumer markets.
one or more dependent variables not included in the analysis. Given this relation-
ship, there should be significant differences between the dependent variable(s)
across the clusters. These associations may or may not be causal, but it is essential
that the clustering variables distinguish the dependent variable(s) significantly.
Criterion variables usually relate to some aspect of behavior, such as purchase
intention or usage frequency.
Generally, you should avoid using an abundance of clustering variables, as this
increases the odds that the variables are no longer dissimilar. If there is a high
degree of collinearity between the variables, they are not sufficiently unique to
identify distinct market segments. If highly correlated variables are used for cluster
analysis, specific aspects covered by these variables will be overrepresented in the
clustering solution. In this regard, absolute correlations above 0.90 are always
problematic. For example, if we were to add another variable called brand pre-
ference to our analysis, it would virtually cover the same aspect as brand loyalty.
Thus, the concept of being attached to a brand would be overrepresented in the
analysis because the clustering procedure does not differentiate between the clus-
tering variables in a conceptual sense. Researchers frequently handle this issue by applying cluster analysis to the observations' factor scores derived from a previously carried out factor analysis. However, according to Dolnicar and Grün (2009), this factor-cluster segmentation approach can lead to several problems:
1. The data are pre-processed and the clusters are identified on the basis of trans-
formed values, not on the original information, which leads to different results.
2. In factor analysis, the factor solution explains only a certain amount of the variance; thus, information is discarded before segments have been identified or constructed.
3. Eliminating variables with low loadings on all the extracted factors means that,
potentially, the most important pieces of information for the identification of
niche segments are discarded, making it impossible to ever identify such groups.
4. The interpretations of clusters based on the original variables become question-
able given that the segments have been constructed using factor scores.
Several studies have shown that the factor-cluster segmentation significantly
reduces the success of segment recovery.3 Consequently, you should rather reduce the number of items in the questionnaire's pre-testing phase, retaining a reasonable number of relevant, non-redundant questions that you believe differentiate the segments well. However, if you have doubts about the data structure, factor-cluster segmentation may still be a better option than discarding items that may conceptually be necessary.
Furthermore, we should keep the sample size in mind. First and foremost,
this relates to issues of managerial relevance as segment sizes need to be substan-
tial to ensure that targeted marketing programs are profitable. From a statistical
perspective, every additional variable requires an over-proportional increase in observations to ensure valid results.
3 See the studies by Arabie and Hubert (1994), Sheppard (1996), or Dolnicar and Grün (2009).
4 See Wedel and Kamakura (2000), Dolnicar (2003), and Kaufman and Rousseeuw (2005) for a review of clustering techniques.
Hierarchical Methods
[Figure: Agglomerative versus divisive clustering of objects A–E. Agglomerative clustering merges objects step by step, from five single-object clusters up to one cluster containing A, B, C, D, and E; divisive clustering runs through the same steps in reverse order.]
The Euclidean distance is the square root of the sum of the squared differences in
the variables' values. Using the data from Table 9.1, we obtain the following:

$$d_{\text{Euclidean}}(B, C) = \sqrt{(6-5)^2 + (7-6)^2} = \sqrt{2} = 1.414$$
This distance corresponds to the length of the line that connects objects B and C.
In this case, we only used two variables but we can easily add more under the root
sign in the formula. However, each additional variable will add a dimension to our
research problem (e.g., with six clustering variables, we have to deal with six
dimensions), making it impossible to represent the solution graphically. Similarly,
we can compute the distance between customers B and G, which yields the following:

$$d_{\text{Euclidean}}(B, G) = \sqrt{(6-1)^2 + (7-2)^2} = \sqrt{50} = 7.071$$
Likewise, we can compute the distance between all other pairs of objects. All
these distances are usually expressed by means of a distance matrix. In this distance
matrix, the non-diagonal elements express the distances between pairs of objects
5 Note that researchers also often use the squared Euclidean distance.
and zeros on the diagonal (the distance from each object to itself is, of course, 0). In our example, the distance matrix is a 7 × 7 table whose rows and columns represent the objects (i.e., customers) under consideration (see Table 9.3). As the distance between objects B and C (in this case 1.414 units) is the same as between C and B, the distance matrix is symmetrical. Furthermore, since the distance between an object and itself is zero, one need only look at either the lower or upper non-diagonal elements.
Table 9.3 Euclidean distance matrix
Objects A B C D E F G
A 0
B 3 0
C 2.236 1.414 0
D 2 3.606 2.236 0
E 3.606 2 1.414 3 0
F 4.123 4.472 3.162 2.236 2.828 0
G 5.385 7.071 5.657 3.606 5.831 3.162 0
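The distance matrix in Table 9.3 can also be reproduced with a few lines of code. The following sketch uses Python with SciPy (not part of the chapter, which works with SPSS); the (x, y) coordinates are an assumption that is consistent with the distances reported in Table 9.3, since Table 9.1 is not reproduced here.

import numpy as np
from scipy.spatial.distance import pdist, squareform

labels = ["A", "B", "C", "D", "E", "F", "G"]
data = np.array([[3, 7],   # A: (price consciousness, brand loyalty) - assumed values
                 [6, 7],   # B
                 [5, 6],   # C
                 [3, 5],   # D
                 [6, 5],   # E
                 [4, 3],   # F
                 [1, 2]])  # G

# Pairwise Euclidean distances, arranged as a symmetrical matrix with zeros on the diagonal
dist_matrix = squareform(pdist(data, metric="euclidean"))
print(np.round(dist_matrix, 3))   # e.g., d(B, C) = 1.414 and d(B, G) = 7.071, as computed above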
There are also alternative distance measures: The city-block distance uses the sum of the variables' absolute differences. This is often called the Manhattan metric as it is akin to the walking distance between two points in a city like New York's Manhattan district, where the distance equals the number of blocks in the directions North–South and East–West. Using the city-block distance to compute the distance between customers B and C (or C and B) yields the following:

$$d_{\text{city-block}}(B, C) = |6-5| + |7-6| = 2$$
Lastly, when working with metric (or ordinal) data, researchers frequently use the Chebychev distance, which is the maximum of the absolute difference in the clustering variables' values. For customers B and C, this result is:

$$d_{\text{Chebychev}}(B, C) = \max(|6-5|, |7-6|) = 1$$

Figure 9.4 illustrates the interrelation between these three distance measures regarding two objects, C and G, from our example.
[Fig. 9.4: Euclidean, city-block, and Chebychev distances between objects C and G, plotted on the axes price consciousness (x) and brand loyalty (y)]
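As a quick check of the three measures, the sketch below computes the Euclidean, city-block, and Chebychev distances between customers B and C with SciPy, again assuming the hypothetical coordinates B = (6, 7) and C = (5, 6) used above.

from scipy.spatial.distance import euclidean, cityblock, chebyshev

B, C = (6, 7), (5, 6)
print(euclidean(B, C))   # 1.414: square root of the sum of squared differences
print(cityblock(B, C))   # 2: sum of absolute differences (Manhattan metric)
print(chebyshev(B, C))   # 1: maximum absolute difference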
There are other distance measures such as the Angular, Canberra or Mahalanobis
distance. In many situations, the latter is desirable as it compensates for collinearity
between the clustering variables. However, it is (unfortunately) not menu-accessible
in SPSS.
In many analysis tasks, the variables under consideration are measured on
different scales or levels. This would be the case if we extended our set of clustering
variables by adding another ordinal variable representing the customer's income measured by means of, for example, 15 categories. Since the absolute variation of the income variable would be much greater than the variation of the remaining two variables (remember that x and y are measured on 7-point scales), this would
clearly distort our analysis results. We can resolve this problem by standardizing
the data prior to the analysis.
Different standardization methods are available, such as the simple z standardi-
zation, which rescales each variable to have a mean of 0 and a standard deviation of
1 (see Chap. 5). In most situations, however, standardization by range (e.g., to a range of 0 to 1 or −1 to 1) performs better.6 We recommend standardizing the data in general, even though this procedure can reduce or inflate the variables' influence on the clustering solution.
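As an illustration of the two standardization options, the following sketch rescales a small, hypothetical data matrix with scikit-learn (not used in the chapter); the third column mimics an income variable measured on a much larger scale than the 7-point items.

import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler

X = np.array([[3.0, 7.0, 2500.0],
              [6.0, 7.0, 4000.0],
              [5.0, 6.0, 1500.0]])

X_z = StandardScaler().fit_transform(X)      # z-standardization: mean 0, standard deviation 1 per variable
X_range = MinMaxScaler().fit_transform(X)    # standardization by range: each variable rescaled to 0-1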
6 See Milligan and Cooper (1988).
Based on the allocation scheme in Table 9.5, we can compute different matching
coefficients, such as the simple matching coefficient (SM):
$$SM = \frac{a + d}{a + b + c + d}$$
This coefficient is useful when both positive and negative values carry an equal
degree of information. For example, gender is a symmetrical attribute because the
number of males and females provides an equal degree of information.
Let's take a look at an example by assuming that we have a dataset with three binary variables: gender (male = 1, female = 2), customer (customer = 1, non-customer = 2), and disposable income (low = 1, high = 2). The first object is a male non-customer with a high disposable income, whereas the second object is a female non-customer with a high disposable income. According to the scheme in Table 9.4, a = b = 0, c = 1 and d = 2, with the simple matching coefficient taking a value of 0.667.
Two other types of matching coefficients, which do not equate the joint absence
of a characteristic with similarity and may, therefore, be of more value in segmen-
tation studies, are the Jaccard (JC) and the Russel and Rao (RR) coefficients. They
are defined as follows:
$$JC = \frac{a}{a + b + c}$$

$$RR = \frac{a}{a + b + c + d}$$
These matching coefficients are, just like the distance measures, used to determine a cluster solution. There are many other matching coefficients, such as Yule's Q, Kulczynski, or Ochiai, but since most applications of cluster analysis rely on metric or ordinal data, we will not discuss these in greater detail.7
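To make the a/b/c/d counting scheme concrete, here is a small, hypothetical Python function (not from the chapter) that computes the three coefficients from two binary profiles coded 1 = attribute present, 0 = absent; it reproduces the SM value of 0.667 from the example above, where a = b = 0, c = 1, and d = 2.

def matching_coefficients(obj1, obj2):
    a = sum(1 for x, y in zip(obj1, obj2) if x == 1 and y == 1)  # both attributes present
    b = sum(1 for x, y in zip(obj1, obj2) if x == 1 and y == 0)  # present only in object 1
    c = sum(1 for x, y in zip(obj1, obj2) if x == 0 and y == 1)  # present only in object 2
    d = sum(1 for x, y in zip(obj1, obj2) if x == 0 and y == 0)  # both attributes absent
    n = a + b + c + d
    return {"SM": (a + d) / n,
            "JC": a / (a + b + c) if (a + b + c) > 0 else 0.0,
            "RR": a / n}

print(matching_coefficients([0, 0, 0], [1, 0, 0]))  # SM = 0.667, JC = 0.0, RR = 0.0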
For nominal variables with more than two categories, you should always convert
the categorical variable into a set of binary variables in order to use matching
coefficients. When you have ordinal data, you should always use distance measures
such as Euclidean distance. Even though using matching coefficients would be feasible and, from a strictly statistical standpoint, even more appropriate, you would disregard the information contained in the sequence of the categories. In the end, a
respondent who indicates that he or she is very loyal to a brand is going to be closer
to someone who is somewhat loyal than a respondent who is not loyal at all.
Furthermore, distance measures best represent the concept of proximity, which is
fundamental to cluster analysis.
Most datasets contain variables that are measured on multiple scales. For
example, a market research questionnaire may ask about the respondents income,
product ratings, and last brand purchased. Thus, we have to consider variables
measured on a ratio, ordinal, and nominal scale. How can we simultaneously
incorporate these variables into one analysis? Unfortunately, this problem cannot
be easily resolved and, in fact, many market researchers simply ignore the scale
level. Instead, they use one of the distance measures discussed in the context
of metric (and ordinal) data. Even though this approach may slightly change
the results when compared to those using matching coefficients, it should not be
rejected. Cluster analysis is mostly an exploratory technique whose results provide
a rough guidance for managerial decisions. Despite this, there are several proce-
dures that allow a simultaneous integration of these variables into one analysis.
7 See Wedel and Kamakura (2000) for more information on alternative matching coefficients.
First, we could compute distinct distance matrices for each group of variables; that
is, one distance matrix based on, for example, ordinally scaled variables and another
based on nominal variables. Afterwards, we can simply compute the weighted
arithmetic mean of the distances and use this average distance matrix as the input
for the cluster analysis. However, the weights have to be determined a priori
and improper weights may result in a biased treatment of different variable types.
Furthermore, the computation and handling of distance matrices are not trivial.
Using the SPSS syntax, one has to manually add the MATRIX subcommand, which
exports the initial distance matrix into a new data file. Go to the Web Appendix (→ Chap. 5) to learn how to modify the SPSS syntax accordingly.
Second, we could dichotomize all variables and apply the matching coefficients
discussed above. In the case of metric variables, this would involve specifying
categories (e.g., low, medium, and high income) and converting these into sets of
binary variables. In most cases, however, the specification of categories would be
rather arbitrary and, as mentioned earlier, this procedure could lead to a severe loss
of information.
In the light of these issues, you should avoid combining metric and nominal
variables in a single cluster analysis, but if this is not feasible, the two-step clustering
procedure provides a valuable alternative, which we will discuss later. Lastly, the
choice of the (dis)similarity measure is not extremely critical to recovering the
underlying cluster structure. In this regard, the choice of the clustering algorithm
is far more important. We therefore deal with this aspect in the following section.
After having chosen the distance or similarity measure, we need to decide which
clustering algorithm to apply. There are several agglomerative procedures and they
can be distinguished by the way they define the distance from a newly formed
cluster to a certain object, or to other clusters in the solution. The most popular
agglomerative clustering procedures include the following:
• Single linkage (nearest neighbor): The distance between two clusters corresponds to the shortest distance between any two members in the two clusters.
• Complete linkage (furthest neighbor): The oppositional approach to single linkage assumes that the distance between two clusters is based on the longest distance between any two members in the two clusters.
• Average linkage: The distance between two clusters is defined as the average distance between all pairs of the two clusters' members.
• Centroid: In this approach, the geometric center (centroid) of each cluster is computed first. The distance between the two clusters equals the distance between the two centroids.
Figures 9.5–9.8 illustrate these linkage procedures for two randomly framed clusters.
Each of these linkage algorithms can yield totally different results when used
on the same dataset, as each has its specific properties. As the single linkage
algorithm is based on minimum distances, it tends to form one large cluster with
the other clusters containing only one or few objects each. We can make use of
this chaining effect to detect outliers, as these will be merged with the remaining objects, usually at very large distances, in the last steps of the analysis.
Generally, single linkage is considered the most versatile algorithm. Conversely,
the complete linkage method is strongly affected by outliers, as it is based on
maximum distances. Clusters produced by this method are likely to be rather
compact and tightly clustered. The average linkage and centroid algorithms tend
to produce clusters with rather low within-cluster variance and similar sizes.
However, both procedures are affected by outliers, though not as much as
complete linkage.
Another commonly used approach in hierarchical clustering is Ward's method. This approach does not combine the two most similar objects successively. Instead, those objects whose merger increases the overall within-cluster variance to the smallest possible degree are combined. If you expect somewhat equally sized clusters and the dataset does not include outliers, you should always use Ward's method.
To better understand how a clustering algorithm works, let's manually examine some of the single linkage procedure's calculation steps. We start off by looking at the initial (Euclidean) distance matrix in Table 9.3. In the very first step, the two objects exhibiting the smallest distance in the matrix are merged. Note that we always merge those objects with the smallest distance, regardless of the clustering procedure (e.g., single or complete linkage). As we can see, this happens to two pairs of objects, namely B and C (d(B, C) = 1.414), as well as C and E (d(C, E) = 1.414). In the next step, we will see that it does not make any difference whether we first merge the one or the other, so let's proceed by forming a new cluster, using objects B and C.
Having made this decision, we then form a new distance matrix by considering
the single linkage decision rule as discussed above. According to this rule, the
distance from, for example, object A to the newly formed cluster is the minimum of
d(A, B) and d(A, C). As d(A, C) is smaller than d(A, B), the distance from A to the
newly formed cluster is equal to d(A, C); that is, 2.236. We also compute the
distances from cluster [B,C] (clusters are indicated by means of squared brackets)
to all other objects (i.e., D, E, F, G) and simply copy the remaining distances, such as d(E, F), that the previous clustering has not affected. This yields the distance
matrix shown in Table 9.6.
Continuing the clustering procedure, we simply repeat the last step by merging
the objects in the new distance matrix that exhibit the smallest distance (in this case,
the newly formed cluster [B, C] and object E) and calculate the distance from this
cluster to all other objects. The result of this step is described in Table 9.7.
Try to calculate the remaining steps yourself and compare your solution with the distance matrices in the following Tables 9.8–9.10.
Table 9.6 Distance matrix after first clustering step (single linkage)
Objects A B, C D E F G
A 0
B, C 2.236 0
D 2 2.236 0
E 3.606 1.414 3 0
F 4.123 3.162 2.236 2.828 0
G 5.385 5.657 3.606 5.831 3.162 0
Table 9.7 Distance matrix after second clustering step (single linkage)
Objects A B, C, E D F G
A 0
B, C, E 2.236 0
D 2 2.236 0
F 4.123 2.828 2.236 0
G 5.385 5.657 3.606 3.162 0
Table 9.8 Distance matrix after third clustering step (single linkage)
Objects A, D B, C, E F G
A, D 0
B, C, E 2.236 0
F 2.236 2.828 0
G 3.606 5.657 3.162 0
Table 9.9 Distance matrix after fourth clustering step (single linkage)
Objects A, B, C, D, E F G
A, B, C, D, E 0
F 2.236 0
G 3.606 3.162 0
Table 9.10 Distance matrix after fifth clustering step (single linkage)
Objects A, B, C, D, E, F G
A, B, C, D, E, F 0
G 3.162 0
By following the single linkage procedure, the last steps involve the merger of cluster [A,B,C,D,E,F] and object G at a distance of 3.162. Do you get the same results? As you can see, conducting a basic cluster analysis manually is not that hard at all, at least when there are only a few objects in the dataset.
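The same single linkage solution can also be obtained programmatically, for example with SciPy (the chapter itself uses SPSS). The coordinates below are the hypothetical values used earlier; the reported merge distances should match the manual steps (1.414, 1.414, 2, 2.236, 2.236, and 3.162).

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

data = np.array([[3, 7], [6, 7], [5, 6], [3, 5], [6, 5], [4, 3], [1, 2]])  # A-G, assumed coordinates
Z = linkage(pdist(data), method="single")   # pdist defaults to Euclidean distances
print(np.round(Z, 3))                       # each row: the two clusters merged, the merge distance, cluster size
# dendrogram(Z, labels=list("ABCDEFG"))     # would draw a Fig. 9.9-style dendrogram with matplotlib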
A common way to visualize the cluster analysis's progress is by drawing a dendrogram, which displays the distance level at which there was a combination of objects and clusters (Fig. 9.9).
We read the dendrogram from left to right to see at which distance objects
have been combined. For example, according to our calculations above, objects
B, C, and E are combined at a distance level of 1.414.
[Fig. 9.9: Dendrogram of the single linkage clustering, with the merging distance (0–3) on the horizontal axis]
Research has suggested several other procedures for determining the number of
clusters in a dataset. Most notably, the variance ratio criterion (VRC) by Calinski
and Harabasz (1974) has proven to work well in many situations.8 For a solution
with n objects and k segments, the criterion is given by:

$$VRC_k = \frac{SS_B / (k-1)}{SS_W / (n-k)}$$

where $SS_B$ is the sum of the squares between the segments and $SS_W$ is the sum of the squares within the segments. The criterion should seem familiar, as this is nothing but the F-value of a one-way ANOVA, with k representing the factor levels.
Consequently, the VRC can easily be computed using SPSS, even though it is not
readily available in the clustering procedures outputs.
To finally determine the appropriate number of segments, we compute $\omega_k$ for each segment solution as follows:

$$\omega_k = (VRC_{k+1} - VRC_k) - (VRC_k - VRC_{k-1})$$

In the next step, we choose the number of segments k that minimizes the value of $\omega_k$. Owing to the term $VRC_{k-1}$, the minimum number of clusters that can be selected is three, which is a clear disadvantage of the criterion, thus limiting its application in practice.
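A sketch of how the VRC and ω_k could be computed outside SPSS is shown below, using scikit-learn, whose calinski_harabasz_score is exactly SS_B/(k−1) divided by SS_W/(n−k); the data array X and all parameter choices are placeholders, not part of the chapter.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))               # placeholder data: 100 objects, 3 clustering variables

# VRC for k = 2, ..., 6 based on k-means partitions
vrc = {k: calinski_harabasz_score(X, KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(X))
       for k in range(2, 7)}
# omega_k = (VRC_{k+1} - VRC_k) - (VRC_k - VRC_{k-1}), computable here for k = 3, ..., 5
omega = {k: (vrc[k + 1] - vrc[k]) - (vrc[k] - vrc[k - 1]) for k in range(3, 6)}
best_k = min(omega, key=omega.get)          # number of segments that minimizes omega_k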
Overall, the data can often only provide rough guidance regarding the number of
clusters you should select; consequently, you should rather revert to practical
considerations. Occasionally, you might have a priori knowledge, or a theory on
which you can base your choice. However, first and foremost, you should ensure
that your results are interpretable and meaningful. Not only must the number of
clusters be small enough to ensure manageability, but each segment should also be
large enough to warrant strategic attention.
8 Milligan and Cooper (1985) compare various criteria.
9 Note that the k-means algorithm is one of the simplest non-hierarchical clustering methods. Several extensions, such as k-medoids (Kaufman and Rousseeuw 2005), have been proposed to handle problematic aspects of the procedure. More advanced methods include finite mixture models (McLachlan and Peel 2000), neural networks (Bishop 2006), and self-organizing maps (Kohonen 1982). Andrews and Currim (2003) discuss the validity of some of these approaches.
[Figs. 9.10–9.13: Steps of the k-means procedure, showing objects A–G and the cluster centers CC1 and CC2 on the axes price consciousness (x) and brand loyalty (y)]

10 Note that this holds for the algorithm's original design. SPSS does not choose centers randomly.
represent.11 After this (step 2), Euclidean distances are computed from the cluster
centers to every single object. Each object is then assigned to the cluster center with
the shortest distance to it. In our example (Fig. 9.11), objects A, B, and C are
assigned to the first cluster, whereas objects D, E, F, and G are assigned to the
second. We now have our initial partitioning of the objects into two clusters.
Based on this initial partition, each cluster's geometric center (i.e., its centroid) is computed (third step). This is done by computing the mean values of the objects contained in the cluster (e.g., A, B, C in the first cluster) regarding each of the variables (price consciousness and brand loyalty). As we can see in Fig. 9.12, both clusters' centers now shift into new positions (CC1 for the first and CC2 for the second cluster).
In the fourth step, the distances from each object to the newly located cluster centers are computed and objects are again assigned to a certain cluster on the basis of their minimum distance to the cluster centers (CC1 and CC2). Since the cluster centers' positions changed with respect to the initial situation in the first step, this could lead to a different cluster solution. This is also true of our example, as object E is now, unlike in the initial partition, closer to the first cluster center (CC1) than to the second (CC2). Consequently, this object is now assigned to the first cluster (Fig. 9.13). The k-means procedure now repeats the third step and re-computes the cluster centers of the newly formed clusters, and so on. In other
11 Conversely, SPSS always sets one observation as the cluster center instead of picking some random point in the dataset.
words, steps 3 and 4 are repeated until a predetermined number of iterations are
reached, or convergence is achieved (i.e., there is no change in the cluster affiliations).
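For comparison with the manual walk-through, the sketch below runs k-means with two clusters on the seven hypothetical customers in Python with scikit-learn (the chapter uses SPSS); KMeans repeats the assignment and re-centering steps described above until the affiliations no longer change.

import numpy as np
from sklearn.cluster import KMeans

data = np.array([[3, 7], [6, 7], [5, 6], [3, 5], [6, 5], [4, 3], [1, 2]])  # A-G, assumed coordinates
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(data)
print(km.labels_)            # final cluster affiliation of each object
print(km.cluster_centers_)   # final cluster centroids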
Generally, k-means is superior to hierarchical methods as it is less affected by
outliers and the presence of irrelevant clustering variables. Furthermore, k-means
can be applied to very large datasets, as the procedure is less computationally
demanding than hierarchical methods. In fact, we suggest definitely using k-means
for sample sizes above 500, especially if many clustering variables are used. From
a strictly statistical viewpoint, k-means should only be used on interval or ratio-
scaled data as the procedure relies on Euclidean distances. However, the procedure is
routinely used on ordinal data as well, even though there might be some distortions.
One problem associated with the application of k-means relates to the fact that
the researcher has to pre-specify the number of clusters to retain from the data. This
makes k-means less attractive to some and still hinders its routine application in
practice. However, the VRC discussed above can likewise be used for k-means clustering (an application of this index can be found in the Web Appendix → Chap. 9). Another workaround that many market researchers routinely use is to apply a hierarchical procedure to determine the number of clusters and k-means afterwards.12 This also enables the user to find starting values for the initial cluster centers to handle a second problem, which relates to the procedure's sensitivity to the initial classification (we will follow this approach in the example application).
Two-Step Clustering
12 See Punji and Stewart (1983) for additional information on this sequential approach.
Before interpreting the cluster solution, we have to assess the solution's stability
and validity. Stability is evaluated by using different clustering procedures on the
same data and testing whether these yield the same results. In hierarchical cluster-
ing, you can likewise use different distance measures. However, please note that it
is common for results to change even when your solution is adequate. How much
variation you should allow before questioning the stability of your solution is a
matter of taste. Another common approach is to split the dataset into two halves and
to thereafter analyze the two subsets separately using the same parameter settings.
You then compare the two solutions' cluster centroids. If these do not differ
significantly, you can presume that the overall solution has a high degree of
stability. When using hierarchical clustering, it is also worthwhile changing the
order of the objects in your dataset and re-running the analysis to check the results' stability. The results should not, of course, depend on the order of the dataset. If they do, you should try to ascertain whether obvious outliers influence the results of the change in order.
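One simple way to carry out such a stability check programmatically is to cross-tabulate the partitions obtained from two different procedures, as sketched below with SciPy and pandas (an illustration, not the chapter's SPSS workflow); a stable solution shows one clearly dominant cell per row.

import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[3, 7], [6, 7], [5, 6], [3, 5], [6, 5], [4, 3], [1, 2]])   # placeholder data
labels_single = fcluster(linkage(X, method="single"), t=3, criterion="maxclust")
labels_ward = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
print(pd.crosstab(labels_single, labels_ward))   # compare the two three-cluster partitions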
Assessing the solution's reliability is closely related to the above, as reliability refers to the degree to which the solution is stable over time. If segments quickly change their composition, or their members their behavior, targeting strategies are likely not to succeed. Therefore, a certain degree of stability is necessary to ensure
that marketing strategies can be implemented and produce adequate results. This
can be evaluated by critically revisiting and replicating the clustering results at
a later point in time.
To validate the clustering solution, we need to assess its criterion validity.
In research, we could focus on criterion variables that have a theoretically based
relationship with the clustering variables, but were not included in the analysis.
In market research, criterion variables usually relate to managerial outcomes
such as the sales per person, or satisfaction. If these criterion variables differ signifi-
cantly, we can conclude that the clusters are distinct groups with criterion validity.
To judge validity, you should also assess face validity and, if possible, expert
validity. While we primarily consider criterion validity when choosing clustering
variables, as well as in this final step of the analysis procedure, the assessment of face
validity is a process rather than a single event. The key to successful segmentation is
to critically revisit the results of different cluster analysis set-ups (e.g., by using
different algorithms on the same data) in terms of managerial relevance. This under-
lines the exploratory character of the method. The following criteria will help you evaluate and choose a clustering solution (Dibb 1999; Tonks 2009; Kotler and Keller 2009).
• Substantial: The segments are large and profitable enough to serve.
• Accessible: The segments can be effectively reached and served, which requires them to be characterized by means of observable variables.
• Differentiable: The segments can be distinguished conceptually and respond differently to different marketing-mix elements and programs.
• Actionable: Effective programs can be formulated to attract and serve the segments.
• Stable: Only segments that are stable over time can provide the necessary grounds for a successful marketing strategy.
• Parsimonious: To be managerially meaningful, only a small set of substantial clusters should be identified.
• Familiar: To ensure management acceptance, the segments' composition should be comprehensible.
• Relevant: Segments should be relevant in respect of the company's competencies and objectives.
• Compactness: Segments exhibit a high degree of within-segment homogeneity and between-segment heterogeneity.
• Compatibility: Segmentation results meet other managerial functions' requirements.
The final step of any cluster analysis is the interpretation of the clusters. Interpreting clusters always involves examining the cluster centroids, which are the clustering variables' average values of all objects in a certain cluster. This step is of the utmost importance, as the analysis sheds light on whether the segments are conceptually distinguishable. Only if certain clusters exhibit significantly different means in these variables are they distinguishable from a data perspective, at least. This can easily be ascertained by comparing the clusters with independent samples t-tests or ANOVA (see Chap. 6).
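A minimal sketch of this profiling step in Python (hypothetical column names; the chapter does this via SPSS menus): it prints the cluster centroids and runs a one-way ANOVA per clustering variable.

import pandas as pd
from scipy.stats import f_oneway

def profile_clusters(df, variables, cluster_col="cluster"):
    # Cluster centroids: mean of each clustering variable per cluster
    print(df.groupby(cluster_col)[variables].mean())
    # One-way ANOVA per variable to test whether the cluster means differ significantly
    for var in variables:
        groups = [g[var].values for _, g in df.groupby(cluster_col)]
        f_stat, p_value = f_oneway(*groups)
        print(f"{var}: F = {f_stat:.2f}, p = {p_value:.3f}")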
By using this information, we can also try to come up with a meaningful name or
label for each cluster; that is, one which adequately reflects the objects in the
cluster. This is usually a very challenging task. Furthermore, clustering variables
are frequently unobservable, which poses another problem. How can we decide to
which segment a new object should be assigned if its unobservable characteristics,
such as personality traits, personal values or lifestyles, are unknown? We could
obviously try to survey these attributes and make a decision based on the clustering
variables. However, this will not be feasible in most situations and researchers
therefore try to identify observable variables that best mirror the partition of the
objects. If it is possible to identify, for example, demographic variables leading to a
very similar partition as that obtained through the segmentation, then it is easy to
assign a new object to a certain segment on the basis of these demographic
While companies often develop their own market segments, they frequently use
standardized segments, which are based on established buying trends, habits, and customers' needs and have been specifically designed for use by many products in
mature markets. One of the most popular approaches is the PRIZM lifestyle
segmentation system developed by Claritas Inc., a leading market research com-
pany. PRIZM defines every US household in terms of 66 demographically and behaviorally distinct segments to help marketers discern those consumers' likes, dislikes, lifestyles, and purchase behaviors.
Visit the Claritas website and flip through the various segment profiles. By entering a 5-digit US ZIP code, you can also find a specific neighborhood's top five lifestyle groups. One example of a segment is Gray Power, a segment of older, midscale singles and couples who live in quiet comfort: middle-class, home-owning suburbanites who are aging in place rather than moving to retirement communities.
http://www.claritas.com/MyBestSegments/Default.jsp
Example
http://www.teslamotors.com
The pretest sample of 15 randomly selected cars is shown in Fig. 9.14. In practice, clustering is done on much larger samples but we use a small sample size to illustrate the clustering process. Keep in mind that in this example, the ratio between the objects and clustering variables is much too small. The dataset used is cars.sav (Web Appendix → Chap. 9).
In the next step, we will run several different clustering procedures on the basis
of these nine variables. We first apply a hierarchical cluster analysis based on
Euclidean distances, using the single linkage method. This will help us determine
a suitable number of segments, which we will use as input for a subsequent k-means
clustering. Finally, we will run a two-step cluster analysis using SPSS.
Before we start with the clustering process, we have to examine the variables for
substantial collinearity. Just by looking at the variable set, we suspect that there are
some highly correlated variables in our dataset. For example, we expect rather high
correlations between speed and acceleration. To determine this, we run a bivariate correlation analysis by clicking Analyze → Correlate → Bivariate, which will open a dialog box similar to that in Fig. 9.15. Enter all variables into the Variables box and select the box Pearson (under Correlation Coefficients) because these are continuous variables.
The correlation matrix in Table 9.12 supports our expectations: there are several variables that have high correlations. Displacement exhibits high (absolute) correlation coefficients with horsepower, speed, and acceleration, with values well above 0.90, indicating possible collinearity issues. Similarly, horsepower is highly correlated with speed and acceleration. Likewise, length shows a high degree of correlation with width, weight, and trunk.
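The same collinearity check can be run outside SPSS; the sketch below uses pandas and assumes the cars data have been exported to a CSV file (the file name and column names are hypothetical).

import pandas as pd

cars = pd.read_csv("cars.csv")
variables = ["displacement", "moment", "horsepower", "length", "width",
             "weight", "trunk", "speed", "acceleration"]
corr = cars[variables].corr(method="pearson")
print(corr.abs() > 0.90)   # flags variable pairs whose absolute correlation is problematic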
A potential solution to this problem would be to run a factor analysis and
perform a cluster analysis on the resulting factor scores. Since the factors obtained
are, by definition, independent, this would allow for an effective handling of the
collinearity issue. However, as this approach is associated with several problems
(see discussion above) and as there are only a few variables in our dataset, we should instead reduce the set of variables, for example, by omitting displacement, horsepower, and length from the subsequent analyses. The remaining variables still provide a
sound basis for carrying out cluster analysis.
To run the hierarchical clustering procedure, click on Analyze → Classify → Hierarchical Cluster, which opens a dialog box similar to Fig. 9.16.
Move the variables moment, width, weight, trunk, speed, and acceleration into
the Variable(s) box and specify name as the labeling variable (box Label Cases
by). The Statistics option gives us the opportunity to request the distance matrix
(labeled proximity matrix in this case) and the agglomeration schedule, which
provides information on the objects being combined at each stage of the clustering
process. Furthermore, we can specify the number or range of clusters to retain from
the data. As we do not yet know how many clusters to retain, just check the box
Agglomeration schedule and continue.
Under Plots, we choose to display a dendrogram, which graphically displays the
distances at which objects and clusters are joined. Also ensure you select the icicle
diagram (for all clusters), which is yet another graph for displaying clustering
solutions.
The option Method allows us to specify the cluster method (e.g., single linkage or Ward's method), the distance measure (e.g., Chebychev distance or the Jaccard coefficient), and the type of standardization of values. In this example, we use the single linkage method (Nearest neighbor) based on Euclidean distances. Since the variables are measured on different levels (e.g., speed versus weight), make sure to standardize the variables, using, for example, the Range −1 to 1 (by variable) option in the Transform Values drop-down list.
Table 9.12 Correlation matrix
Correlations
displacement moment horsepower length width weight trunk speed acceleration
displacement Pearson Correlation 1 .875** .983** .657** .764** .768** .470 .967** .969**
Sig. (2-tailed) .000 .000 .008 .001 .001 .077 .000 .000
N 15 15 15 15 15 15 15 15 15
moment Pearson Correlation .875** 1 .847** .767** .766** .862** .691** .859** .861**
Sig. (2-tailed) .000 .000 .001 .001 .000 .004 .000 .000
N 15 15 15 15 15 15 15 15 15
horsepower Pearson Correlation .983** .847** 1 .608* .732** .714** .408 .968** .961**
Sig. (2-tailed) .000 .000 .016 .002 .003 .131 .000 .000
N 15 15 15 15 15 15 15 15 15
length Pearson Correlation .657** .767** .608* 1 .912** .921** .934** .741** .714**
Sig. (2-tailed) .008 .001 .016 .000 .000 .000 .002 .003
N 15 15 15 15 15 15 15 15 15
width Pearson Correlation .764** .766** .732** .912** 1 .884** .783** .819** .818**
Sig. (2-tailed) .001 .001 .002 .000 .000 .001 .000 .000
N 15 15 15 15 15 15 15 15 15
weight Pearson Correlation .768** .862** .714** .921** .884** 1 .785** .778** .763**
Sig. (2-tailed) .001 .000 .003 .000 .000 .001 .001 .001
N 15 15 15 15 15 15 15 15 15
trunk Pearson Correlation .470 .691** .408 .934** .783** .785** 1 .579* .552*
Sig. (2-tailed) .077 .004 .131 .000 .001 .001 .024 .033
N 15 15 15 15 15 15 15 15 15
speed Pearson Correlation .967** .859** .968** .741** .819** .778** .579* 1 .971**
Sig. (2-tailed) .000 .000 .000 .002 .000 .001 .024 .000
N 15 15 15 15 15 15 15 15 15
acceleration Pearson Correlation .969** .861** .961** .714** .818** .763** .552* .971** 1
Sig. (2-tailed) .000 .000 .000 .003 .000 .001 .033 .000
N 15 15 15 15 15 15 15 15 15
**. Correlation is significant at the 0.01 level (2-tailed).
*. Correlation is significant at the 0.05 level (2-tailed).
Lastly, the Save option enables us to save cluster memberships for a single
solution or a range of solutions. Saved variables can then be used in subsequent
analyses to explore differences between groups. As a start, we will skip this option,
so continue and click on OK in the main menu.
Table 9.13 Agglomeration schedule (stages 6–14 shown)
Stage   Cluster combined        Coefficients   Stage cluster first appears   Next stage
        Cluster 1   Cluster 2                  Cluster 1   Cluster 2
6       13          14          .267           0           4                 11
7       11          12          .321           0           0                 9
8       2           3           .353           0           5                 10
9       10          11          .357           0           7                 11
10      1           2           .389           0           8                 14
11      10          13          .484           9           6                 13
12      8           9           .575           0           0                 13
13      8           10          .618           12          11                14
14      1           8           .910           10          13                0
First, we take a closer look at the agglomeration schedule (Table 9.13), which
displays the objects or clusters combined at each stage (second and third column)
and the distances at which this merger takes place. For example, in the first stage,
objects 5 and 6 are merged at a distance of 0.149. From here onward, the resulting
cluster is labeled as indicated by the first object involved in this merger, which is
object 5. The last column on the very right tells you in which stage of the algorithm
this cluster will appear next. In this case, this happens in the second step, where it is
merged with object 7 at a distance of 0.184. The resulting cluster is still labeled 5,
and so on. Similar information is provided by the icicle diagram shown in
Fig. 9.17. Its name stems from the analogy to rows of icicles hanging from the eaves
of a house. The diagram is read from the bottom to the top, therefore the columns
correspond to the objects being clustered, and the rows represent the number of
clusters.
[Fig. 9.18: Scree plot of the agglomeration coefficients (distances) against the number of clusters (1–14)]
The scree plot, which we produced separately using Microsoft Excel (Fig. 9.18), does not show such a distinct break. Note that, unlike in factor analysis, we do not pick the solution with one cluster less than indicated by the elbow. The sharp increase in distance when switching from a one to a two-cluster solution occurs in almost all analyses and must not be viewed as a reliable indicator for the decision regarding the number of segments.
The scree plot in Fig. 9.18 shows that there is no clear elbow indicating a suitable
number of clusters to retain. Based on the results, one could argue for a five-segment
or six-segment solution. However, considering that there are merely 15 objects in the
dataset, this seems too many, as we then have very small (and, most probably,
meaningless) clusters. Consequently, a two, three or four-segment solution is
deemed more appropriate.
Let's take a look at the dendrogram shown in Fig. 9.19. We read the dendrogram from left to right. The vertical lines indicate where objects and clusters are joined together; their position indicates the distance at which this merger takes place. When creating a dendrogram, SPSS rescales the distances to a range of 0–25; that is, the last merging step to a one-cluster solution takes place at a (rescaled) distance of 25. Note that this differs from our manual calculation shown in Fig. 9.9, where we did not do any rescaling! Again, the analysis only provides rough guidance regarding the number of segments to retain. The change in distances between the mergers indicates that (besides a two-segment solution) both a three and a four-segment solution are appropriate.
To clarify this issue, let's re-run the analysis, but this time we pre-specify dif-
ferent segment numbers to compare these with regard to content validity. To do
so, just re-run the analysis using hierarchical clustering. Now switch to the Save
option, specify a range of solutions from 2 to 4 and run the analysis. SPSS generates
the same output but also adds three additional variables to your dataset (CLU4_1, CLU3_1, and CLU2_1), which reflect each object's cluster membership for the respective analysis. SPSS automatically places CLU in front, followed by the number of clusters (4, 3, or 2), to identify each object's cluster membership. The results are
Table 9.14 Cluster memberships
Name                          Member of cluster
                              Four-cluster solution   Three-cluster solution   Two-cluster solution
Kia Picanto 1.1 Start 1 1 1
Suzuki Splash 1.0 1 1 1
Renault Clio 1.2 1 1 1
Dacia Sandero 1.6 1 1 1
Fiat Grande Punto 1.4 1 1 1
Peugeot 207 1.4 1 1 1
Renault Clio 1.6 1 1 1
Porsche Cayman 2 2 2
Nissan 350Z 3 2 2
Mercedes C 200 CDI 4 3 2
VW Passat Variant 2.0 4 3 2
Skoda Octavia 2.0 4 3 2
Mercedes E 280 4 3 2
Audi A6 2.4 4 3 2
BMW 525i 4 3 2
illustrated in Table 9.14. SPSS does not produce this table for us, so we need to
enter these cluster memberships ourselves in a table or spreadsheet.
When we view the results, a three-segment solution appears promising. In this
solution, the first segment comprises compact cars, whereas the second segment
contains sports cars, and the third limousines. Increasing the solution by one
segment would further split up the sports cars segment into two sub-segments.
This does not appear to be very helpful, as now two of the four segments comprise
only one object. This underlines the single linkage method's tendency to identify outlier objects, in this case the Nissan 350Z and the Porsche Cayman. In this specific example, the Nissan 350Z and Porsche Cayman should not be regarded as outliers in a classical sense but rather as those cars which may be Tesla's main competitors in the sports car market.
In contrast, the two-segment solution appears to be rather imprecise considering
the vast differences in the mix of sports and middle-sized cars in this solution.
To get a better overview of the results, let's examine the cluster centroids; that is, the mean values of the objects contained in the cluster on selected variables. To do so, we split up the dataset using the Split File command (Data → Split File) (see Chap. 5). This enables us to analyze the data on the basis of a grouping variable's values. In this case, we choose CLU3_1 as the grouping variable and select the option Compare groups. Subsequently, we calculate descriptive statistics (Analyze → Descriptive Statistics → Descriptives, also see Chap. 5) and calculate the mean, minimum and maximum values, as well as the standard deviations of the clustering variables.
Table 9.15 shows the results for the variables weight, speed, and acceleration.
From the descriptive statistics, it seems that the first segment contains light-weight
compact cars (with a lower maximum speed and acceleration). In contrast, the second
segment comprises two sports cars with greater speed and acceleration, whereas the
third contains limousines with an increased weight and intermediate speed and
acceleration. Since the descriptives do not tell us whether these differences are significant, we could have used a one-way ANOVA (menu Analyze → Compare Means → One-Way ANOVA) to compare the cluster means and test the differences formally.
In the next step, we want to use the k-means method on the data. We have previously
seen that we need to specify the number of segments when conducting k-means
clustering. SPSS then initiates cluster centers and assigns objects to the clusters
based on their minimum distance to these centers. Instead of letting SPSS choose the
centers, we can also save the centroids (cluster centers) from our previous analysis as
input for the k-means procedure. To do this, we need to do some data management in
SPSS, as the cluster centers have to be supplied in a specific format. Conse-
quently, we need to aggregate the data first (briefly introduced in Chap. 5).
By selecting Data → Aggregate, a dialog box similar to Fig. 9.20 opens up.
Note that we choose Display Variable Names instead of Display Variable Labels
by clicking the right mouse button on the left box showing the variables in the
dataset. Now we proceed by choosing the cluster membership variable (CLU3_1)
as a break variable and move the moment, width, weight, trunk, speed, and
acceleration variables into the Summaries of Variable(s) box. When using the default settings, SPSS computes the variables' mean values along the lines of the break variable (indicated by the postfix _mean, which is added to each aggregate variable's name), which corresponds to the cluster centers that we need for the k-means analysis. You can change each aggregate variable's name by removing the postfix _mean using the Name & Label option if you want to.
Lastly, we do not want to add the aggregated variables to the active dataset, but rather
need to create a new dataset comprising only the aggregated variables. You must
therefore check this under SAVE and specify a dataset label such as aggregate. When
clicking on OK, a new dataset labeled aggregate is created and opened automatically.
The new dataset is almost in the right format but we still need to change the
break variables name from CLU3_1 to cluster_ (SPSS will issue a warning but this
can be safely ignored). The final dataset should have the form shown in Fig. 9.21.
Now let's proceed by using k-means clustering. Make sure that you open the original dataset and go to Analyze → Classify → K-Means Cluster, which brings up a new dialog box (Fig. 9.22).
As you did in the hierarchical clustering analysis, move the six clustering
variables to the Variables box and specify the case labels (variable name). To
use the cluster centers from our previous analysis, check the box Read initial and
click on Open dataset. You can now choose the dataset labeled aggregate. Specify
3, which corresponds to the result of the hierarchical clustering analysis, in the
Number of Clusters box. The Iterate option is of less interest to us. Instead, click
on Save and check the box Cluster Membership. This creates a new variable indicating each object's final cluster membership. SPSS indicates whether each observation is a member of cluster 1, 2, or 3. Under Options, you can request
several statistics and specify how missing values should be treated. Ensure that you
request the initial cluster centers as well as the ANOVA table and that you exclude
the missing values listwise (default). Now start the analysis.
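The two-stage logic of this example (hierarchical clustering to fix the number of clusters and its centroids, then k-means started from those centroids) can be mirrored in Python as in the hypothetical sketch below; X stands for the standardized matrix of the six clustering variables.

import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

def hierarchical_then_kmeans(X, k=3):
    # Step 1: hierarchical (single linkage) solution with k clusters
    labels = fcluster(linkage(X, method="single"), t=k, criterion="maxclust")
    # Step 2: its centroids serve as the initial cluster centers for k-means
    centroids = pd.DataFrame(X).groupby(labels).mean().to_numpy()
    return KMeans(n_clusters=k, init=centroids, n_init=1).fit(X)   # n_init=1: keep the supplied centers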
The k-means procedure generates Tables 9.16 and 9.17, which show the initial
and final cluster centers. As you can see, these are identical (also compare
Fig. 9.21), which indicates that the initial partitioning of the objects in the first step of the k-means procedure was retained during the analysis. This means that the k-means procedure did not re-assign any of the objects to a different cluster.
Table 9.16 Initial cluster centers
              Cluster
              1        2        3
moment        117      347      282
width         1699     1808     1814
weight        1116     1475     1560
trunk         249      323      543
speed         170      263      224
acceleration  12.96    5.60     9.12

[Table 9.17: Final cluster centers (identical to the initial cluster centers), followed by the ANOVA table with columns Cluster Mean Square, df, Error Mean Square, df, F, and Sig.]
Since we used the prior analysis results from hierarchical clustering as an input for the k-means procedure, selecting the correct number of segments is not an issue in this example. As discussed above, we could have
also used the VRC to make that decision. In the Web Appendix (→ Chap. 9), we present a VRC application to this example.
As a last step of the analysis, we conduct a two-step clustering approach. First, go to Analyze → Classify → Two-Step Cluster. A new dialog box is opened, similar to that shown in Fig. 9.23.
Move the variables we used in the previous analyses to the Continuous Vari-
ables box.
The Distance Measure box determines how the distance between two objects or
clusters is computed. While Log-likelihood can be used for categorical and contin-
uous variables, the Euclidean distance can only be applied when all of the variables
are continuous. Unless your dataset contains categorical variables (e.g., gender) you
should choose the Euclidean distance measure, as this generally provides better
results. If you use ordinal variables and therefore use the Log-likelihood procedure,
check that the answer categories are equidistant. In our dataset, all variables are
continuous, therefore select the second option, namely Euclidean.
[Fig. 9.24: Model summary (Algorithm: TwoStep; Input features: 6; Clusters: 2) and cluster quality chart]
The lower part of the output (Fig. 9.24) indicates the quality of the cluster solution. The silhouette measure of cohesion and separation is a measure of the clustering solution's overall goodness-of-fit. It is essentially based on the average distances between the objects and can vary between −1 and +1. Specifically, a silhouette measure of less than 0.20 indicates a poor solution quality, a measure between 0.20 and 0.50 a fair solution, whereas values of more than 0.50 indicate a good solution (this is also indicated under the horizontal bar in Fig. 9.24). In our case, the measure indicates a satisfactory cluster quality. Consequently, you can
proceed with the analysis by double-clicking on the output. This will open up the
model viewer (Fig. 9.25), an evaluation tool that graphically presents the structure
of the revealed clusters.
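The average silhouette value that SPSS reports can be approximated with scikit-learn's silhouette_score, as in this hypothetical sketch (X holds the clustering variables, labels the two-step cluster memberships); the SPSS measure is a simplified variant based on distances to the cluster centers, so the values need not coincide exactly.

from sklearn.metrics import silhouette_score

def silhouette_quality(X, labels):
    score = silhouette_score(X, labels, metric="euclidean")   # varies between -1 and +1
    quality = "poor" if score < 0.20 else ("fair" if score <= 0.50 else "good")
    return score, quality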
The model viewer provides us with two windows: the main view, which initially
shows a model summary (left-hand side), and an auxiliary view, which initially
features the cluster sizes (right-hand side). At the bottom of each window, you can
request different information, such as an overview of the cluster structure and the
overall variable importance as shown in Fig. 9.25.
In the main view, we can see a description of the two clusters, including their (relative) sizes. Furthermore, the output shows each clustering variable's mean values across the two clusters as well as their relative importance. Darker shades (i.e., higher values in feature importance) denote the variable's greater importance
for the clustering solution. Comparing the results, we can see that moment is the
most important variable for each of the clusters, followed by weight, speed, width,
acceleration, and trunk. Clicking on one of the boxes will show a graph with
the frequency distribution of each cluster. The auxiliary view shows an overview of the variables' overall importance for the clustering solution, which provides the same result as the cluster-specific analysis. The model viewer provides us with additional options for visualizing the results or comparing clustering solutions. It is worthwhile to simply play around with the different self-explanatory options. So go ahead and explore the model viewer's features yourself!
Case Study
Facing dramatically declining sales and decreased turnover, retailers such as Saks Fifth Avenue and JCPenney are rethinking their pricing strategies, scaling back inventories, and improving the fashion content. Men's accessories are one of the bright spots and Saks Fifth Avenue has jumped on the trend with three recently opened shops prominently featuring this category. The largest men's store opened in Beverly Hills in late 2008 and stocks top brands in jewelry, watches, sunglasses, and leather goods. By providing a better showcase for men's accessories, Saks aims at strengthening its position in a market that is often neglected in the larger department store arena. This is because the men's accessories business generally requires expertise in buying, since this typically involves small, artisan vendors, an investment many department stores are not willing to make.
The Beverly Hills store was chosen to spearhead the accessories program because it is considered the company's West Coast flagship and the department had not had a significant facelift since the store opened in 1995.13
Saks's strategy seemed to be successful if one considers that the newly opened boutiques already exerted an impact on sales during their first holiday season. However, before opening accessories shops in any other existing Saks stores, the company wanted to gain further insights into their customers' preferences. Consequently, a survey was conducted among visitors of the Beverly Hills store to gain a deeper understanding of their attitudes to buying and shopping. Overall, 180 respondents were interviewed using mall-intercept interviewing. The respondents were asked to indicate the importance of the following factors when buying products and services using a 5-point scale (1 = not at all important, 5 = very important):
• Saving time (x1)
• Getting bargains (x2)
• Getting products that aren't on the high street (x3)
• Trying new things (x4)
• Being aware of what companies have to offer (x5)
The resulting dataset Buying Attitudes.sav (Web Appendix → Chap. 9) also includes each respondent's gender and monthly disposable income.14
1. Given the levels of measurement, which clustering method would you prefer?
Carry out a cluster analysis using this procedure.
2. Interpret and profile the obtained clusters by examining cluster centroids.
Compare differences across clusters on observed variables using ANOVA and
post-hoc tests (see Chap. 6).
3. Use a different clustering method to test the stability of your results. If necessary,
omit or rescale certain variables.
4. Based on your evaluation of the dataset, make recommendations to the management of Saks's Beverly Hills store.
Questions
1. In your own words, explain the objective and basic concept of cluster analysis.
2. What are the differences between hierarchical and partitioning methods? When
do we use hierarchical or partitioning methods?
3. Run the k-means analysis again from the example application (Cars.sav, Web Appendix → Chap. 9). Compute a three-segment solution and compare the results with those obtained by the initial hierarchical clustering.
13 For further information, see Palmieri JE (2008). Saks Adds Men's Accessories Shops, Women's Wear Daily, 196 (128), 14.
14 Note that the data are artificial.
4. Run the k-means analysis again from the example application (Cars.sav, Web Appendix → Chap. 9). Use a factor analysis considering all nine variables and perform a cluster analysis on the resulting factor scores (factor-cluster segmentation). Interpret the results and compare these with the initial analysis.
5. Repeat the manual calculations of the hierarchical clustering procedure from the
beginning of the chapter, but use complete or average linkage as clustering
method. Compare the results with those of the single linkage method.
6. Make a list of the market segments to which you belong! What clustering
variables did you take into consideration when you placed yourself in those
segments?
Further Readings
analysis techniques and provide sample applications. Probably the most comprehen-
sive text in the market.
References
Andrews RL, Currim IS (2003) Recovering and profiling the true segmentation structure in markets: an empirical investigation. Int J Res Mark 20(2):177–192
Arabie P, Hubert L (1994) Cluster analysis in marketing research. In: Bagozzi RP (ed) Advanced methods in marketing research. Blackwell, Cambridge, pp 160–189
Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin
Calinski T, Harabasz J (1974) A dendrite method for cluster analysis. Commun Stat Theory Methods 3(1):1–27
Chiu T, Fang D, Chen J, Wang Y, Jeris C (2001) A robust and scalable clustering algorithm for mixed type attributes in large database environment. In: Proceedings of the 7th ACM SIGKDD international conference on knowledge discovery and data mining, Association for Computing Machinery, San Francisco, CA, pp 263–268
Dibb S (1999) Criteria guiding segmentation implementation: reviewing the evidence. J Strateg Mark 7(2):107–129
Dolnicar S (2003) Using cluster analysis for market segmentation: typical misconceptions, established methodological weaknesses and some recommendations for improvement. Australas J Mark Res 11(2):5–12
Dolnicar S, Grün B (2009) Challenging factor-cluster segmentation. J Travel Res 47(1):63–71
Dolnicar S, Lazarevski K (2009) Methodological reasons for the theory/practice divide in market segmentation. J Mark Manage 25(3–4):357–373
Formann AK (1984) Die Latent-Class-Analyse: Einführung in die Theorie und Anwendung. Beltz, Weinheim
Kaufman L, Rousseeuw PJ (2005) Finding groups in data. An introduction to cluster analysis. Wiley, Hoboken, NJ
Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biol Cybern 43(1):59–69
Kotler P, Keller KL (2009) Marketing management, 13th edn. Pearson Prentice Hall, Upper Saddle River, NJ
Larson JS, Bradlow ET, Fader PS (2005) An exploratory look at supermarket shopping paths. Int J Res Mark 22(4):395–414
McLachlan GJ, Peel D (2000) Finite mixture models. Wiley, New York, NY
Milligan GW, Cooper M (1985) An examination of procedures for determining the number of clusters in a data set. Psychometrika 50(2):159–179
Milligan GW, Cooper M (1988) A study of variable standardization. J Classification 5(2):181–204
Moroko L, Uncles MD (2009) Employer branding and market segmentation. J Brand Manage 17(3):181–196
Okazaki S (2006) What do we know about mobile Internet adopters? A cluster analysis. Inf Manage 43(2):127–141
Punji G, Stewart DW (1983) Cluster analysis in marketing research: review and suggestions for application. J Mark Res 20(2):134–148
Sheppard A (1996) The sequence of factor analysis and cluster analysis: differences in segmentation and dimensionality through the use of raw and factor scores. Tourism Anal 1(Inaugural Volume):49–57
Tonks DG (2009) Validity and the design of market segments. J Mark Manage 25(3–4):341–356
Wedel M, Kamakura WA (2000) Market segmentation: conceptual and methodological foundations, 2nd edn. Kluwer, Boston, MA