
Excerpt from Multi-criteria decision analysis for use in transport decision making, DTU Transport Compendium Series, part 2, 2014.

The Simple Multi Attribute Rating Technique (SMART)
The SMART technique is based on a linear additive model. This means that the overall value of a given
alternative is calculated as the total sum of the performance score (value) on each criterion (attribute)
multiplied by the weight of that criterion.
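With $w_i$ denoting the weight of criterion $i$ and $v_i(a)$ the score (value) of alternative $a$ on criterion $i$, the model can be written as:

$$V(a) = \sum_{i=1}^{n} w_i \, v_i(a)$$

where $n$ is the number of criteria; the alternative with the highest overall value $V(a)$ is preferred.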
The main stages in the analysis are (adapted from Olson (1996)):

Stage 1: Identify the decision-maker(s)

Stage 2: Identify the issue or issues: Utility depends on the context and purpose of the decision

Stage 3: Identify the alternatives: This step would identify the outcomes of possible actions, a data
gathering process.

Stage 4: Identify the criteria: It is important to limit the dimensions of value. This can be
accomplished by restating and combining criteria, or by omitting less important criteria. It has been
argued that it is not necessary to have a complete list of criteria: fifteen were considered too
many, and eight was considered sufficiently large. If the weight for a particular criterion is quite
low, that criterion need not be included. There is no precise range for the number of criteria
appropriate to a decision.

Stage 5: Assign values for each criterion: For decisions made by one person, this step is fairly
straightforward; ranking, for instance, is an easier decision task than developing weights. This
task is usually more difficult in group environments. However, groups including diverse opinions
can result in a more thorough analysis of relative importance, as all sides of the issue are more
likely to be voiced. An initial discussion could provide all group members with a common
information base. This could be followed by identification of individual judgments of relative
ranking.

Stage 6: Determine the weight of each of the criteria: The most important dimension would be
assigned an importance of 100. The next-most-important dimension is assigned a number reflecting
the ratio of relative importance to the most important dimension. This process is continued,
checking implied ratios as each new judgment is made. Since this requires a growing number of
comparisons there is a very practical need to limit the number of dimensions (objectives). It is
expected that different individuals in the group would have different relative ratings.

Stage 7: Calculate a weighted average of the values assigned to each alternative: This step
normalizes the relative importance ratings into weights summing to 1 and combines them with the
scores (see the sketch after this list).
Stage 8: Make a provisional decision

Stage 9: Perform sensitivity analysis
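
To make stages 6 and 7 concrete, the following minimal Python sketch uses invented ratio-scale importance ratings and scores for two hypothetical road layouts; all numbers are purely illustrative.

```python
# Minimal sketch of SMART stages 6-7, with hypothetical numbers.

# Stage 6: ratio-scale importance ratings; the most important
# criterion is assigned 100, the others are rated relative to it.
ratings = {"cost": 100, "travel time": 60, "safety": 40}

# Stage 7: normalize the ratings into weights summing to 1 ...
total = sum(ratings.values())
weights = {c: r / total for c, r in ratings.items()}  # 0.5, 0.3, 0.2

# ... and compute each alternative's overall value as the weighted
# sum of its scores (here given on a common 0-100 scale).
scores = {
    "layout A": {"cost": 80, "travel time": 50, "safety": 60},
    "layout B": {"cost": 40, "travel time": 90, "safety": 70},
}
for alt, s in scores.items():
    print(alt, sum(weights[c] * s[c] for c in weights))
# layout A 67.0
# layout B 61.0
```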

In SMART, ratings of alternatives are assigned directly, in the natural scales of the criteria. For instance,
when assessing the criterion "cost" for the choice between different road layouts, a natural scale would be
a range between the most expensive and the cheapest road layout. In order to keep the weighting of the
criteria and the rating of the alternatives as separate as possible, the different scales of criteria need to be
converted into a common internal scale. In SMART, this is done mathematically by the decision-maker by
means of a value function. The simplest and most widely used form of value function method is the
additive model, which in the simplest cases can be applied using a linear scale (e.g. going from 0 to
100).
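
As a small illustration, a linear value function for a cost criterion might look as follows in Python; the cost figures, and the assumption that the most expensive layout defines the bottom of the scale, are hypothetical.

```python
def linear_value(x, worst, best):
    """Linearly rescale a natural-scale measurement x onto the common
    0-100 scale, mapping the worst level to 0 and the best to 100."""
    return 100 * (x - worst) / (best - worst)

# Hypothetical cost criterion (million DKK): lower cost is better,
# so the most expensive layout defines the 0 point of the scale.
print(linear_value(12.0, worst=20.0, best=8.0))  # 66.66...
```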

SMART Exploiting Ranks (SMARTER)


The assessment of value functions and swing weights in SMART can sometimes be a difficult task, and
decision-makers may not always be confident about it. Because of this, Edwards and Barron have suggested
a simplified form of SMART named SMARTER (SMART Exploiting Ranks) (Roberts and Goodwin, 2002).
Using the SMARTER technique the decision-maker places the criteria into an importance order: for
example, Criterion 1 is more important than Criterion 2, which is more important than Criterion 3, which is
more important than Criterion 4, and so on: C1 > C2 > C3 > C4 > ... SMARTER then assigns surrogate weights
according to the Rank Order Distribution method or one of the similar methods described below.

Barron and Barrett (1996) believe that generated weights may be more precise than weights produced by
decision-makers, who may be more comfortable and confident with a simple ranking of the importance
of each criterion swing, especially if it represents the considered outcome of a group of decision-makers.
Therefore a number of methods have been developed that enable the ranking to be translated into surrogate
weights representing an approximation of the true weights. A few of these methods are described below.
Here $w_i > 0$ are weights reflecting the relative importance of the ranges of the criteria values, where
$\sum_{i=1}^{n} w_i = 1$, $i = 1, \dots, n$ is the rank of the criterion, and $n$ is the number of criteria in the
decision problem.

Rank order centroid (ROC) weights: The ROC weights are defined by (Roberts and Goodwin, 2002):

$$w_i = \frac{1}{n} \sum_{j=i}^{n} \frac{1}{j}, \quad i = 1, \dots, n$$

Rank sum (RS) weights: The RS weights are the individual ranks normalized by dividing by the sum of the
ranks. The RS weights are defined by (ibid.):

$$w_i = \frac{n + 1 - i}{n(n+1)/2}, \quad i = 1, \dots, n$$

Rank reciprocal (RR) weights: This method uses the reciprocals of the ranks, which are normalized by dividing
each term by the sum of the reciprocals. The RR weights are defined by (ibid.):

$$w_i = \frac{1/i}{\sum_{j=1}^{n} 1/j}, \quad i = 1, \dots, n$$



For each of these methods, the corresponding weights for each rank, for numbers of criteria ranging from
n = 2 to 10, are listed in Tables 0.1 to 0.3.
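
For reference, the three formulas can be implemented directly; the short Python sketch below reproduces, for example, the n = 3 columns of the tables.

```python
from fractions import Fraction

def roc_weights(n):
    """Rank order centroid: w_i = (1/n) * sum_{j=i..n} 1/j."""
    return [sum(Fraction(1, j) for j in range(i, n + 1)) / n
            for i in range(1, n + 1)]

def rs_weights(n):
    """Rank sum: w_i = (n + 1 - i) / (n(n+1)/2)."""
    return [Fraction(2 * (n + 1 - i), n * (n + 1)) for i in range(1, n + 1)]

def rr_weights(n):
    """Rank reciprocal: w_i = (1/i) / sum_{j=1..n} 1/j."""
    s = sum(Fraction(1, j) for j in range(1, n + 1))
    return [Fraction(1, i) / s for i in range(1, n + 1)]

# Reproduce the n = 3 columns of Tables 0.1-0.3.
for name, f in [("ROC", roc_weights), ("RS", rs_weights), ("RR", rr_weights)]:
    print(name, [f"{float(w):.4f}" for w in f(3)])
# ROC ['0.6111', '0.2778', '0.1111']
# RS ['0.5000', '0.3333', '0.1667']
# RR ['0.5455', '0.2727', '0.1818']
```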

Table 0.1: Rank order centroid (ROC) weights (Roberts and Goodwin, 2002)

Number of criteria (n)
Rank 2 3 4 5 6 7 8 9 10
1 0.7500 0.6111 0.5208 0.4567 0.4083 0.3704 0.3397 0.3143 0.2929
2 0.2500 0.2778 0.2708 0.2567 0.2417 0.2276 0.2147 0.2032 0.1929
3 0.1111 0.1458 0.1567 0.1583 0.1561 0.1522 0.1477 0.1429
4 0.0625 0.0900 0.1028 0.1085 0.1106 0.1106 0.1096
5 0.0400 0.0611 0.0728 0.0793 0.0828 0.0846
6 0.0278 0.0442 0.0543 0.0606 0.0646
7 0.0204 0.0334 0.0421 0.0479
8 0.0156 0.0262 0.0336
9 0.0123 0.0211
10 0.0100

Table 0.2: Rank sum (RS) weights (Roberts and Goodwin, 2002)

Number of criteria (n)
Rank 2 3 4 5 6 7 8 9 10
1 0.6667 0.5000 0.4000 0.3333 0.2857 0.2500 0.2222 0.2000 0.1818
2 0.3333 0.3333 0.3000 0.2667 0.2381 0.2143 0.1944 0.1778 0.1636
3 0.1667 0.2000 0.2000 0.1905 0.1786 0.1667 0.1556 0.1455
4 0.1000 0.1333 0.1429 0.1429 0.1389 0.1333 0.1273
5 0.0667 0.0952 0.1071 0.1111 0.1111 0.1091
6 0.0476 0.0714 0.0833 0.0889 0.0909
7 0.0357 0.0556 0.0667 0.0727
8 0.0278 0.0444 0.0545
9 0.0222 0.0364
10 0.0182
Table 0.3: Rank reciprocal (RR) weights (Roberts and Goodwin, 2002)

Number of criteria (n)
Rank 2 3 4 5 6 7 8 9 10
1 0.6667 0.5455 0.4800 0.4379 0.4082 0.3857 0.3679 0.3535 0.3414
2 0.3333 0.2727 0.2400 0.2190 0.2041 0.1928 0.1840 0.1767 0.1707
3 0.1818 0.1600 0.1460 0.1361 0.1286 0.1226 0.1178 0.1138
4 0.1200 0.1095 0.1020 0.0964 0.0920 0.0884 0.0854
5 0.0876 0.0816 0.0771 0.0736 0.0707 0.0682
6 0.0680 0.0643 0.0613 0.0589 0.0569
7 0.0551 0.0525 0.0505 0.0488
8 0.0460 0.0442 0.0427
9 0.0393 0.0379
10 0.0341

Rank order distribution (ROD) is a weight approximation method that assumes that valid weights can be
elicited through direct rating. In the direct rating method the most important criterion is assigned a weight
of 100 and the importance of the other criteria is then assessed relative to this benchmark. The raw
weights, $w_i$, obtained in this way are then normalized to sum to 1. Assuming that all criteria have some
importance, the ranges of the possible raw weights will be:

$$w_1 = 100, \quad 0 < w_2 \le 100, \quad 0 < w_3 \le w_2$$

And in general:

$$0 < w_i \le w_{i-1}, \quad \text{where } i > 1$$

These ranges can be approximated by representing all of the inequalities as less-than-or-equal-to
expressions. The uncertainty about the true weights can then be represented by assuming a uniform
distribution for them. To determine ROD weights for general problems it is necessary to consider the
probability distributions for the normalized weights that follow from the assumptions about the
distributions of the raw weights. For n > 2 the density functions are a series of piecewise equations.

The means of each rank order distribution (ROD) for n = 2 to 10 have been derived mathematically and are
displayed in Table 0.4. For further information about the underlying calculations, see Roberts and Goodwin
(2002).
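As a rough illustration of where these values come from, the Monte Carlo sketch below approximates the ROD weights by sampling raw weights uniformly subject to the rank-order constraints above and averaging the normalized weights; this is a simulation under the stated assumptions, not the analytical derivation used by Roberts and Goodwin (2002).

```python
import random

def rod_weights_mc(n, samples=200_000):
    """Approximate ROD weights by simulation: w_1 = 100, the remaining
    raw weights are uniform on (0, 100] subject to w_2 >= ... >= w_n
    (drawn by sorting i.i.d. uniforms), then normalized and averaged."""
    means = [0.0] * n
    for _ in range(samples):
        raw = [100.0] + sorted((random.uniform(0, 100) for _ in range(n - 1)),
                               reverse=True)
        total = sum(raw)
        for i, w in enumerate(raw):
            means[i] += w / total / samples
    return means

print([f"{w:.4f}" for w in rod_weights_mc(3)])
# approximately ['0.5232', '0.3240', '0.1528'] (cf. Table 0.4, n = 3)
```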
Table 0.4: Rank order distribution (ROD) weights (Roberts and Goodwin, 2002)

Number of criteria (n)
Rank 2 3 4 5 6 7 8 9 10
1 0.6932 0.5232 0.4180 0.3471 0.2966 0.2590 0.2292 0.2058 0.1867
2 0.3068 0.3240 0.2986 0.2686 0.2410 0.2174 0.1977 0.1808 0.1667
3 0.1528 0.1912 0.1955 0.1884 0.1781 0.1672 0.1565 0.1466
4 0.0922 0.1269 0.1387 0.1406 0.1375 0.1332 0.1271
5 0.0619 0.0908 0.1038 0.1084 0.1095 0.1081
6 0.0445 0.0679 0.0805 0.0867 0.0893
7 0.0334 0.0531 0.0644 0.0709
8 0.0263 0.0425 0.0527
9 0.0211 0.0349
10 0.0173

A graphical comparison of the ROD, ROC and RS weights for 9 criteria can be seen in Figure 0.1 (Roberts and
Goodwin, 2002).

[Line plot of Weight (0.00-0.35) against Rank (1-9) for the RS, ROC and ROD weighting methods.]
Figure 0.1: Comparison of weights for 9 attributes (Roberts and Goodwin, 2002)

There is a very close match between the ROD and RS weights, whatever the number of criteria. Indeed, in
general the ROD weights tend towards the RS weights as the number of criteria increases. Thus, given that
ROD weights are difficult to calculate when the number of attributes is large, a practical solution is to use
RS weights for problems with many criteria. The ROC weights depart markedly from both the RS and ROD
weights.

The figure also demonstrates another benefit of using ROD instead of ROC weights. ROC weights are
extreme in that the ratio of the highest to the lowest weight is so large that the lowest ranked criterion
will have only a very marginal influence on the decision. In practice, criteria with a relative importance as
low as this would usually be eliminated from the decision model. The use of ROD weights goes some way
towards reducing this extreme value problem. However, it can be argued that the inclusion of criteria with
very low weights, e.g. 0.02, does not contribute in any way to the overall result, and such criteria should
therefore be omitted from the analysis. For a discussion of this see Barfod et al. (2011).

Pros and cons of SMART


Pros: The structure of the SMART method is similar to that of the traditional CBA in that the total value is
calculated as a weighted sum of the impact scores. In the CBA the unit prices act as weights and the
impact scores are the quantified (not normalized) CBA impacts. This close relationship to the well-
accepted CBA method is appealing and makes the method easier to grasp for the decision-maker.

Cons: In a screening phase, where some poorly performing alternatives are rejected leaving a subset of
alternatives to be considered in more detail, the SMART method is not always the right choice. This is
because, as noted by Hobbs and Meier (2000), SMART tends to oversimplify the problem if used as a
screening method, as the top few alternatives are often very similar. Instead, different weight profiles
should be used, and alternatives that perform well under each weight profile should be picked out for
further analysis; this also helps identify the most robust alternatives. The SMART method also places
rather high demands on the level of detail in the input data: value functions need to be assessed for each
of the lowest-level attributes, and weights should be given as trade-offs between the criteria.

In SMART analysis the direct rating method of selecting raw weights is normally used, as it is cognitively
simpler and therefore is assumed to yield more consistent and accurate judgments from the decision-
maker. These raw weights are then normalized, and this normalization process yields different theoretical
distributions for the ranks. The means of these distributions are the ROD weights.

The formulae for the distributions of the ROD weights become progressively more complex as the number
of criteria increases. Since the RS weights are easy to calculate and closely match the ROD weights for
higher numbers of criteria, it is recommended to use RS weights when working with problems involving
large numbers of criteria, and in cases where it can be assumed that the appropriate alternative method
for eliciting the true weights would have been the direct rating method.
