17 August 1999

PRIME: AN INTRODUCTION AND ASSESSMENT

Janne Gustafsson Systems Analysis Laboratory Helsinki University of Technology

TABLE OF CONTENTS

1. INTRODUCTION
2. VALUE TREES
   2.1 Concepts and Terminology
   2.2 On Value Theory
       2.2.1 Preference Relations
       2.2.2 Ordinal and Cardinal Value Functions
       2.2.3 Extension to Additive Multiattribute Models
       2.2.4 Normalisation and Worst and Best Achievement Levels
   2.3 Notations
3. THE PRIME METHOD
   3.1 Introduction to PRIME
   3.2 The Modelling Process
   3.3 Value Differences, Ratio Comparisons, and Linear Optimisation
   3.4 Score Elicitation and Holistic Comparisons
   3.5 Weight Assessment
   3.6 Normalisation Constraints
   3.7 Dominance Structures and Decision Rules
4. ANALYSIS OF THE PRIME METHOD
   4.1 Weaknesses of PRIME
       4.1.1 Implicit Value Functions
       4.1.2 The Effort Required to Solve a Model
       4.1.3 Absence of Sensitivity Analysis Features
   4.2 Strength of PRIME: Uncertainty
   4.3 Issues on Ordinal and Cardinal Ranking
   4.4 Value Tree Weighting Biases
   4.5 Whether to Use Top Down or Bottom Up Weighting
   4.6 Computational Aspect: Handling Single Achievement Levels
5. CASE STUDY: GM CROPS
   5.1 Introduction to the Study and the GM Crops
       5.1.1 The Purpose of This Study
       5.1.2 What is Genetical Modification?
       5.1.3 The GM Crops
   5.2 Construction of the Model
       5.2.1 Selection of the Object and Extent of the Study
       5.2.2 Identification of Attributes and Goals
       5.2.3 Identification of Alternatives
   5.3 Preference Elicitation: Applying PRIME
   5.4 Analysis of Results
6. CONCLUSIONS


1. Introduction

This paper concentrates on the preference elicitation method PRIME (Preference Ratios In Multiattribute Evaluation) and on an analysis of its benefits and drawbacks. PRIME has several advantages over standard decision making methods. It permits the decision maker (DM) to define a range of possible values for his or her judgements instead of forcing him or her to give only point estimates. From the given preference statements, PRIME calculates limits for the alternatives' possible values. PRIME formulates the decision maker's preferences as a set of linear constraints, and therefore linear and linear fractional programming techniques, such as Simplex, provide a natural means of solving PRIME models. Furthermore, linear programming techniques permit PRIME to uncover the superior alternatives in the model through specific dominance structures. These features make PRIME a unique tool for imprecise preference modelling.

The basic concepts of value trees and notations are introduced in the next section, and the basics of PRIME follow in Section 3. Section 4 considers some issues that arise from additive value trees and the PRIME method itself. Section 5 illustrates the method with an example.


2. Value Trees

2.1 Concepts and Terminology

Before considering the PRIME method itself, it is necessary to be familiar with the terminology used in this paper. Although the concepts are mostly common to many other decision analysis methods, the relationships between different concepts and the exact definitions of some notions demand clarification.

Decision analysis often starts with a problem. The problem is identified by some decision, which is usually a choice between available alternatives. Alternatives differ from each other in their properties. Plants, for example, have different sizes, crop yields, and, in particular, names. These properties are called attributes in decision analytic terminology. Attributes are captured by the decision analytic model.

The attributes are often structured as a tree, in which they constitute the leaves. On the other hand, nonleaf nodes of the tree are called goals. The root of the tree is the main goal of the model, and the tree itself is called the value tree. The main goal breaks down into separate subgoals, which in turn break down further until an attribute is reached at a leaf node. The purpose of this classification is to clarify relationships between different attributes.
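The tree structure described above can be sketched as a small data structure. The class, attribute names, and example tree below are illustrative assumptions, not part of PRIME or its implementations:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """A node of a value tree: a goal (nonleaf node) or an attribute (leaf)."""
    name: str
    children: List["Node"] = field(default_factory=list)

    def is_attribute(self) -> bool:
        # Leaves of the value tree are attributes; nonleaf nodes are goals.
        return not self.children

    def attributes(self) -> List["Node"]:
        # Collect the attributes (leaves) under this goal.
        if self.is_attribute():
            return [self]
        return [a for child in self.children for a in child.attributes()]

# A tiny example: the main goal breaks down into a subgoal and an attribute.
tree = Node("Best plant", [
    Node("Physical properties", [Node("size"), Node("crop yield")]),
    Node("name"),
])
print([a.name for a in tree.attributes()])  # ['size', 'crop yield', 'name']
```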

On each attribute, alternatives map into consequences. Consequences represent some of the achievement levels on that attribute. Achievement levels in turn are possible values that a consequence may assume. A consequence or an achievement level does not define its desirability for the decision maker as such. The amount of value associated with a consequence or an achievement level is called a score.

2.2 On Value Theory

This section concentrates on common issues regarding scores, value, the DM's preferences, and their connection with a real-valued value function. The purpose of this section is to take a quick glance at the basic axioms of value theory, which also form the basis of the PRIME method.


In Section 2.1 score was presented to be “an amount of value associated with a consequence”. A value function is something that defines that association between the consequence and its score. Value functions are mappings of a consequence space X into real numbers and they are denoted by v( ), such that v(x) is the value of consequence x.

Although it is straightforward to define an arbitrary function from X to the real numbers, it is more complex to associate the function with the preferences of a decision maker. Two aspects should be considered specifically. First, one has to find a sound description of the DM's preferences in mathematical terms. Second, if the model consists of multiple attributes, the implications of the preference relations on individual attributes for the preferences over entire alternatives should be explored.

2.2.1 Preference Relations

In decision analysis, the DM's preferences are modelled as relations between objects of a consequence space X. The following preference relations (defined on X) are used to describe the DM's preferences:

1. Strict preference:  x ≻ y   x is more preferred than y                       (2-1a)

2. Indifference:       x ~ y   x is equally preferred to y                      (2-1b)

3. Weak preference:    x ≽ y   x is better than or equally preferred to y       (2-1c)

where x and y belong to a set of objects X.

It is clear that to represent a rational decision maker's preferences the strict preference relation (2-1a) must be both transitive and asymmetric. Likewise, the indifference relation (2-1b) must satisfy transitivity, reflexivity, and symmetry. The relation (2-1c) can be thought of as a combination of the relations (2-1a) and (2-1b), just as ≥ combines > and =. Then (2-1c) must be complete and transitive, and consistency of indifference and weak preference and consistency of strict preference and weak preference must hold (see [8] for all these definitions).
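For a finite set of objects these axioms can be checked mechanically. A small sketch; the helper functions are my own illustration, not part of the paper or of [8]:

```python
def is_transitive(rel):
    # x R y and y R z must imply x R z.
    return all((x, z) in rel for (x, y) in rel for (y2, z) in rel if y == y2)

def is_asymmetric(rel):
    # x R y forbids y R x (required of strict preference (2-1a)).
    return all((y, x) not in rel for (x, y) in rel)

def is_symmetric(rel):
    # x R y implies y R x (required of indifference (2-1b)).
    return all((y, x) in rel for (x, y) in rel)

# Strict preference a over b over c, given as a set of ordered pairs.
strict = {("a", "b"), ("b", "c"), ("a", "c")}
print(is_transitive(strict), is_asymmetric(strict))  # True True
```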

5

2.2.2 Ordinal and Cardinal Value Functions

Let X be a set of consequences and x, y ∈ X. It can be shown that there exists an ordinal value function v( ) that satisfies the following equivalence (see [8])

x ≽ y ⇔ v(x) ≥ v(y)

(2-2)

Such value functions are unique up to positive affine transformations, i.e. one can multiply the function by a positive number and add an arbitrary number to it without violating (2-2). Thus, if v( ) satisfies (2-2), then so does

v'( ) = αv( ) + β, α > 0

(2-3)

Value theory also specifies the conditions for a preference relation ≽_e that determines the strength of the DM's preferences. The relation is defined on a space of pairs of consequences X_e = {(x ← y) | x, y ∈ X} that represent an exchange of y to receive x. It can be shown that there exists a cardinal (or measurable) value function v( ) that satisfies (2-2) and the following equivalence (see e.g. [8])

(x_1 ← x_2) ≽_e (y_1 ← y_2) ⇔ v(x_1) − v(x_2) ≥ v(y_1) − v(y_2)    (2-4)

2.2.3 Extension to Additive Multiattribute Models

Let v_i( ) be a value function on the attribute (consequence space) X_i. A value function v( ) is additive if it is of the form

v(x_1, x_2, ..., x_n) = ∑_{i=1}^{n} v_i(x_i)    (2-5)

where x_i ∈ X_i. Let x be a vector defined on X_I = ×_{i∈I} X_i, where I is an arbitrary nonempty set of attribute indices. Let us denote the corresponding additive value function with the following notation

v(x) = ∑_{i∈I} v_i(x_i).    (2-6)

It has been shown that there exists an additive value function which represents ≽ on X_I = ×_{i∈I} X_i in the sense that

x ≽ y ⇔ v(x) = ∑_{i∈I} v_i(x_i) ≥ ∑_{i∈I} v_i(y_i) = v(y)    (2-7)

if ≽ is a weak order (cf. (2-1c)), the X_i are mutually independent, restricted solvability holds for each attribute X_i, every strictly bounded standard sequence is finite, and every attribute is essential (see [8]).
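Under the additive forms (2-5) and (2-6), evaluating an alternative over an index set I is a plain sum of single-attribute scores. A minimal illustration; the attribute names and score functions below are invented:

```python
# Hypothetical single-attribute value functions v_i, keyed by attribute name.
v = {
    "size":  lambda s: s / 10.0,      # e.g. size in metres, scaled to [0, 1]
    "yield": lambda y: y / 1000.0,    # e.g. crop yield in kg, scaled to [0, 1]
}

def additive_value(x, I=None):
    # v(x) = sum over i in I of v_i(x_i), cf. (2-6); I defaults to all attributes.
    I = list(x) if I is None else I
    return sum(v[i](x[i]) for i in I)

x = {"size": 5.0, "yield": 800.0}
print(additive_value(x))              # about 1.3, i.e. 0.5 + 0.8
print(additive_value(x, ["size"]))    # restriction to a subset I of attributes
```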

2.2.4 Normalisation and Worst and Best Achievement Levels

It is customary to denote the most preferred element of X by x^* and the least preferred one by x^0. In addition, the value of the worst consequence is usually fixed to zero, i.e. v(x^0) = 0. A value function is said to be normalised if

v(x^0)^N = 0    (2-8a)
v(x^*)^N = 1    (2-8b)

where v( )^N denotes a normalised value function. The connection between normalised value functions and their nonnormalised counterparts is defined by a normalisation constant (i.e. the weight), denoted by w:

v(x^*) − v(x^0) = w (v(x^*)^N − v(x^0)^N)    (2-9)

which simplifies to

w = v(x^*).    (2-10)

The normalised form of an additive value function in a multiattribute model is

v(x)^N = ∑_{i=1}^{n} w_i v_i(x_i)^N,    (2-11)

where w_i is the weight of attribute i. It is clear that v(x)^N in (2-11) requires the following additional constraint:

∑_{i=1}^{n} w_i = 1.    (2-12)

Values of alternatives are usually normalised to range from 0 to 1, regardless of whether the values of individual consequences are normalised or not. By substituting (2-10) into (2-12) and summing over all attributes in the model we get a constraint standard to decision analytic multiattribute models:

∑_{i=1}^{N} v_i(x_i^*) = 1    (2-13)

where N is the number of all attributes in the model.
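Numerically, (2-10)–(2-13) tie the weights and the normalised scores together as follows. The three attributes and their best-level scores are invented for illustration:

```python
# Nonnormalised scores of the best achievement levels, v_i(x_i*).
# By (2-10) these equal the attribute weights w_i; (2-12)/(2-13)
# require them to sum to 1.
best = {"size": 0.5, "yield": 0.3, "name": 0.2}
w = dict(best)
assert abs(sum(w.values()) - 1.0) < 1e-12  # constraint (2-12)

def normalised_score(score, attr):
    # v_i(x_i)^N = v_i(x_i) / w_i, cf. (2-11); lies in [0, 1].
    return score / w[attr]

print(normalised_score(0.25, "size"))  # 0.5
```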

2.3 Notations

The notation introduced here is almost identical to the one used in [1], although some differences exist. The notations used throughout this paper are the following:

X_i            Consequence space of attribute i, i.e. attribute i itself.

X              Consequence space of all attributes:
               X = X_1 × X_2 × ... × X_N, where N is the number of attributes in the model.

x_i^j          The consequence of alternative j for attribute i.

x_i^*          The most preferred consequence for attribute i.

x_i^0          The least preferred consequence for attribute i.

x_•^j          An aggregate (a vector) of all consequences of alternative j, i.e. the corresponding alternative:
               x_•^j = (x_1^j, x_2^j, ..., x_N^j)^T ∈ X, where N is the number of attributes in the model.

x_•^*          An aggregate of the best consequences of all attributes:
               x_•^* = (x_1^*, x_2^*, ..., x_N^*)^T ∈ X.

x_•^0          An aggregate of the worst consequences of all attributes:
               x_•^0 = (x_1^0, x_2^0, ..., x_N^0)^T ∈ X.

x, x^j         An aggregate of achievement levels, or an aggregate of the consequences of alternative j, belonging to some arbitrary set of indices I:
               x = x^j = (x_1^j, x_2^j, ..., x_n^j)^T ∈ ×_{i∈I} X_i.

x^*            An aggregate of the best consequences of the attributes belonging to set I:
               x^* = (x_1^*, x_2^*, ..., x_n^*)^T ∈ ×_{i∈I} X_i.

x^0            An aggregate of the worst consequences of the attributes belonging to set I:
               x^0 = (x_1^0, x_2^0, ..., x_n^0)^T ∈ ×_{i∈I} X_i.

v_i(x_i^j)     The utility (or value) of the consequence of alternative j for attribute i, i.e. the corresponding score.

w_i            The weight of attribute i:
               w_i = v_i(x_i^*) − v_i(x_i^0) = v_i(x_i^*).

v_i(x_i^j)^N   The normalised score of alternative j for attribute i. In particular,
               v_i(x_i^j) = w_i v_i(x_i^j)^N, i.e. v_i(x_i^j)^N = v_i(x_i^j) / w_i = v_i(x_i^j) / v_i(x_i^*).
               Additionally, v_i(x_i^*)^N = v_i(x_i^*) / v_i(x_i^*) = 1.

v(x^j)         The utility (or value) of the consequences of alternative j for some goal defined by a set of indices I (cf. (2-6)); x^j is a vector that contains an element from each attribute indicated by I. In particular, v(x^0) = 0 for any set I, and v(x_•^*) = 1. The additive representation of v( ) is
               v(x^j) = ∑_{i∈I} v_i(x_i^j).

w              The weight of an aggregate of attributes:
               w = v(x^*) − v(x^0) = v(x^*).

v(x^j) − v(x^k)
               A value difference.

(v(x^j) − v(x^k)) / (v(x^l) − v(x^m))
               A ratio comparison of value differences. Note that a normalised score can be represented as a ratio comparison.


3. The PRIME Method

3.1 Introduction to PRIME

PRIME is a decision analytic method that allows the use of imprecise ratio statements in the elicitation of the decision maker's preferences. In PRIME, preferences in decision making problems are modelled as linear constraints of optimisation models. The resulting linear programs can be solved with well-known methods such as Simplex. The strengths of PRIME lie in the possibility of using as little preference data as possible and in the efficient modelling of nonnumeric consequences. In contrast to many other decision analysis methods, PRIME models can be solved even with missing weighting or scoring data. In addition, the PRIME method permits the DM to give lower and upper bounds for his or her preference statements instead of exact values only.

Like many other decision analysis methods, PRIME is based on additive value trees, which were discussed in Section 2. The next sections introduce the basics of the PRIME method itself and concentrate on the discussion of the standard preference elicitation steps of the method. In Section 4 I discuss issues that arise from the PRIME method.

3.2 The Modelling Process

In general, the modelling process can be divided into five steps (cf. [1]). The steps are:

(a) Definition of the purpose and extent of the study
(b) Creation of the value tree
    - attributes of the model
    - goals, if some exist
(c) Identification of alternatives
    - consequences for all attributes
(d) Elicitation of preferences (PRIME)
    1. Ordinal ranking
       - score assessment (rank scores in terms of importance)
       - weight assessment (rank attributes in terms of importance)
       - holistic comparisons (rank holistic scores in terms of importance)
    2. Cardinal judgements
       - score assessment (ratio comparisons of value differences)
       - weight assessment (SWING)
       - holistic comparisons (holistic ratio comparisons)
(e) Identification of the best alternative from the dominance structures, or by the decision rules (PRIME)

In practice the process is iterative, and progress in the later steps often leads to updates of the results of the previous steps. In the last two steps the PRIME method is deployed to elicit the decision maker's preferences and to identify the best alternative according to them. PRIME is an incremental process in which the elicitation is continued until the conditions for terminating the process are met. Usually, the PRIME process begins with ordinal ranking and proceeds to cardinal ranking if necessary. However, there are also different approaches, which are discussed in Section 4.3.

In PRIME, holistic comparisons are similar to score assessment. The term holistic comparison refers to a ratio comparison in which alternatives on some goal are compared to each other. Since similar ratio comparisons are deployed both in score assessment and in holistic comparisons, it is natural that the method handles them in the same way. In fact, all methods that apply to ranking scores apply to holistic comparisons. Holistic comparisons usually provide only additional information, and in most cases they need not be made.

3.3 Value Differences, Ratio Comparisons, and Linear Optimisation

In PRIME, all preference statements can be represented as ratio comparisons of value differences that become linear constraints in optimisation problems. The DM's preferences are elicited by asking him or her for upper and lower bounds on certain ratios of value differences. The statements to be made are of the form

L ≤ (v(x^j) − v(x^k)) / (v(x^l) − v(x^m)) ≤ U,    (3-1)

where the DM is asked to determine values for the lower bound L and the upper bound U. (3-1) yields two linear constraints. By multiplying the inequalities with the denominator of the ratio, (3-1) becomes

−v(x^j) + v(x^k) + L v(x^l) − L v(x^m) ≤ 0
 v(x^j) − v(x^k) − U v(x^l) + U v(x^m) ≤ 0    (3-2)
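The rewrite from (3-1) to (3-2) is mechanical, so the constraint rows can be generated automatically. A sketch; the variable layout (a flat vector of score variables indexed by j, k, l, m) is my own convention, not PRIME's:

```python
import numpy as np

def ratio_constraint_rows(j, k, l, m, L, U, n_vars):
    """Two rows A with A @ v <= 0 encoding L <= (v_j - v_k)/(v_l - v_m) <= U,
    assuming the denominator v_l - v_m is positive, cf. (3-2)."""
    lower = np.zeros(n_vars)
    np.add.at(lower, [j, k, l, m], [-1.0, 1.0, L, -L])   # -v_j + v_k + L v_l - L v_m <= 0
    upper = np.zeros(n_vars)
    np.add.at(upper, [j, k, l, m], [1.0, -1.0, -U, U])   #  v_j - v_k - U v_l + U v_m <= 0
    return np.vstack([lower, upper])

A = ratio_constraint_rows(0, 1, 2, 3, L=0.5, U=2.0, n_vars=4)
v = np.array([1.0, 0.2, 1.0, 0.0])   # ratio (1.0-0.2)/(1.0-0.0) = 0.8, inside [0.5, 2]
print(bool(np.all(A @ v <= 1e-12)))  # True: the statement is satisfied
```

`np.add.at` is used instead of fancy-indexed assignment so that repeated indices (e.g. a shared reference consequence) would accumulate correctly.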

Usually, two kinds of value differences appear in PRIME. First, there are ratios that define the values of consequences with respect to the other consequences on the same attribute. These are elicited in the score elicitation step presented in Section 3.4. Second, there are ratios that define the value of the best consequence on each attribute, i.e. the weight of that attribute, with respect to the best consequence of an arbitrary reference attribute. These are the output of the weight assessment step, which is executed with a method very similar to SWING. Weight assessment is discussed in Section 3.5.

Values of the different consequences, i.e. the scores, act as variables in the linear optimisation. Therefore, it is not necessary to determine the value function itself, but only its values at the consequences appearing in the model. This approach amounts to a point estimation of the value function, in contrast to an explicit definition of the entire value function. The obvious drawback is that one must make laborious ratio statements for each score appearing in the model; on the other hand, an explicit definition of the value function need not be made. This feature of PRIME is given special consideration in Section 4.

The objective function of a linear optimisation problem depends on the value to be calculated. To compute the value of any score appearing in the model, two optimisations must be made, minimisation for the lower bound and maximisation for the upper bound. Similar calculations yield upper and lower bounds for weights in the model. The elicitation of the dominance structures requires also two calculations for each pair of alternatives. The total number of required linear programs to solve the PRIME model entirely grows quickly as the number of attributes and alternatives increases.
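These optimisations can be sketched with an off-the-shelf LP solver. The toy model below is invented for illustration: one attribute, scores v = [v(x^0), v(x^A), v(x^*)] with v(x^0) = 0 and v(x^*) = 1 fixed, and a single direct-rating statement 0.4 ≤ (v(x^A) − v(x^0)) / (v(x^*) − v(x^0)) ≤ 0.7. scipy's `linprog` stands in for the Simplex solver mentioned above:

```python
import numpy as np
from scipy.optimize import linprog

# Inequality rows from the ratio statement, written as in (3-2).
A_ub = np.array([[0.6, -1.0, 0.4],    # lower bound: -v_A + v_0 + 0.4 (v* - v_0) <= 0
                 [-0.3, 1.0, -0.7]])  # upper bound:  v_A - v_0 - 0.7 (v* - v_0) <= 0
b_ub = np.zeros(2)
A_eq = np.array([[1.0, 0.0, 0.0],     # v(x^0) = 0
                 [0.0, 0.0, 1.0]])    # v(x^*) = 1
b_eq = np.array([0.0, 1.0])

def score_bound(sense):
    # sense = +1 minimises v(x^A) (lower bound), -1 maximises it (upper bound).
    res = linprog(np.array([0.0, sense, 0.0]), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 3)
    return sense * res.fun

lo, hi = score_bound(+1.0), score_bound(-1.0)
print(lo, hi)   # the interval (3-6) for v(x^A), here [0.4, 0.7]
```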


3.4 Score Elicitation and Holistic Comparisons

Analogously to [1], I find it convenient to regard holistic comparisons as score elicitation for goals, and therefore holistic comparisons are treated in this section as a special form of score elicitation. The goal of the score elicitation phase is to identify the least and most preferred achievement levels x_i^0 and x_i^*, and to rank the other consequences with respect to them, if necessary. The process starts as the DM either selects the achievement levels or ranks all the consequences ordinally, whichever is required. Ordinal rankings become linear constraints of the form

v(x^j) − v(x^k) ≤ 0    (3-3)

where x^k is more preferred than x^j. If some of the consequences are left unranked, they must be ranked inferior (or equal) to the best consequence, since the selection of the best consequence implies that none is better. Similarly, unranked scores must be superior (or equal) to the worst achievement level. However, the variables in many Simplex implementations can be bounded from zero to infinity, and thus the ordinal constraints with respect to the worst achievement level can be omitted. Further discussion on ordinal ranking is left for Section 4.

In the next phase, information on the relative magnitude of scores is elicited with ratio comparisons. The general form of such comparisons is (3-1). The usual selections for the ratios are either direct rating

L ≤ (v(x^j) − v(x^0)) / (v(x^*) − v(x^0)) ≤ U    (3-4)

or the comparison of successive value differences

L ≤ (v(x^{n+1}) − v(x^n)) / (v(x^n) − v(x^{n−1})) ≤ U    (3-5)

where n = 2, ..., N − 1 and N is the number of alternatives.

The bounds for each score are then obtained from separate linear optimisation problems. The objective function for the optimisation problem that calculates the bounds of a score or an aggregate of scores is the score or the aggregate itself. The lower bound is the result of a minimisation problem and the upper bound results from the corresponding maximisation problem. Therefore, the interval for a score is given by

I_s = [min v(x), max v(x)], v(x) ∈ F    (3-6)

where F is the set of feasible values determined by the model constraints.

One is often interested in values ranging from 0 to 1, i.e. the normalised values. Normalised values cannot be derived straightforwardly from the nonnormalised values, but must be solved separately instead. The objective function of a normalised score is the score itself divided by the weight of the corresponding attribute or goal. This leads to linear fractional programming problems, which can be converted easily to linear optimisation problems with a simple transformation (see e.g. [6]). The interval for a normalised score is given by

I_s^N = [min v(x)/v(x^*), max v(x)/v(x^*)], v(x), v(x^*) ∈ F    (3-7)
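The "simple transformation" referred to above is typically the Charnes–Cooper substitution: because the PRIME constraints are homogeneous (zero right-hand sides), dividing every variable by v(x^*) leaves the inequality rows unchanged and turns the fractional objective v(x)/v(x^*) into a linear one with the extra equality "scaled v(x^*) = 1". A sketch on a two-variable toy instance, invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Variables v = [v(x^A), v(x^*)] with homogeneous constraints
#   0.4 v(x^*) <= v(x^A) <= 0.7 v(x^*).
# After the substitution y = v / v(x^*), the same rows constrain y,
# and the denominator becomes the equality y[1] = 1.
A_ub = np.array([[-1.0, 0.4],   # 0.4 v* - v_A <= 0
                 [1.0, -0.7]])  # v_A - 0.7 v* <= 0
b_ub = np.zeros(2)
A_eq = np.array([[0.0, 1.0]])   # scaled denominator fixed to 1
b_eq = np.array([1.0])

def normalised_bound(sense):
    # sense = +1 for the lower bound of v(x^A)/v(x^*), -1 for the upper bound.
    res = linprog(np.array([sense, 0.0]), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
    return sense * res.fun

lo_n, hi_n = normalised_bound(+1.0), normalised_bound(-1.0)
print(lo_n, hi_n)  # the interval (3-7), here [0.4, 0.7]
```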

3.5 Weight Assessment

PRIME permits the use of either hierarchical top down weighting or single level bottom up weighting. Discussion on the superiority of either method is left for Section 4.5.

The goal of the weight assessment step is to define the relative magnitude of the values of the best consequences (i.e. the weight of each attribute, w_i = v(x_i^*) − v(x_i^0)). One way of accomplishing this objective is bottom up weight assessment, in which all the attributes in the model are compared to some arbitrarily selected reference attribute. The other way is to use hierarchical top down weighting. In either style of assessment a common weighting method is SWING, in which the most important attribute is selected as the reference attribute and given 100 points. The SWING method used in PRIME differs slightly from the usual one in that each attribute is given an interval of points, i.e. lower and upper bounds for the feasible values. If both bounds are set to the same value, the method reduces to ordinary SWING. The method leads to the following ratio comparisons, one for each attribute i:

LPts/100 ≤ w_i / w_ref ≤ UPts/100    (3-8)

⇔

LPts/100 ≤ (v(x_i^*) − v(x_i^0)) / (v(x_ref^*) − v(x_ref^0)) ≤ UPts/100    (3-9)

⇔

LPts/100 ≤ v(x_i^*) / v(x_ref^*) ≤ UPts/100    (3-10)


Top down weighting deploys the same ratio comparisons as above, but one assessment must be conducted for each goal in the model. The weight estimates are given for all the children of each goal, no matter whether they are attributes or goals. The weighting of goals differs from the weighting of ordinary attributes only in the fact that the consequences of goals are aggregates of other consequences, and their sufficient representation is much more challenging than the representation of ordinary consequences. Assuming that w refers to the weight of a child of the goal in question, the formulae (3-8), (3-9) and (3-10) yield

LPts/100 ≤ w / w_ref ≤ UPts/100    (3-11)

⇔

LPts/100 ≤ (v(x^*) − v(x^0)) / (v(x_ref^*) − v(x_ref^0)) ≤ UPts/100    (3-12)

⇔

LPts/100 ≤ v(x^*) / v(x_ref^*) ≤ UPts/100    (3-13)

Intervals for the weights in the model are the results of separate linear optimisation problems. Analogously to score elicitation, the intervals for the weights are given by

I_w = [min v(x^*), max v(x^*)], v(x^*) ∈ F    (3-14)

The normalised value of a weight is obtained by dividing the weight of the attribute (or the goal) by the weight of its parent goal. Hence, the normalised weights are given by

I_w^N = [min v(x^*)/v(x_parent^*), max v(x^*)/v(x_parent^*)], v(x^*), v(x_parent^*) ∈ F    (3-15)

Note that the weight of the main goal is exactly equal to 1, which implies that the normalised weights of attributes directly under it are equal to their nonnormalised weights.

3.6 Normalisation Constraints

By convention, the values of alternatives are restricted to the range from 0 to 1. PRIME itself does not demand normalisation to any specific range of values, although some normalisation is required. However, if the value of the worst achievement level is not equal to zero, the weighting constraints in the previous section must also be written with respect to the worst achievement level. In the standard case, in which all scores are assumed to be nonnegative, the normalisation constraints are

v(x_•^0) = 0
v(x_•^*) = 1    (3-16)

Since no negative scores exist, the first normalisation constraint implies that the value of every worst achievement level must be equal to zero.

3.7 Dominance Structures and Decision Rules

An alternative dominates another if its value is greater than the other alternative's under all circumstances. Dominance is easily deduced if the alternatives' value intervals do not overlap (i.e. absolute dominance prevails). However, if they do overlap, dominance must be determined via a specific linear optimisation problem. It is evident that if

max [v(x_•^j) − v(x_•^k)] < 0    (3-17)

then alternative k dominates alternative j. If the value is greater than zero, there is no dominance. The dominance structures in PRIME provide a way of evaluating the superiority of one alternative with respect to the others. This means of determining the relative order of alternatives with overlapping value intervals is one of the most remarkable advantages of PRIME.
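The check (3-17) is one LP maximisation per ordered pair of alternatives. The toy instance below, with two alternatives whose elicited constraints are assumed to imply that v(x_k) exceeds v(x_j) by at least 0.1, shows how dominance can hold even though the individual value intervals overlap; scipy's `linprog` is again used as the solver:

```python
import numpy as np
from scipy.optimize import linprog

# Variables v = [v(x_j), v(x_k)], both in [0, 1].  Suppose the elicited
# statements imply v(x_j) - v(x_k) <= -0.1.  Individually the value
# intervals are [0, 0.9] and [0.1, 1.0], which overlap.
A_ub = np.array([[1.0, -1.0]])
b_ub = np.array([-0.1])

# (3-17): maximise v(x_j) - v(x_k), i.e. minimise its negation.
res = linprog(np.array([-1.0, 1.0]), A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (0, 1)])
max_diff = -res.fun
print(max_diff < 0)   # True: k dominates j despite the overlapping intervals
```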

The result of the maximisation problem can be interpreted in another way, too. It represents the greatest possible loss of value (PLV) incurred if alternative k is chosen instead of j. If similar calculations are made with respect to all the other alternatives, we obtain the greatest possible loss of value incurred by selecting alternative k instead of any other alternative:

PLV(k) = max_{j≠k} { max [v(x_•^j) − v(x_•^k)] }    (3-18)

The possible loss of value is the criterion of the minimax regret decision rule. Minimax regret chooses the alternative with the smallest possible loss of value, hence the name:

Minimax = arg min_k {PLV(k)} = arg min_k max_{j≠k} { max [v(x_•^j) − v(x_•^k)] }    (3-19)

Decision rules in PRIME may use either dominance data, like minimax regret, or the alternatives' value interval data. While minimax regret is the only decision rule to use the dominance data approach, there are three other decision rules that utilise the alternatives' value intervals to deduce the best choice. First, maximax assumes that the true outcome will be near the upper bound of the value interval and therefore chooses the option with the greatest upper bound. On the other hand, maximin is not that optimistic and predicts that the outcome will remain in the proximity of the lower bound; hence it chooses the alternative with the greatest lower bound. Maximax and maximin are sometimes also referred to as the optimistic and pessimistic decision rules. Third, the criterion of the central values decision rule is the mean of the lower and upper bounds of the value interval. Two facts speak in favour of the central values rule in comparison with maximax and maximin. First, central values uses more information to make its decision, since it draws on both the upper and lower bounds. Second, many real world phenomena are likely to be Gaussian distributed, which suggests that an alternative is more likely to assume a value from the central part of its value interval than from the neighbourhood of the lower or upper bound.
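Once the value intervals have been computed, the three interval-based rules reduce to elementary comparisons. A sketch with invented intervals:

```python
# Hypothetical value intervals (lower, upper) of three alternatives.
intervals = {"A": (0.30, 0.90), "B": (0.45, 0.70), "C": (0.20, 0.95)}

maximax = max(intervals, key=lambda a: intervals[a][1])          # optimistic
maximin = max(intervals, key=lambda a: intervals[a][0])          # pessimistic
central = max(intervals, key=lambda a: sum(intervals[a]) / 2.0)  # central values

print(maximax, maximin, central)  # C B A: the three rules can all disagree
```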


4. Analysis of the PRIME method

4.1 Weaknesses of PRIME

4.1.1 Implicit value functions

One of the significant features of PRIME is that, instead of using explicit value functions to appraise the relative importance of different scores, the scores associated with each consequence are evaluated directly through a sequence of ratio estimates. Because PRIME deals with scores directly rather than with the consequences behind them, the notion of a value function is less important than in other decision analysis methods. Nevertheless, it is sometimes useful to model the relationship between consequences and their scores explicitly, especially when consequences can be represented as numbers. At present, PRIME does not provide such a means itself.

The absence of value functions is a significant drawback of PRIME. A value function provides an efficient way of mapping numeric consequences to scores. With a large number of alternatives, say over a hundred, estimation with ratios becomes overwhelmingly laborious compared with the use of value functions. Future computer tools may, however, provide support for explicit value functions.

On the other hand, if consequences cannot be represented as real or integer numbers, ordinary value functions are not applicable. In such cases PRIME's ratio comparisons provide a practical way to estimate the consequences' relative importance.

4.1.2 The Effort Required to Solve a Model

The number of required optimisations grows quickly as the size of the model increases. Let the number of attributes in the model be a, the number of alternatives b, and the number of goals (excluding the main goal) c. If both normalised and nonnormalised value intervals are calculated (so that each score requires four calculations, two per interval), the number of Simplex runs needed to solve the model entirely is:


Calculations            Required
Scores                  4⋅a⋅b
Goal Values             4⋅b⋅c
Alternative Values      2⋅b
Attribute Weights       4⋅a
Goal Weights            4⋅c
Dominance Structures    b²
TOTAL                   b² + 2⋅b + 4⋅(a + c)⋅(b + 1) = N

For example, for a model with 20 attributes, 10 alternatives and 6 additional goals, the number of required calculations N is 1264. Even if one calculation took a mere tenth of a second, solving the model entirely would still require more than two minutes. In addition, as the model grows, the time required to solve a single linear optimisation problem with Simplex also increases. Empirical studies in [5] show that in typical cases the time required to solve a model (with PRIME Decisions v.1.00) grows approximately between O(N^2.5) and O(N^3) when all ordinal and cardinal preferences are given. The implication of such a growth rate is that PRIME, at least with current computer support, cannot be deployed to solve very large and detailed models; a model with N = 100000 would require several years to solve even with the newest computers.
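The count in the table above can be checked directly. The following sketch reproduces both the term-by-term sum and the closed form, using the example figures from the text.

```python
# The Simplex-call count from the table above.
# a = attributes, b = alternatives, c = goals (excluding the main goal).

def simplex_calls(a, b, c):
    scores = 4 * a * b
    goal_values = 4 * b * c
    alternative_values = 2 * b
    attribute_weights = 4 * a
    goal_weights = 4 * c
    dominance = b * b
    return (scores + goal_values + alternative_values
            + attribute_weights + goal_weights + dominance)

def simplex_calls_closed(a, b, c):
    # Closed form from the table: b^2 + 2b + 4(a + c)(b + 1)
    return b * b + 2 * b + 4 * (a + c) * (b + 1)

N = simplex_calls(20, 10, 6)
print(N)                                      # 1264, as in the text
print(N == simplex_calls_closed(20, 10, 6))   # True
print(N * 0.1 / 60)                           # minutes at 0.1 s per call, roughly 2.1
```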

However, PRIME alleviates this problem somewhat when calculations are repeated iteratively for a refined model. This technique is called prioritisation. Assuming that refining the model will not remove an existing dominance relation, the values of dominated alternatives can be left uncalculated, since they remain inferior to the dominating alternative under all circumstances. In such a case, the parameter b in the calculations above must be replaced by the number of nondominated alternatives. This is a very efficient approach when the model consists of several tens or hundreds of alternatives and the results for individual alternatives are of minor interest (see [1]).
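The pruning step behind prioritisation can be sketched as below. Note that this sketch checks dominance with a simple sufficient condition on value intervals; PRIME's actual dominance structures come from pairwise linear programs, so this is an illustration of the idea, not the method itself.

```python
# Sketch of the prioritisation idea: drop alternatives that are already
# dominated, so that later refinement rounds recompute only the rest.
# Dominance is checked here with a sufficient interval condition
# (j dominates k if j's lower bound exceeds k's upper bound); PRIME's
# real check solves a linear program per pair of alternatives.

def nondominated(intervals):
    survivors = []
    for k, (lo_k, hi_k) in intervals.items():
        dominated = any(lo_j > hi_k
                        for j, (lo_j, hi_j) in intervals.items() if j != k)
        if not dominated:
            survivors.append(k)
    return survivors

intervals = {"A": (0.6, 0.9), "B": (0.1, 0.5), "C": (0.4, 0.7)}
print(nondominated(intervals))  # ['A', 'C']  (B is dominated by A: 0.6 > 0.5)
```

After pruning, the parameter b in the operation counts above is replaced by the number of survivors, which is where the savings come from.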

4.1.3 Absence of Sensitivity Analysis Features

The PRIME method has three main weaknesses, two of which have already been discussed: the absence of value functions, the computational effort needed to solve the model, and the absence of sensitivity analysis features.

Surprisingly, the present computational support does not provide any sensitivity analysis functionality with respect to any variable in the model. Sensitivity analysis is one of the most important means of analysing the results given by the method, and its absence leaves the results of PRIME unverified.

However, such features are not difficult to design, although additional calculations must be performed for each sensitivity analysis: one simply varies some parameters in the model and recalculates the values of the inspected variable. Another class of missing tools is inverse sensitivity analysis, in which the DM strives to determine which modifications to the model parameters would achieve some goal, e.g. the parameter changes required to establish absolute dominance between some alternatives.

4.2 Strength of PRIME: Uncertainty

If PRIME has weaknesses, it certainly also has a strength of its own. PRIME is at its best when it comes to uncertainty and imprecise preference information. The linear programming techniques used in PRIME provide a simple and theoretically sound way of taking imprecision in preference statements into account. In addition, dominance structures are a straightforward way of comparing alternatives with each other under uncertainty. These advantages may prove a sufficient reason for selecting PRIME for preference modelling.

Although PRIME is a unique method for dealing with imprecise preferences, it is only a preference modelling paradigm, and uncertainties in other parts of the decision making process are not addressed. In particular, the construction of the value tree and the identification of alternatives are such fields of importance.

Value trees in PRIME are additive and precise. From the method's point of view, there is no recognised uncertainty in any part of the tree. In reality, however, the actual value tree may depend on many uncertainties. The possible absence of relevant attributes is not taken into account in any way, even though the effects of uncertainty in the attribute selection may be orders of magnitude greater than the effect of imprecision in the preference statements. The effect of omitting an important attribute is, however, hard to appraise: one could estimate the likelihood of the risk itself, but the exact effect of the attribute in the model would be nearly impossible to predict.

Another notable field is the identification of alternatives. The appearance of a new superior alternative will obviously change the preference order, whereas the possible existence of inferior alternatives is irrelevant. Naturally, the risk of omitting a superior alternative from the model should be explored. By calculating an absent alternative's value and appraising the risk that such an alternative exists, one could compute the expected possible loss of value (EPLV) resulting from ignoring a superior alternative. Still, this field should be examined with greater care and provides fertile soil for further research.
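The EPLV quantity suggested above can be sketched as a simple expected-value calculation. This is one possible reading of the idea, with illustrative numbers; the text leaves the exact formulation open.

```python
# A sketch of the expected possible loss of value (EPLV) idea: weight the
# loss caused by an omitted superior alternative by the estimated
# probability that such an alternative exists. All numbers are illustrative.

def eplv(p_superior_exists, value_if_present, value_of_chosen):
    """Expected loss from ignoring a possibly existing superior alternative."""
    return p_superior_exists * max(0.0, value_if_present - value_of_chosen)

# e.g. a 20 % chance that an alternative worth 0.9 was missed,
# while the chosen alternative is worth 0.7:
print(round(eplv(0.2, 0.9, 0.7), 6))  # 0.04
```

As the text notes, the hard part is not this arithmetic but estimating the probability and, above all, the value of an alternative that is not in the model.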

4.3 Issues on Ordinal and Cardinal Ranking

By definition, a cardinal ranking includes ordinal preference information, and one could therefore think that ordinal assessments could be omitted once a cardinal ranking is made. In PRIME, however, the situation is not that simple. Ratio comparisons do not necessarily include an ordinal ranking; they are merely cardinal judgements. There are at least two kinds of cardinal judgements in PRIME: those that define the distances between scores, and those that determine the position of the scores between the worst and best achievement levels. The latter contain ordinal ranking information; the former do not.

Depending on the selected cardinal ranking style, if such a ranking is made at all, some scores can be left ordinally unranked. However, as pointed out in Section 3.5, all scores must still be ranked with respect to the worst and best consequences. Yet, depending on the type of cardinal judgements, an exact order of preference is not necessarily required. For instance, only the best and worst consequences need to be identified if scores are rated directly. Direct rating explicitly defines the relative position of each score with respect to the best consequence, i.e. the normalised score of each consequence. With direct rating, the inclusion of unnecessary ordinal constraints, e.g. the definition of a full preference order, might lead to undesirable changes in the model. In contrast, the comparison of successive value differences requires the exact order of preference to be defined (as its name implies: the order must be known). This method does not define the relative positions of the scores but rather the distances between successive scores in the preference order.
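The difference between the two judgement styles can be illustrated with a three-score example, assuming normalised scores with best = 1 and worst = 0. Direct rating would bound the middle score's position directly; with successive value differences, bounds on the two gaps only determine an interval for the middle score once the preference order is known. The function name and numbers are ours, not PRIME's.

```python
# Illustrative three-score example: how bounds on the successive value
# differences (v_best - v_mid) and (v_mid - v_worst) imply an interval
# for the middle score, given best = 1 and worst = 0.

def mid_interval_from_gaps(gap_best_mid, gap_mid_worst):
    """Interval for the middle score implied by bounds on the two gaps."""
    lo1, hi1 = gap_best_mid    # lo1 <= 1 - v_mid <= hi1
    lo2, hi2 = gap_mid_worst   # lo2 <= v_mid - 0 <= hi2
    lo = max(1 - hi1, lo2)
    hi = min(1 - lo1, hi2)
    return (lo, hi)

print(mid_interval_from_gaps((0.3, 0.6), (0.4, 0.6)))  # (0.4, 0.6)
```

Direct rating would state the interval (0.4, 0.6) for the middle score outright; successive differences arrive at it indirectly, which is why they presuppose the preference order.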


SWING weighting uses techniques similar to direct rating. In fact, it is equivalent to direct rating of the best consequences with respect to the selected reference attribute. In contrast to direct rating, which uses a [0, 1] scale, the points in SWING range from 0 to 100 by convention.

4.4 Value Tree Weighting Biases

Like all value tree based methods, PRIME is subject to various weighting biases. Weighting biases often lead to different results with different methods. If one of the following phenomena is encountered, a weighting bias is present:

(a) Different weighting methods yield different weights
(b) Hierarchical (top down) weighting leads to steeper weights
(c) Division of attributes changes weights (splitting bias)
(d) Range effect

The range effect refers to a phenomenon in which an attribute gains more or less weight than the range between its best and worst achievement levels would imply. The reasons for the range effect often lie in the DM's unconscious assumptions regarding the range of the attribute. The range effect can be avoided by emphasising the importance of the worst and best achievement levels to the DM. Computer applications such as PRIME Decisions always present the worst and best consequences in the immediate proximity of the weight estimates, so that the DM can more easily appraise the relative importance of each attribute.

Division of an attribute is likely to increase the attribute's total weight; in other words, the divided attribute gains excess weight. This phenomenon is called splitting bias. It often results from ignorance of possible connections between different attributes and consequences. Splitting is especially likely to result in overweighting, since an attribute can seldom be split completely (at least in a psychological sense). Most weighting methods do not alleviate this problem at all, because their weighting process does not take into account the possibility of related attributes in the model, i.e. they assume the value tree to be additive. Many biases in value trees follow from conditions that violate the additivity of the value tree.

Weight differences between bottom up and top down weighting are often related to psychological issues, and they frequently originate from an inadequate representation of the value tree and of the corresponding changes in consequences. More information on bias problems can be found in [4].

4.5 Whether to Use Top Down or Bottom Up Weighting

In PRIME, one can use either the top down or the bottom up weighting methods presented in Section 3.5. In my experience, hierarchical top down weighting is more convenient, since it allows one to utilise the classification provided by the value tree structure. By using bottom up weighting instead, one seems to lose the "divide and conquer" power of the value tree itself. Sometimes, however, the value tree provides no additional support for the DM, and an obscure breakdown structure prevents him or her from understanding the changes in consequences clearly. In such cases the use of bottom up weight assessment may be recommended. Nevertheless, problems with hierarchical weighting often originate from an inadequate representation of the value tree and an incomplete understanding of the problem field. Many analyses of this issue are available, see e.g. [4].

There has been much debate on the superiority of either method, and many field experiments have been conducted to reveal possible biases originating from them, but the problem of selecting the right weighting style remains unsolved. Still, if bottom up weight assessment proves efficient, it raises the question of the meaningfulness of the value tree itself. The very power of value trees lies in the possibility of classifying attributes and thereby facilitating the weighting process. If bottom up weight assessment seems to give weights that reflect reality better than those derived from hierarchical weighting, there is no real justification for using value trees at all. Of course, one could argue that value trees are efficient in the attribute elicitation process, but this does not justify their use in the preference model itself.

4.6 Computational Aspect: Handling Single Achievement Levels

One could expect problems to appear if all the available alternatives had the same consequence with respect to some attribute. Because that single achievement level would be both the best and the worst achievement level on that attribute, it would follow from the normalisation constraints (3-16) that its value is equal to zero (being the worst) and, in general, greater than zero (being the best). In such a case the model would become infeasible.

The solution to the problem follows from the definition of the weight of an attribute, i.e. wi = v(xi*) − v(xi0). Assuming that x* = x0, a trivial calculation shows that the weight of the attribute must then be zero. As a result, if there exists only one consequence, or several consequences of equal preference, the weight of the corresponding attribute must be set to zero; otherwise the model inevitably becomes infeasible. In addition, it follows that all PRIME models with a single alternative are infeasible by nature (although trivial).

However, more severe problems arise with normalised scores. By (3-7) the value of the best consequence appears in the denominator of the objective function of a linear program. If that value is zero, the value of such functions is undefined, as are the intervals of the normalised scores, and the corresponding linear programs are infeasible. If this special case is not handled separately, computational PRIME applications cannot compute normalised scores but terminate in an error. Thus, we need to define that, if the weight of an attribute equals zero, all normalised scores under it are equal to 0. Computationally, this case can be handled by interpreting infeasible results of the normalised linear programs as the interval [0, 0]. Of course, before computing the scores of each attribute one could check whether there is only one achievement level and then discard the attribute from computation. However, performing such checks for every attribute is computationally inefficient, for it is not necessarily quick to determine whether an attribute has only a single achievement level.

Similar problems are encountered with normalised weights if the weight of the parent goal in (3-15) is equal to 0. This is possible when the nonnormalised weights of all children of the goal are zero. As above, infeasible results in this case can be interpreted as the interval [0, 0].
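The special-case handling described above can be sketched as a small guard around the solver. The LP itself is abstracted away as a callable that may signal infeasibility; the class and function names are ours, not PRIME Decisions'.

```python
# Sketch of the handling described above: when an attribute's weight is
# zero (a single achievement level), or a normalised linear program turns
# out infeasible, the normalised-score interval is defined to be [0, 0]
# rather than left as an error. The LP solver is abstracted as a callable.

class Infeasible(Exception):
    """Raised by the solver when the linear program has no feasible point."""

def normalised_score_interval(solve_lp, attribute_weight):
    if attribute_weight == 0:
        return (0.0, 0.0)        # defined, not computed
    try:
        return solve_lp()
    except Infeasible:
        return (0.0, 0.0)        # interpret infeasibility as [0, 0]

def dummy_lp():
    raise Infeasible

print(normalised_score_interval(dummy_lp, 0))    # (0.0, 0.0)
print(normalised_score_interval(dummy_lp, 0.3))  # (0.0, 0.0) via the except branch
```

Catching the infeasible result is cheaper than scanning every attribute in advance for a single achievement level, which matches the argument made in the text.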


5. Case Study: GM Crops

5.1 Introduction to the Study and the GM Crops

5.1.1 The Purpose of This Study

This case study deals with the evaluation of the risks and benefits associated with different types of utilisation of genetically modified crops (GM crops). The relative importance of the drawbacks and advantages of each such approach is appraised by the DM and compared with the benefits of conventional forms of cultivation. To this end, the PRIME method is deployed to elicit the DM's preferences. The model construction follows the steps introduced in Section 3.2. The associated computer tool is PRIME Decisions v. 1.00, see [5].

The primary purpose of this study is to illustrate the application of the PRIME method and to demonstrate its capabilities in handling uncertainty. The given judgements do not reflect the preferences of any real party, and the results of the preference model are therefore purely illustrative.

5.1.2 What is Genetic Modification?

Genes are DNA sequences that control and regulate the production of proteins in a cell. By means of genetic modification, genes can be transferred from a cell of any living being to a cell of another being, where they produce the same proteins that they did in the original cell. Proteins can provide resistance against pests, make the organism grow larger, or, in general, produce other desired effects.

Although one could claim that conventional breeding has used similar techniques for millennia through selection, in contrast to conventional breeding the genes need not come from the recipient's own species. Instead, genes can be transferred from animals to plants and vice versa. Ethical questions emerge when it comes to human genes and their utilisation in genetically modified plants.

It is no wonder that people take a sceptical attitude towards genetically modified organisms. Genetic manipulation deals with the basic workings of nature itself, and people fear that the interference of a careless human hand will destroy the delicate balance of God's creation. The biological mechanisms of genes still remain unknown to many people, and uncertainty alone is enough to inspire fears, anxiety, and scepticism.

5.1.3 The GM Crops

The purpose of developing GM crops is to provide a variety superior to an existing plant in properties such as herbicide tolerance, crop yield, and pest resistance. So far genes have been transferred to all the most important plant species, such as potato, rice, wheat, and rye. The number of experiments is increasing quickly; by the end of 1997 over 25 000 field experiments had been conducted. The subjects of greatest interest were corn, turnip rape, potato, tomato, and soya bean. In 1997, 48 different GM crops were in commercial production, and their total cultivation area exceeded 12.8 million hectares. For more information, see [3].

5.2 Construction of the Model

5.2.1 Selection of the Object and Extent of the Study

To simplify the rather large and obscure field, I found it necessary to consider the effects of cultivating only one GM crop. Since very limited data were available on specific GM crops, the subject was chosen to represent a generic GM crop. This study presents only the first iteration of the decision analytic process.

5.2.2 Identification of Attributes and Goals

The construction of the model began by identifying the possible risks and benefits resulting from the use of GM crops. Utilising the results in [3], the effects can be classified into three separate groups:

(a) Effects on public health
(b) Environmental effects
(c) Effects on productivity and economy

The groups can be divided further into separate attributes. The complete value tree is represented in Figure 1.


Figure 1. Value Tree

Effects on public health include possible allergenic risk, increase of bacterial antibiotic resistance, and production of unpredictable compounds. Environmental effects can be divided into the possibility of becoming a weed, gene-flow to wild-life plants through fertilisation, effects on the soil ecosystem, development of new viruses, and development of resistant insect species. Effects on productivity and economy consist mainly of one attribute: the crop yield.

It is obvious that without closer examination and analysis some relevant attributes may be missed. However, this is only the first iteration of the decision making process, and the value tree is likely to change during the next rounds. Nonetheless, the effects of missing attributes may prove enormous if something very important has been omitted.

5.2.3 Identification of Alternatives

The trickiest part of this study was to determine the available alternatives and the distinctions between their consequences. Following the steps of [2], the alternatives were identified to be:

(a) Organic agriculture: all farming and food production is conducted under present day standards
(b) Integrated Pest Management: all farming and food production is conducted using systems designed to limit but not exclude chemical inputs, with greater emphasis on biological control systems than conventional methods
(c) Conventional agriculture: all farming and food production is conducted under present intensive systems
(d) GM plants with segregation and present systems of labelling: labelling based on the presence of foreign DNA or protein in the final product
(e) GM plants with post-release monitoring: monitoring for effects (mainly environmental) conducted on an ongoing basis after commercialisation
(f) GM plants with voluntary controls on areas of cultivation: areas of growing of GM oilseed rape restricted on a voluntary basis to avoid unwanted effects such as gene-flow and fertilisation of non-GM crops

Figure 2. Weight assessment

Sometimes it was hard to distinguish the consequences of the different alternatives, and I had to make several assumptions. Some attributes referred to the magnitudes of assumed risks, which were hard to determine without expertise in the appropriate field. In the end, the consequences were ranked with the following verbal scoring, which referred to the risk associated with each consequence: No risk, Negligible, Minimal, Unlikely, Average, Probable, and Highly Probable. Most consequences were deduced from the textual descriptions of the different alternatives, and they are therefore only subjective guesses of the possible effects of each alternative. The consequences of each alternative are shown in Table 1.

The list of alternatives above is not complete in any way. Mayer and Stirling [2] elicited over a hundred different alternatives in addition to these six. However, many of the additional alternatives were variations of these, and such precision is not required in this context.

Table 1. Consequences

Attribute              | Org.agric.  | Int. pest.m | Conv.agr   | GM/Segr    | GM/Monit   | GM/vol.ctrl
Allergenic Risk        | No risk     | Minimal     | Minimal    | Average    | Unlikely   | Unlikely
Antibiotic Resistance  | No risk     | No risk     | No risk    | Probable   | Probable   | Probable
Unpredictable effects  | No risk     | No risk     | No risk    | Unlikely   | Unlikely   | Unlikely
Becomes Weed           | Negligible  | Negligible  | Negligible | Minimal    | Unlikely   | Unlikely
Gene-flow to wild life | No risk     | No risk     | No risk    | Minimal    | Unlikely   | Unlikely
New viruses            | No risk     | No risk     | No risk    | Negligible | Negligible | Negligible
Soil Ecosystem         | No hazard   | Small       | Average    | Small      | Small      | Small
Amount of Herbicide    | Minimal     | Average     | A lot      | Average    | Average    | Average
Crop Yield             | Less t. avg | Average     | Average+   | High       | High       | High

5.3 Preference Elicitation: Applying PRIME

The preference elicitation phase was conducted using PRIME. Owing to the DM's unfamiliarity with the field, however, the weight intervals were remarkably large. Here I found it more convenient to use bottom up weighting, due to the small number of attributes in the model and the lack of additional information.

The ordinal score elicitation phase was performed very quickly with the aid of the elicitation tour of PRIME Decisions. In contrast to standard preference elicitation methods, verbal consequences were handled painlessly with PRIME. While the ordinal ranking was self-explanatory, the cardinal score ranking was much more complicated because of the vagueness of the verbal consequences. In the end, the cardinal ranking was omitted, since there was not enough information to justify further elaboration of the model. The weightings given in the weight assessment phase are presented in Figure 2.

Figure 3. Value intervals for cultivation type

5.4 Analysis of Results

In spite of the vagueness of the given alternatives and preferences, the PRIME analysis revealed distinctions between the alternatives. The value intervals of the alternatives are shown in Figure 3. The intervals remained large because the cardinal ranking was omitted and the weightings were imprecise. Despite this, the value of organic agriculture had a relatively narrow interval.

Figure 4 shows that no dominance is present, which could be expected in the light of the very limited and imprecise preference information. The resulting weights are available in both Figure 1 and Figure 5; they yield no big surprises.


As usual, the decision rules window (Figure 6) contains valuable information. As discussed earlier, the recommendations of the maximax and maximin decision rules can be ignored, although they give a good basis for comparison. Instead, the results of the superior decision rules, central values and minimax regret, should be examined with care. As shown in Figure 6, the possible losses of value are almost identical for the last three alternatives, differing only in the fourth decimal or beyond. Examining the value intervals and PLVs makes it quickly evident that even slight changes in the model may change the alternatives' preference order: the central values of the last three alternatives differ by no more than 0.008. On the other hand, the possible loss of value is large for every alternative, 0.490 or more. This implies that despite the recommendations of the decision rules there are still no safe choices at all; the model would have to be refined further.

Figure 4. Dominance Matrix


Figure 5. Weights for cultivation type

As mentioned earlier, the results derive mainly from the given weightings, and even tiny changes in them could change the preference order of the alternatives. Under these conditions, it would be quite interesting to perform sensitivity analyses with regard to the weightings in the model, but unfortunately the current PRIME applications do not support such features. On the other hand, the given weightings are so imprecise that the utility of sensitivity analyses becomes questionable. Sensitivity analysis with respect to only one of the given bounds amounts to adjusting the uncertainty in the model, which can be undesirable, while varying both bounds equally may not yield sensible results.

PRIME ended up recommending the three GM options. This result derives from the heavy weighting of the crop yield; by lessening its importance the results could look very different. The results of this study should in any case be treated with great caution, for they represent only my own preferences. The purpose of the study was rather to give a good understanding of the application of the PRIME method and a quick look at the PRIME Decisions software.


6. Conclusions

PRIME provides a fascinating preference modelling paradigm when it comes to nonnumeric consequences or a limited amount of imprecise preference data. However, the current PRIME has three severe weaknesses in comparison with standard methods. First, PRIME lacks value functions for mapping numeric consequences to scores. Second, even with the aid of the fastest computers, large PRIME models become intolerably laborious to calculate. Third, there are no sensitivity analysis features present.

The first and third weaknesses are solved rather easily by developing the computer applications further. From my point of view, however, the second weakness remains an essential drawback of the linear optimisation approach itself. Of course, this disadvantage can be largely avoided if part of the model, e.g. the dominance structures or the scores, is left uncalculated.

On the other hand, the strengths of PRIME emerge whenever uncertainty is encountered. First, the imprecise ratio statements give the DM the possibility of defining only bounds for the value to be determined. Second, PRIME permits models to be solved with otherwise inadequate or missing preference data. The DM is not forced to give unjustified preference judgements, and the results of PRIME reflect the real preferences more closely than the usual approaches. In addition, the dominance structures provide an efficient way of comparing the alternatives with each other.

Figure 6. Recommendations of the decision rules

References

[1] A. A. Salo, R. P. Hämäläinen: PRIME – Preference Ratios in Multiattribute Evaluation, Helsinki University of Technology, Systems Analysis Laboratory, 1999
[2] S. Mayer, A. Stirling: Confronting Risk: A Pilot Multi-Criteria Mapping of GM Crops in Agricultural Systems in the UK, Final Report, 1999
[3] A. Salo, V. Kauppinen, M. Rask: Teknologian arviointeja 3, Eduskunnan kanslian julkaisu 4/1998
[4] M. Pöyhönen: On Attribute Weighting in Value Trees, Helsinki University of Technology, Systems Analysis Laboratory, 1998
[5] T. Gustafsson: Using PRIME Decisions and Evaluating Its Strengths and Weaknesses Compared to Other Decision Analysis Applications, 1999
[6] M. S. Bazaraa, H. D. Sherali, C. M. Shetty: Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, Inc., 1993
[7] W. Rudin: Principles of Mathematical Analysis, Third Edition, McGraw-Hill, Inc., 1976
[8] S. French: Decision Theory: An Introduction to the Mathematics of Rationality, Ellis Horwood Limited, 1986
