Project Buffer Sizing through Bayesian Approach

Franco Caron & Mauro Mancini* Politecnico di Milano, Department of Management Economics and Industrial Engineering, Piazza Leonardo da Vinci 32 - 20133, Milan, Italy

Fabrizio Ruggeri CNR IMATI, Via Bassini 15 - 20133, Milan, Italy

Abstract—The Critical Chain Project Management (CCPM) approach determines project duration by defining a critical chain (the longest chain of activities) and using aggressive estimates of activity durations, moving the time reserves from the individual tasks to time buffers located at the end of each activity chain. Several approaches in the literature attempt to size these buffers. The aim of the present study is to develop a buffer sizing method based on a Bayesian approach that allows the integration of data records and experts' judgement. The proposed method is compared with the alternatives described in the literature using a set of Key Performance Indicators.

Keywords: Critical Chain, Buffer sizing, Bayesian approach, Project Management

Corresponding Author: Mauro Mancini, p.zza Leonardo da Vinci 32, 20133 – Milan (Italy) Phone: +39.02.2399.4057; Fax: +39.02.2399.4067; E-mail: [email protected].

1. Introduction

Estimating project duration is an extremely complex activity, as it depends on a large number of factors which cannot always be effectively controlled, since they are tied to human behaviour. In his CCPM approach, Goldratt addresses this problem and seeks to minimise the negative effects that human behaviour can have on project planning and execution. Details of the technical characteristics of CCPM can be found in the reference bibliography [8] [15] [4]. Here, we would underline that the methodology has been widely used in industrial contexts [9]. However, not all applications have been successful and the reasons for the failures need to be investigated [1] [5].

Goldratt assumes that executors normally over-estimate the duration of activities by including a certain time reserve to avoid delays, but that during execution all the available time is always used [10]. CCPM therefore proposes an aggressive approach to the estimation of activity durations, approximating the median of the distribution of each activity duration, together with protection against possible delays in project completion through time buffers allocated to each project chain. For a given project, we can define two types of buffer:
• Project buffer: assigned to the critical chain;
• Feeding buffer: assigned to the non-critical chains where these meet the critical chain.

Buffer management is the means by which CCPM controls project development, and the various types of buffer play a key role in the CCPM methodology. The project buffer should reflect both the protection level required against delay and the foreseen variability of the total duration of the critical chain. The feeding buffers account for the protection level and the variability of the other chains. In order to reduce the overall project duration, the estimate of the size of the buffers should exploit all the information available during the planning phase [7].

To provide an overall view of the ways of sizing buffers, this section summarises the most significant approaches in the literature, so as to allow a comparison with the Bayesian method proposed in the second section. In the third section the method is applied to a case study, commenting on the results and deriving some conclusions.

1.1 The 50% method

Goldratt suggests sizing the project and feeding buffers as half the total duration of the activity chain they protect. The advantage of this approach is its intrinsic simplicity, fully in line with the CCPM philosophy; however, it tends to overestimate the project duration, resulting in a less competitive bid. This method appears well suited to completely new projects, whereas when data records or experts' judgements deriving from past experience are available, other methods can be considered.

1.2 The root square error method

This method seeks to define the buffer in terms of the risk associated with the chain [6]. Under the realistic hypothesis that activity durations follow a lognormal distribution, two estimates are required for the duration of each activity: the first (S) is the safe option and includes a margin to compensate for delays; the second (A) does not. The author suggests taking the 90th percentile and the median of the distribution as the two estimates. The difference between the two values (D = S − A) is proportional to the variability of the activity duration. A series of further suggestions refine the use of this methodology. The weakness of the approach lies in its basic hypothesis, i.e. that the difference between the safe and aggressive estimates equals twice the standard deviation, since it has been shown that the multiplier actually ranges from 0.05 to 1.5 standard deviations.
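As an illustrative sketch, assuming the common root-sum-of-squares formulation with D_i = S_i − A_i (consistent with the D ≈ 2σ hypothesis above); the estimates are hypothetical:

```python
import math

def rse_buffer(safe, aggressive):
    """Root square error buffer: B = sqrt(sum of D_i^2), where
    D_i = S_i - A_i is taken as roughly twice each activity's sigma."""
    return math.sqrt(sum((s - a) ** 2 for s, a in zip(safe, aggressive)))

# Hypothetical safe (90th percentile) and aggressive (median) estimates, in days.
print(round(rse_buffer([10, 14, 8], [6, 9, 5]), 1))  # -> 7.1
```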

1.3 The simulation method

This method is based on the Monte Carlo simulation technique and requires knowledge of the duration distribution, along with its median and variance, for each project activity [13]. Project execution is simulated, obtaining the distribution of the project duration while taking account of the possible impact of sub-critical chains. The size of the project buffer depends on the required probability of completing the project by the agreed date. It is determined as

$$B = q_\alpha - q_{50\%}$$

where $q_\alpha$ is the α percentile of the total duration and $q_{50\%}$ is the median of the distribution, following Goldratt. For the feeding buffers, the method proposes a size equal to the average delay. If a feeding buffer turns out to be insufficient, so rendering its chain critical, the effects will be absorbed by the project buffer [14]. From a purely statistical point of view, this is the best approach, because it accounts both for the critical chain and for the possibility that sub-critical chains become critical. Its limitation resides in its difficult implementation, particularly in complex projects, because a large amount of information and specific skills are required.
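A minimal Monte Carlo sketch of this approach, with hypothetical lognormal activity parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical lognormal (mu, sigma) parameters for the chain's activities.
activities = [(2.0, 0.4), (1.5, 0.3), (2.3, 0.5)]
n = 65_000  # simulation iterations

total = sum(rng.lognormal(mu, s, n) for mu, s in activities)
alpha = 0.95  # required completion probability
B = np.quantile(total, alpha) - np.quantile(total, 0.5)  # B = q_alpha - q_50%
print(f"project buffer B ≈ {B:.1f}")
```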

1.4 The classes of uncertainty method

This method is based on the idea that the duration of some activities can be accurately predicted, while for others such accuracy is harder to reach. Activities are therefore classified according to the level of uncertainty of their duration [14]. Uncertainty is measured by the relative dispersion (RD), obtained from the average duration μ_i and the standard deviation σ_i of each activity as:

$$RD_i = \frac{\sigma_i}{\mu_i}$$

The method proposes a sub-division into four classes of uncertainty (very low, low, high, very high), one of which is associated with each activity. For each class, three percentage values represent the desired safety level (low, medium, high); multiplied by the duration of the activity, they return its margin of protection (MP). The sum of the margins of protection of the activities in a chain gives the size of the buffer protecting that chain (see the sketch below). This method, too, requires a large amount of data.
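A sketch of the mechanics follows; the protection percentages per class and safety level are illustrative placeholders, not the values tabulated in [14]:

```python
SAFETY = {"low": 0, "medium": 1, "high": 2}
MARGIN = {  # illustrative protection percentages per uncertainty class
    "very_low":  (0.05, 0.10, 0.15),
    "low":       (0.10, 0.20, 0.30),
    "high":      (0.20, 0.35, 0.50),
    "very_high": (0.30, 0.50, 0.70),
}

def cu_buffer(activities, safety="medium"):
    """activities: list of (duration, uncertainty_class) tuples.
    Buffer = sum of margins of protection MP = duration * percentage."""
    col = SAFETY[safety]
    return sum(d * MARGIN[c][col] for d, c in activities)

print(cu_buffer([(10, "low"), (20, "high"), (5, "very_high")]))  # -> 11.5
```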

1.5 The forecast error method

This model explicitly considers data records on the estimated and actual durations of project activities and sizes the buffer on the basis of the contractor's forecasting performance [2]. For each activity, the mean absolute percentage error (MAPE) is calculated from k historical observations of the percentage error Pe_j:

$$MAPE_i = \frac{1}{k} \sum_{j=1}^{k} \left| Pe_j \right|$$

The size of the project and feeding buffers (B) is calculated by summing the MAPE values of the n activities in the considered chain and multiplying by the duration of the chain (D):

$$B = \sum_{i=1}^{n} MAPE_i \cdot D$$

Although the method is simple, it requires, like the three previous ones, a large amount of data.
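A minimal sketch of the computation (the error histories are hypothetical):

```python
def fe_buffer(history, chain_duration):
    """Forecast-error buffer: sum of per-activity MAPEs times the chain
    duration D. history maps each of the n chain activities to its k
    observed percentage errors (as fractions)."""
    mapes = [sum(abs(e) for e in errs) / len(errs) for errs in history]
    return sum(mapes) * chain_duration

# Hypothetical percentage errors for three activities.
print(fe_buffer([[0.10, 0.20], [0.05, 0.15], [0.30]], chain_duration=100))  # -> 55.0
```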

2. The Bayesian model

Having looked at the main methods of sizing time buffers, we now propose an approach which uses other sources of information in addition to the contractor's data records. The Bayesian approach is the ideal tool for situations where limited data records coexist with in-company experience. Such situations are typical of projects in many industrial contexts, such as the Engineering and Contracting sector.

2.1 Bayesian statistics

The strength of the Bayesian approach stems from the possibility of exploiting both the available data records and experts' judgement, modelled, respectively, through a parametric distribution $f(x \mid \theta)$ on the observable quantity $x$ and a prior distribution $\Pi(\theta)$ on the parameter $\theta$ representing the expert's opinion.

Using Bayes' theorem, we can update the prior distribution of $\theta$ in the light of the observations $X = (x_1, \dots, x_k)$ of the variable $x$ and obtain the posterior distribution $\Pi(\theta \mid X)$, with density

$$\Pi(\theta \mid X) = \frac{f(X \mid \theta)\,\Pi(\theta)}{\int f(X \mid \theta)\,\Pi(\theta)\, d\theta} \qquad (1)$$

The posterior distribution is typically used to make inferences on quantities of interest, e.g. the parameter $\theta$ itself, whose Bayesian estimator is given by the mean of the posterior distribution, as the minimizer of the posterior expected loss under a squared loss function. Here we will focus mostly on another use: the prediction of future observations. In particular, after observing $X = (x_1, \dots, x_k)$, it is possible to compute the posterior predictive distribution of the future observation $x_{k+1}$:

$$f(x_{k+1} \mid X) = \int f(x_{k+1} \mid \theta)\,\Pi(\theta \mid X)\, d\theta \qquad (2)$$

Before presenting a concrete application of the methodology, it is important to highlight two significant limitations:
1. The result is a function of the chosen prior distribution. This is one of the most controversial aspects of the Bayesian approach; it can be successfully addressed by a proper elicitation process, leading to a prior which adequately represents the expert's opinion, and by a sensitivity analysis [11].
2. In general, posterior and predictive distributions are not available in closed form, a fact which in the past limited the diffusion of Bayesian methods in applications. Since the early 1990s, efficient simulation algorithms, known as Markov chain Monte Carlo (MCMC) methods, have been developed [12]. These methods yield samples from the posterior distribution which can be used to estimate parameters and the distribution of future observations (see the sketch after this list).
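As a minimal illustration of the MCMC idea, here is a random-walk Metropolis sampler for a normal mean; all numbers are hypothetical, and in the conjugate case of Section 2.3 no simulation is actually needed:

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.array([11.0, 13.0, 15.0])  # hypothetical observed durations

def log_post(theta, mu0=12.0, s0=2.0, s=2.5):
    """Unnormalised log-posterior: N(mu0, s0^2) prior on the mean theta,
    N(theta, s^2) likelihood; all parameters are illustrative."""
    return (-(theta - mu0) ** 2 / (2 * s0 ** 2)
            - np.sum((data - theta) ** 2) / (2 * s ** 2))

theta, lp, chain = 12.0, log_post(12.0), []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 1.0)       # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta)

print(np.mean(chain[5_000:]))  # posterior-mean estimate after burn-in
```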

2.2 Approach to buffer sizing

The model aims to size a time buffer in the context of CCPM using Bayesian statistics. It is applied to a project which, for the sake of simplicity, can be considered as a succession of activities X, Y, Z and a buffer B, as illustrated in Fig. 1:

X → Y → Z → B

Fig. 1: Sequence of project activities

The estimate of the duration of each individual activity strictly follows Goldratt and is taken as the median of the posterior predictive distribution. Therefore, for the activity X (and, subsequently, for the other activities) we obtain the median $\tilde{x}$ as the solution of

$$\int_0^{\tilde{x}} f(x_{k+1} \mid X)\, dx_{k+1} = 0.5 \qquad (3)$$

For ease of notation, we write the posterior predictive density for X as $f_X(x)$ (and similarly for the other activities).

For the overall behaviour of the three activities, we consider their sum W = X + Y + Z, whose density function can be obtained via a convolution integral, because of the stochastic independence of X, Y and Z. Setting X + Y = T and T + Z = W, the total duration of the chain has density

$$f_W(w) = \int_0^w \int_0^t f_Z(w-t)\, f_Y(t-x)\, f_X(x)\, dx\, dt \qquad (4)$$

The size of the time buffer considered is calculated as the difference between a percentile of the distribution of the total duration and the sum of the medians of the individual activities.

We consider $\tilde{w}$, the 95th percentile of the distribution of the total duration of the activities, as the solution of

$$\int_0^{\tilde{w}} f_W(w)\, dw = 0.95 \qquad (5)$$

Therefore, the size of the buffer (B) can be derived as:

$$B = \tilde{w} - \tilde{x} - \tilde{y} - \tilde{z}$$
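When the predictive densities are not Gaussian and the double integral in (4) has no closed form, the distribution of W and the buffer B can be approximated by sampling. A sketch, using hypothetical lognormal stand-ins for $f_X$, $f_Y$ and $f_Z$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical stand-ins for the posterior predictive densities.
x = rng.lognormal(2.0, 0.4, n)
y = rng.lognormal(1.5, 0.3, n)
z = rng.lognormal(2.3, 0.5, n)
w = x + y + z  # Monte Carlo counterpart of the convolution (4)

# B = 95th percentile of W minus the sum of the activity medians.
B = np.quantile(w, 0.95) - (np.median(x) + np.median(y) + np.median(z))
print(f"buffer B ≈ {B:.1f}")
```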

To obtain a robust approach to buffer sizing, different forms of prior distribution can be used, trading off computational simplicity of the posterior distributions (see the previous paragraph) against the efficacy of the prior in representing the probability distribution of an activity's duration. As extreme cases, we tested an exponential distribution (for its computational simplicity) and a triangular distribution (for its efficacy in representing an activity's duration) as models for x, i.e. $f(x \mid \theta)$, with convenient choices of prior distributions. Both models revealed significant weaknesses, precisely because they were extreme cases. Medians and quantiles of the posterior predictive distributions could be computed via Monte Carlo simulation; details can be found in [3]. On the other hand, a Gaussian (normal) model for x gave extremely interesting results and the method is therefore described in detail in the following section.

2.3 Application with a normal model

The most attractive aspect of the Gaussian model is that the sum of independent Gaussian random variables is still Gaussian, removing the need for Monte Carlo simulation to evaluate the quantities of interest.

Medians and quantiles can be computed very easily using any software that provides quantiles of the Gaussian distribution. We are aware that activity durations are nonnegative quantities but, as is common practice in Bayesian statistics, the Gaussian distribution is used as a good approximation when its mean and variance assign negligible probability to negative values, as in our case. The Gaussian distribution is also more effective than the exponential or triangular distributions in representing the variability of a duration, since it admits conjugate priors, i.e. priors such that prior and posterior distributions belong to the same family (Gaussian when a prior is placed only on the mean of the Gaussian, Gaussian-Inverse Gamma when the variance is also treated as an unknown parameter). The conjugacy property can be exploited to simplify computations. We assume that:

$$X \mid \theta \sim N(\theta, \sigma^2)$$

with density

$$f(x \mid \theta) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x-\theta)^2}{2\sigma^2}} \qquad (6)$$

The expert is asked by an analyst to provide a maximum and a minimum value within which most of the values are found. A level of confidence, 100β%, on the induced range r is assigned either by the expert or by the analyst helping the expert in the elicitation process. This information is used by the analyst to choose the prior distribution and perform the Bayesian analysis. Because of the symmetry of the Gaussian distribution, the middle point $\mu_0$ of the range r is treated as an opinion on the mean value $\theta$. The opinion on the variance $\sigma^2$ is obtained by considering the range as the size of an interval having probability β under a Gaussian model with variance $\sigma^2$. A well-known result about Gaussian distributions implies

$$r = 2\,\sigma\, z_{(1+\beta)/2}$$

where $z_{(1+\beta)/2}$ is the quantile of order (1+β)/2 of a standard Gaussian distribution, giving:

$$\sigma = \frac{r}{2\, z_{(1+\beta)/2}}$$
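This elicitation step reduces to one line of code; a sketch with a hypothetical range and confidence level:

```python
from scipy.stats import norm

def sigma_from_range(r, beta):
    """Plug-in sigma from an expert's range r that covers 100*beta%
    of the values: sigma = r / (2 * z_{(1+beta)/2})."""
    return r / (2 * norm.ppf((1 + beta) / 2))

# Example: the expert says "between 8 and 16 days, with 90% confidence".
print(round(sigma_from_range(16 - 8, 0.90), 2))  # -> 2.43; mu0 = (8+16)/2 = 12
```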

Once the expert has provided his information and it has been transformed into opinions on the parameters of the model, prior distributions have to be chosen. Although a plethora of distributions could be chosen for the parameters ($\theta$, $\sigma^2$), we will choose a conjugate prior. We could treat the parameter $\sigma^2$ as unknown and consider a Gaussian-Inverse Gamma prior on ($\mu$, $\sigma^2$), but we prefer to take it as fixed. In fact, we are dealing with the case in which the expert provides a minimum amount of information (i.e. only the range of the values), which can reasonably be used to choose a plug-in value for $\sigma^2$ but is not sufficient to properly specify an Inverse Gamma distribution on it.

We only specify a normal prior distribution on $\theta$, choosing $\mu_0$ as its mean and a value $\sigma_0^2$, usually chosen by the analyst, as its variance. The latter choice, quite common in Bayesian analysis, often reflects the analyst's opinion of the "expertise" of the expert, allowing for large values when the analyst's confidence in the expert is weak. A sensitivity analysis for different values of $\sigma_0^2$ often accompanies the analysis; the choice of $\sigma_0^2$ becomes less and less relevant as data accumulate. Therefore, we choose a conjugate Gaussian prior $\theta \sim N(\mu_0, \sigma_0^2)$, with density

$$\Pi(\theta) = \frac{1}{\sqrt{2\pi}\,\sigma_0}\, e^{-\frac{(\theta-\mu_0)^2}{2\sigma_0^2}} \qquad (7)$$

Given the sample $X = (x_1, \dots, x_k)$, from the conjugacy property we obtain the posterior distribution $\theta \mid X \sim N(\mu_k, \sigma_k^2)$, with density

$$\Pi(\theta \mid X) = \frac{1}{\sqrt{2\pi}\,\sigma_k}\, e^{-\frac{(\theta-\mu_k)^2}{2\sigma_k^2}} \qquad (8)$$

where

$$\mu_k = \frac{\dfrac{\mu_0}{\sigma_0^2} + \dfrac{1}{\sigma^2}\displaystyle\sum_{i=1}^{k} x_i}{\dfrac{1}{\sigma_0^2} + \dfrac{k}{\sigma^2}} \qquad (9)$$

$$\sigma_k^2 = \frac{1}{\dfrac{1}{\sigma_0^2} + \dfrac{k}{\sigma^2}} \qquad (10)$$
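Equations (9) and (10) translate directly into code; the prior parameters and data below are hypothetical:

```python
def posterior_params(data, mu0, sigma0_sq, sigma_sq):
    """Conjugate normal update, equations (9) and (10)."""
    k = len(data)
    prec = 1 / sigma0_sq + k / sigma_sq                       # posterior precision
    mu_k = (mu0 / sigma0_sq + sum(data) / sigma_sq) / prec    # eq. (9)
    return mu_k, 1 / prec                                     # (mu_k, sigma_k^2), eq. (10)

# Example: prior N(12, 4), plug-in sigma^2 = 6, three observed durations.
print(posterior_params([11, 13, 15], mu0=12, sigma0_sq=4.0, sigma_sq=6.0))
```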

Simple computations lead to the posterior predictive distribution, given by $x \mid X \sim N(\mu_k, \sigma_p^2)$, with

$$\sigma_p^2 = \sigma^2 + \sigma_k^2$$

The predictive density is

$$f_X(x \mid X) = \frac{1}{\sqrt{2\pi}\,\sigma_p}\, e^{-\frac{(x-\mu_k)^2}{2\sigma_p^2}} \qquad (11)$$

Remembering that the sum W of independent Gaussian random variables has a Gaussian distribution, we find that

$$W \sim N(\mu_W, \sigma_W^2)$$

where

$$\mu_W = \sum_{i=1}^{n} \mu_i, \qquad \sigma_W^2 = \sum_{i=1}^{n} \sigma_i^2$$

Operationally, the steps followed are:
1. Identification of the duration of each individual activity by means of the median of its posterior predictive density, which considers both the historical data and the expert's opinion. Since the distribution is normal, this value is $\mu_k$.
2. Sizing of the buffer by subtracting the sum of the activity medians from the relevant quantile of the total duration W (see the sketch below); the quantile is a function of the desired safety level α and is determined by choosing the value $z_\alpha$ such that $P(W < z_\alpha) = \alpha$. This value can easily be computed with the appropriate function of any statistical software.
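An end-to-end sketch of these two steps, with hypothetical expert ranges, confidence levels and data records (the analyst's choice $\sigma_0^2 = \sigma^2$ is also an assumption):

```python
from scipy.stats import norm

# Each activity: expert range (lo, hi) with confidence beta, plus
# observed historical durations. All numbers are hypothetical.
activities = [
    ((8, 16), 0.90, [11, 13, 15]),
    ((5, 11), 0.90, [7, 9]),
    ((10, 20), 0.90, [14, 16, 17, 13]),
]
alpha = 0.95  # desired safety level

sum_medians, var_w = 0.0, 0.0
for (lo, hi), beta, data in activities:
    mu0 = (lo + hi) / 2                                   # prior mean from the range
    sigma = (hi - lo) / (2 * norm.ppf((1 + beta) / 2))    # plug-in sigma
    sigma0_sq = sigma ** 2                                # analyst's prior variance choice
    prec = 1 / sigma0_sq + len(data) / sigma ** 2
    mu_k = (mu0 / sigma0_sq + sum(data) / sigma ** 2) / prec  # eq. (9)
    var_p = sigma ** 2 + 1 / prec                         # predictive variance, eq. (11)
    sum_medians += mu_k                                   # median = mean for a Gaussian
    var_w += var_p                                        # variances add for W

# Buffer: alpha-quantile of W minus the sum of the activity medians.
B = norm.ppf(alpha, loc=sum_medians, scale=var_w ** 0.5) - sum_medians
print(f"project buffer B ≈ {B:.1f}")
```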

We are aware that asymmetric distributions might better describe the typical duration of the activities, but our choice of the Gaussian distribution will prove helpful in the next section, where it is applied to a case study.

3. Validation in a real case study

To test the model, we used a real case study from the engineering & contracting sector concerning the construction of an industrial plant. The project schedule was simulated using a lognormal distribution to describe the activity durations.

The project is divided into two major phases: an initial project planning phase (feasibility study and basic engineering) and a second realisation phase (detailed engineering, procurement, construction, testing and commissioning). The project network highlighting the temporal dependencies between the activities is shown in Fig. 2. It encompasses 59 activities and 9 chains (8 non-critical and 1 critical). The simulation process required 65,000 iterations. We compared the results of that simulation, used as a benchmark, with the buffer sizing produced by the proposed model (Normal Bayesian – NB) and by the different approaches detailed in section 1 above. Specifically, eight different outputs (in terms of critical chain and related project buffer) were defined:
• Goldratt approach [50%]
• Simulation: [Sim 1] and [Sim 2]
• Root square error: [RSE]
• Forecasting error: [FE]
• Classes of uncertainty: C.U. (l), C.U. (m), C.U. (h)

Fig. 2: Activity network of the case study

It should be noted that the simulation-based approach to buffer sizing is implemented using two different levels of uncertainty [Sim 1 and Sim 2], and the classes of uncertainty method adopts three levels of uncertainty [C.U. (l), C.U. (m), C.U. (h)]. Furthermore, in the cases where the size of the feeding buffer obtained using a given approach was greater than the total float of the non-critical chain, the feeding buffer was reduced to the total float value. Table 1 synthesises the results of the different approaches, giving the length of the feeding buffer (FBi) of each of the eight non-critical paths, the duration of the critical path (CP), the size of the project buffer (PB) and the total duration of the project (CC).

Having completed the comparison, various KPIs (Key Performance Indicators) were defined and computed in order to highlight the conditions which favour one sizing method over another. Our goal, however, was not to identify a single best approach, since experience has shown that this does not exist. These indicators consider the differences between the planned values and the durations returned by the simulations and are briefly described below.

A first Key Performance Indicator (KPI1) seeks to measure the efficacy of a method as a function of the length of the chain. The indicator is given by the ratio between the delay with respect to the forecast completion date of a chain and its duration, averaged over the various chains, including the critical chain (see Fig. 3).

KPI2 measures the under-estimation of the feeding buffers, i.e. how often the duration of a sub-critical chain was so long as to exceed the available feeding buffer and thus impact the project buffer of the critical chain. The higher the value of the indicator, the worse the performance of the method (Fig. 4). This indicator also serves to test the chosen approach with respect to the management of critical nodes, i.e. the merging points between a non-critical chain and the critical chain. Where the feeding buffer was reduced to the total float of the corresponding chain, there were no problems, as the size of the buffer was more than sufficient to absorb variations.

Fig. 3: Performance indicator KPI1

| Model | FB1 | FB2 | FB3 | FB4 | FB5 | FB6 | FB7 | FB8 | CP | PB | CC |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Normal Bayesian [N.B.] | 1 | 4 | 1 | 13 | 7 | 1 | 15 | 1 | 228 | 20 | 248 |
| Forecast Error [F.E.] | 1 | 7 | 3 | 7 | 2 | 1 | 25 | 2 | 217 | 43 | 260 |
| Simulation 2 [Sim 2] | 40 | 5 | 4 | 71 | 6 | 28 | 104 | 105 | 253 | 17 | 270 |
| Simulation 1 [Sim 1] | 40 | 5 | 4 | 71 | 6 | 28 | 104 | 105 | 236 | 26 | 262 |
| Classes of uncert. - low [C.U. (l)] | 1 | 5 | 1 | 12 | 4 | 1 | 19 | 1 | 217 | 32 | 249 |
| Classes of uncert. - medium [C.U. (m)] | 2 | 9 | 1 | 24 | 7 | 1 | 37 | 2 | 217 | 63 | 280 |
| Classes of uncert. - high [C.U. (h)] | 2 | 14 | 2 | 36 | 11 | 2 | 55 | 3 | 217 | 94 | 311 |
| Root square error [R.S.E.] | 5 | 19 | 11 | 7 | 17 | 2 | 8 | 6 | 217 | 55 | 272 |
| Goldratt [50%] | 3 | 15 | 6 | 33 | 9 | 2 | 54 | 3 | 217 | 109 | 326 |

Table 1: Schedules for the various sizing methods


Fig. 4: Performance indicator KPI2

As shown in Fig. 4, all the methods resulted in buffer sizes which rendered the average delay negligible with respect to the overall duration. As is evident from the preceding figures, KPI2 is linked to KPI1: the worst performance is always found in the forecast error and root square error methods.


Fig. 5: Performance indicator KPI3

KPI3 measures the ratio between the average use of the buffer and its length, i.e. which portion of the buffer is used on average. A high value suggests a correct sizing of the buffer (Fig. 5).

Note that this indicator can be used as a driver for two-phase project planning: the buffer is first sized with a simple algorithm (e.g. the 50% method), the simulation is run, and the KPI3 value is then used to choose the effective size of the buffer.

KPI4 measures overall over-estimation, i.e. the ratio between the average early completion and the planned project duration (the length of the critical chain): over-estimation increases the ratio; consequently, a high KPI4 (e.g. for the Goldratt method or the classes of uncertainty method with a high safety level) indicates a non-optimal method.


Fig. 6: Performance indicator KPI4

KPI5 estimates the average incidence of the use of the buffer, calculated as the percentage of runs in which the buffer is used at all. This indicates the precision of the estimated activity durations: if the durations are over-estimated, the buffer is rarely used, and during the bidding phase the contractor would suffer an unnecessary loss of competitiveness. As illustrated in Fig. 7, all models (except the Bayesian approach, which performs worst on this indicator) produce comparable estimates of the duration of the activities.


Fig. 7: Performance indicator KPI5

KPI6 gives the probability of over-running the agreed delivery date. Methods which use aggressive sizing, such as the Bayesian approach and the classes of uncertainty with a low safety level, perform worst but, as shown in Fig. 8, the over-runs are negligible.

KPI7 indicates the competitiveness of the bid in terms of the difference between the method in question and the most prudent planning approach, i.e. the 50% approach. As can be seen in Fig. 9, the most attractive models are the Bayesian, the classes of uncertainty with a low safety level, and the forecast error approaches.

Fig. 8: Performance indicator KPI6


Fig. 9: Performance indicator KPI7

| Model | KPI1 | KPI2 | KPI3 | KPI4 | KPI5 | KPI6 | KPI7 |
|---|---|---|---|---|---|---|---|
| N.B. | 0.02% | 0.30% | 7.20% | 10.07% | 15.90% | 0.25% | 23.93% |
| F.E. | 1.00% | 8.63% | 36.80% | 13.57% | 64.40% | 0.01% | 20.25% |
| Sim. 2 | 0.03% | 0.47% | 9.30% | 17.39% | 55.60% | 0% | 17.18% |
| Sim. 1 | 0.03% | 0.47% | 9.40% | 14.87% | 56.30% | 0% | 19.63% |
| C.U. (l) | 0.67% | 5.14% | 41.90% | 10.16% | 64.10% | 0.27% | 23.62% |
| C.U. (m) | 0.56% | 3.41% | 30.50% | 20.25% | 64.00% | 0% | 14.11% |
| C.U. (h) | 0.03% | 0.47% | 22.20% | 28.28% | 63.90% | 0% | 4.60% |
| R.S.E. | 0.34% | 7.78% | 29.40% | 17.01% | 64.70% | 0.00% | 16.56% |
| 50% | 0.03% | 0.47% | 20.00% | 31.58% | 63.90% | 0% | 0% |

Table 2: Summary of the performance indicators

In order to carry out an overall comparison, we define a single measure of the quality of the methods which includes all the information generated by the KPIs. This measure is built from a weighted sum of the individual KPI rankings and has been named the Global Indicator of Planning Quality (GIPQ):

$$GIPQ_w = \frac{10}{\sum_{i=1}^{N} O_{iw} \cdot p_i} \qquad \forall\, w = 1, \dots, W \qquad (12)$$

Operationally, each of the N KPIs is assigned, for each of the W models, the ranking position obtained by that model (O_iw). Each ranking is weighted according to the importance of the corresponding KPI and the weighted rankings are summed. In order to have the highest value for the best model and the lowest for the worst, the reciprocal is taken and multiplied by 10 to avoid decimal places. A sketch of the computation, using the weights p_i of Table 3 below, follows.
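The ranking vector in this sketch is hypothetical; the weights are those of Table 3:

```python
# Sketch of the GIPQ computation, equation (12).
# Weights p1..p7 as in Table 3; rankings O_iw are hypothetical (1 = best).
weights = [0.025, 0.025, 0.05, 0.075, 0.025, 0.4, 0.4]

def gipq(rankings):
    """Return 10 over the weighted sum of a model's KPI rank positions."""
    return 10 / sum(o * p for o, p in zip(rankings, weights))

# Example: a model ranked 2nd on KPI6 and KPI7 and mid-pack elsewhere.
print(round(gipq([3, 2, 5, 4, 9, 2, 2]), 2))  # -> 4.0
```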

| p1 | p2 | p3 | p4 | p5 | p6 | p7 |
|---|---|---|---|---|---|---|
| 0.025 | 0.025 | 0.05 | 0.075 | 0.025 | 0.4 | 0.4 |

Table 3: Weightings of the different KPIs

Leaving aside the sensitivity analysis that was carried out, Tab. 3 shows an example of the weightings assigned to the KPIs, in which a greater weight has been given to the indicators measuring visible project performance: competitiveness (KPI7) and respect of deadlines (KPI6). The other indicators, which are not perceived by the customer, are decidedly less important. The values of the Global Indicator of Planning Quality derived from the example weightings of Table 3 are shown in Fig. 10. As is evident from Fig. 10, the most effective planning is that proposed by the normal Bayesian model. This result is driven by the weightings, which attribute particular importance to the factors perceived externally by customers, i.e. the competitiveness of the bid and the incidence of delivery date over-runs. The Bayesian method is the most successful because it allows aggressive scheduling (which generates bid competitiveness) while at the same time sizing the buffers so as to obtain generally limited delays in delivery. From a global point of view, the simulation, forecast error and classes of uncertainty with low safety level methods also return good results. The classes of uncertainty with medium or high safety levels, the root square error and the Goldratt methods are not considered optimal, primarily because of the long schedule, which makes the bid less attractive.


Fig. 10: Global Indicator of Planning Quality

4. Conclusions

Although the models were validated on a single case study, it is important to underline the excellent performance of the Bayesian model, which is therefore a valid alternative to the approaches in the literature. It should also be borne in mind that these models are intended for situations in which the sequence of activities is repetitive (as, for example, in the engineering and contracting sector) and where data from historical series can be exploited. Going beyond the area of CCPM, this Bayesian model could also be applied to the sizing of the contingency cost reserve. In that area, too, the integration of data records with expert opinions should lead to an improvement in forecasts and, consequently, in performance.

References

1. Raz, T., Barnes, R., Dvir, D. (2003). A critical look at critical chain project management. Project Management Journal, Vol. 34, No. 4, pp. 24-32.
2. Caron, F., Mancini, M. (2006). Project buffer sizing through historical errors. Proceedings of the 1st ICEC & IPMA Global Congress on Project Management, 23-26 April.
3. Fumagalli, S. (2006). Dimensionamento del project buffer mediante approccio bayesiano. B.Sc. Dissertation, Politecnico di Milano.
4. Goldratt, E.M. (1997). Critical Chain. The North River Press, Great Barrington.
5. Herroelen, W., Leus, R. (2001). On the merits and pitfalls of critical chain scheduling. Journal of Operations Management, Vol. 19, No. 5, pp. 559-577.
6. Hoel, K., Taylor, S.G. (1999). Quantifying buffers for project schedules. Production and Inventory Management Journal, Vol. 40, No. 2, pp. 43-47.
7. Hulett, D., Kendall, G.L., Pitagorsky, G. (2001). Integrating Critical Chain and the PMBOK® Guide. International Institute for Learning.
8. Lynch, W., Simpson, W.P. (1999). Critical success factors in critical chain project management. Proceedings of the PMI Symposium.
9. Newbold, R.C. (1998). Project Management in the Fast Lane: Applying the Theory of Constraints. St. Lucie Press, Boca Raton.
10. Patrick, F.S. (1999). Critical Chain scheduling and buffer management: getting out from between Parkinson's rock and Murphy's hard place. PM Network, Vol. 13, No. 4, pp. 57-62.
11. Rios Insua, D., Ruggeri, F. (2000). Robust Bayesian Analysis. Springer-Verlag, New York.
12. Robert, C.P., Casella, G. (2004). Monte Carlo Statistical Methods (2nd edition). Springer-Verlag, New York.
13. Schuyler, J. (2000). Exploiting the best of Critical Chain and Monte Carlo simulation. PM Network, Vol. 14, pp. 56-60.
14. Shou, Y., Yeo, K.T. (2000). Estimation of project buffers in critical chain project management. Proceedings of the International Conference on Management of Innovation and Technology (ICMIT 2000), 12-15 Nov., Vol. 1, pp. 162-167.
15. Sood, S. (2003). Taming uncertainty. PM Network, Vol. 17 (March), pp. 57-59.