Selection of Estimation Window With Strictly Exogenous Regressors∗

M. Hashem Pesaran (University of Cambridge and USC)
Allan Timmermann (University of California, San Diego)

April 2004

Abstract

This paper derives analytical results for the determination of the estimation window size, exploiting the trade-off between bias and forecast error variance in order to minimize the mean squared forecast error in the presence of breaks. We show analytically how to determine the estimation window optimally for the case with strictly exogenous regressors. Through Monte Carlo simulations the paper compares the performance of the proposed method with that of several existing approaches to forecasting under breaks.

JEL Classifications: C22, C53.
Key Words: Parameter instability, forecasting with breaks, time-varying parameter model, choice of observation window.



∗ We would like to thank James Chu, David Hendry, Adrian Pagan and seminar participants at Erasmus University Rotterdam, the London School of Economics and the University of Southern California for comments and discussion of an earlier draft.

1. Introduction

Consider the univariate linear regression model subject to a single structural break:
\[
y_{t+1} = 1_{\{t \le T_1\}}\,\beta_1' x_t + \left(1 - 1_{\{t \le T_1\}}\right)\beta_2' x_t + u_{t+1}, \qquad t = 1, \ldots, T. \tag{1}
\]

Here $1_{\{t \le T_1\}}$ is an indicator variable that takes the value one when $t \le T_1$ and zero otherwise, $x_t$ is a $p \times 1$ vector of stochastic regressors, $\beta_i$ ($i = 1, 2$) are $p \times 1$ vectors of regression coefficients, and $u_t$ is a serially independent error term that is also distributed independently of $x_s$ for all $t$ and $s$, possibly with a shift in its variance from $\sigma_1^2$ to $\sigma_2^2$ at the time of the break. Assuming that $\beta_1 \ne \beta_2$ or $\sigma_1^2 \ne \sigma_2^2$, it follows that there is a structural break in the data generating process at time $T_1$, which we refer to as the break point.

Suppose that we know that $\beta$ has changed at $T_1$ and our interest lies in forecasting $y_{T+1}$ given the observations $\{y_t, t = 1, \ldots, T\}$ and $\{x_t, t = 1, \ldots, T\}$, which we collect in the information set $\Omega_T = \{x_\tau, y_\tau\}_{\tau=1}^{T}$. How many observations should we use to estimate a model that, when used to generate forecasts, will minimize the mean squared forecast error (MSFE)? Here we are not concerned with the classical problem of identifying the exact point of the break, but rather with the size of the estimation sample that is optimal for out-of-sample forecasting, on the assumption that a structural break has in fact occurred.

The standard solution is to use only observations over the post-break period ($t = T_1 + 1, \ldots, T$) to estimate the model. We show in this paper that this procedure need not be optimal when the objective is to optimize forecasting performance. We study through simulations the forecasting performance of several methods under structural breaks and propose a new procedure that optimally combines pre-break and post-break data in the mean squared error sense.

The intuition behind our proposed method is very simple. If structural breaks characterize a particular time series, using the full historical data series to estimate a forecasting model will lead to forecast errors that are no longer unbiased, although they may also have a lower variance. Provided that the break is not too large, pre-break data will be informative for forecasting outcomes even after the break. By optimally trading off the bias and the forecast error variance, we can potentially improve on the forecasting performance as measured by the MSFE. Our simulations show that forecasting accuracy can be improved by pre-testing for a structural break, but that the overall outcome crucially depends on how well the break point and its size are estimated. In particular, information about the time of the break can be very important to forecasting performance even in the absence of knowledge about the size of the break.
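To make this bias-variance trade-off concrete, the following Python/NumPy sketch simulates the single-regressor version of (1) and compares the Monte Carlo MSFE of a forecast estimated on post-break observations only with that of a forecast estimated on the full sample. All function names and parameter values are illustrative (a modest slope break and a lower pre-break error variance); with these values the full-sample window typically attains the lower MSFE despite its bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_draw(T, T1, b1, b2, s1, s2, rng):
    # One draw from eq. (1) with p = 1: the pair (x_t, y_{t+1}) uses slope b1 and
    # error s.d. s1 for t <= T1, and slope b2 and error s.d. s2 afterwards.
    x = rng.normal(1.0, 1.0, T)                        # x_1, ..., x_T (strictly exogenous)
    t = np.arange(1, T + 1)
    u = rng.normal(0.0, np.where(t <= T1, s1, s2))     # u_2, ..., u_{T+1}
    y = np.where(t <= T1, b1, b2) * x + u              # y_2, ..., y_{T+1}
    return x, y

def mc_msfe(start, T=100, T1=90, b1=1.0, b2=1.2, s1=0.5, s2=1.0, reps=20_000):
    # Average squared error of the forecast of y_{T+1} when beta is estimated by OLS
    # on the pairs (x_t, y_{t+1}), t = start, ..., T-1; start = T1+1 is "post-break only".
    se = 0.0
    for _ in range(reps):
        x, y = one_draw(T, T1, b1, b2, s1, s2, rng)
        xw, yw = x[start - 1:T - 1], y[start - 1:T - 1]
        b_hat = xw @ yw / (xw @ xw)
        se += (y[T - 1] - b_hat * x[T - 1]) ** 2       # error in forecasting y_{T+1}
    return se / reps

print("post-break window only:", mc_msfe(start=91))
print("full estimation window:", mc_msfe(start=1))
```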

The purpose of our paper is to make a simple, yet largely overlooked, theoretical point, namely that when the objective is to minimize the out-of-sample MSFE it is often optimal to use pre-break data to estimate forecasting models on data samples subject to structural breaks. To establish this point we make a set of simplifying assumptions (such as strict exogeneity of the regressors) which are unlikely to hold empirically but allow us to obtain theoretical results that are easy to understand. These assumptions also let us demonstrate analytically the factors determining the optimal estimation window both conditionally and unconditionally. We discuss the limitations of our setup at the end of the paper.

The outline of the paper is as follows. Section 2 sets up the recursive least squares estimation problem for the conditional forecasting problem. Section 3 considers the factors that influence the forecasting performance unconditionally. Section 4 conducts Monte Carlo experiments on the performance of alternative methods for dealing with model instability and Section 5 concludes.

2. Selection of the Optimal Window Size

There are many tests for structural breaks. In the context of linear regression models Chow (1960) derived an F-test for a structural break with a known break point, while Brown, Durbin and Evans (1975) derived CUSUM and CUSUM of squares tests that are also applicable when the time of the break is unknown. More recently, the literature has extended the earlier tests to dynamic models with unit roots, and procedures for consistent estimation of the size and timing of multiple break points have also been developed. See, for example, Ploberger, Kramer and Kontrus (1989), Hansen (1992), Andrews (1993), Inclan and Tiao (1994), Andrews and Ploberger (1996), Chu, Stinchcombe and White (1996) and Bai and Perron (1998).

Less attention has been paid to the problem of determining the size of the estimation window in the presence of breaks. To analyse this question, let $m$ denote the starting point of the sample of the most recent observations to be used in estimation for the purpose of forecasting $y_{T+1}$. Also, let $X_{m,T}$ be the $(T-m+1) \times p$ matrix of observations on the lagged $x$-variables, while $Y_{m,T}$ is the $(T-m+1) \times 1$ vector of observations on the dependent variable whose value for period $T+1$ we are interested in forecasting. Defining the quadratic form $Q_{\tau,T_i} = X_{\tau,T_i}' X_{\tau,T_i}$, so

that $Q_{\tau,T_i} = 0$ if $\tau > T_i$, the OLS estimator of $\beta$ based on the sample from $m$ to $T$ is given by
\[
\hat{\beta}_T(m) = Q_{m,T}^{-1} X_{m,T}' Y_{m,T}. \tag{2}
\]
The forecast error in the prediction of $y_{T+1}$ will be a function of the data sample used to estimate $\beta$ and is given by
\[
e_{T+1}(m) = y_{T+1} - \hat{y}_{T+1} = \left(\beta_2 - \hat{\beta}_T(m)\right)' x_T + u_{T+1}. \tag{3}
\]
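For readers who prefer code to matrix notation, a minimal NumPy version of the windowed estimator in (2) and the implied point forecast used in (3) might look as follows; the function name and data layout are illustrative assumptions, not part of the formal setup.

```python
import numpy as np

def ols_window_forecast(X, Y, m, x_T):
    # Eqs. (2)-(3): OLS on the window [m, T] and the implied forecast of y_{T+1}.
    # X stacks the regressors x_{t-1}' and Y the outcomes y_t, one row per t = 1, ..., T.
    Xm = np.asarray(X, dtype=float)[m - 1:]             # rows t = m, ..., T
    Ym = np.asarray(Y, dtype=float)[m - 1:]
    Q_mT = Xm.T @ Xm                                     # Q_{m,T} = X'_{m,T} X_{m,T}
    beta_hat = np.linalg.solve(Q_mT, Xm.T @ Ym)          # eq. (2)
    y_hat = float(np.asarray(x_T, dtype=float) @ beta_hat)
    return beta_hat, y_hat                               # beta_hat_T(m) and the forecast of y_{T+1}
```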

We implicitly assume that it is known that there is no break in the regression model in period $T+1$. Otherwise the best forecast would need to consider the distribution from which new regression parameters are drawn after a break. To derive analytical results, throughout this paper we will assume that $x_t$ is strictly exogenous, in the sense that it is distributed independently of $u_s$ for all $s$ and $t$, such that
\[
E\!\left[x_t \mid f(u_1, u_2, \ldots, u_{T+1})\right] = 0, \qquad t = 1, \ldots, T, \tag{4}
\]
where $f(\cdot)$ is a general function. In particular, it follows that $E[x_t \mid u_s] = 0$ for all $t$ and $s$. While this assumption is clearly not empirically accurate in many situations, it simplifies the analysis considerably and allows us to derive much stronger and clearer results than would otherwise be possible. Pesaran and Timmermann (2003) deal with the case of finite-order autoregressive processes subject to structural breaks. The advantage of that setup is that it is highly relevant to many empirical situations, but the theoretical results are complicated to interpret. Under strictly exogenous regressors, however, the factors determining the optimal window length under breaks become very transparent. This case therefore serves as a natural theoretical benchmark.

2.1. Conditional MSFE results

We consider in this section the case where the prediction can be conditioned on the sequence of $x_t$ values. Taking expectations of (3) conditional on $X_T = \{x_1, x_2, \ldots, x_T\}$, we get the conditional bias in the forecast error:
\[
\mathrm{bias}(m \mid X_T) \equiv E\!\left[e_{T+1}(m) \mid X_T\right] = \left(\beta_2 - \hat{\beta}_T(m)\right)' x_T. \tag{5}
\]

Furthermore, it can be shown that
\[
\begin{aligned}
e_{T+1}(m \mid X_T) &= \left(\beta_2' - Y_{m,T}' X_{m,T} Q_{m,T}^{-1}\right) x_T + u_{T+1} \\
&= (\beta_2 - \beta_1)'\, Q_{m,T_1} Q_{m,T}^{-1} x_T - u_{m,T}' X_{m,T} Q_{m,T}^{-1} x_T + u_{T+1},
\end{aligned} \tag{6}
\]
where $u_{m,T} = (u_m, u_{m+1}, \ldots, u_T)'$. Squaring this expression and taking expectations, the conditional mean squared forecast error (MSFE) can be computed as follows:
\[
\begin{aligned}
MSFE(m \mid X_T) &= E\!\left[e_{T+1}^2(m) \mid X_T\right] \\
&= \sigma_2^2 + \sigma_2^2 \left(\mu' Q_{m,T_1} Q_{m,T}^{-1} x_T\right)^2
+ x_T' Q_{m,T}^{-1} X_{m,T}' \Sigma_{m,T} X_{m,T} Q_{m,T}^{-1} x_T,
\end{aligned} \tag{7}
\]
where $\mu = (\beta_2 - \beta_1)/\sigma_2$ and $\Sigma_{m,T} = E[u_{m,T} u_{m,T}']$ is a $(T-m+1) \times (T-m+1)$ diagonal matrix with $\sigma_1^2$ in the first $T_1 - m + 1$ diagonal places and $\sigma_2^2$ in the remaining $T - T_1$ places. Intuition is gained by considering the case with a single regressor ($p = 1$):
\[
e_{T+1}(m) = u_{T+1} + \theta_m (\beta_2 - \beta_1) x_T - v_T(m)\, x_T, \tag{8}
\]

where
\[
\theta_m \equiv \theta_m(T_1, T) = \frac{Q_{m,T_1}}{Q_{m,T}} = \frac{\sum_{t=m}^{T_1} x_{t-1}^2}{\sum_{t=m}^{T} x_{t-1}^2},
\qquad
v_T(m) = \frac{\sum_{t=m}^{T} x_{t-1} u_t}{\sum_{t=m}^{T} x_{t-1}^2}. \tag{9}
\]
Hence the conditional MSFE becomes (using (4))
\[
E\!\left[e_{T+1}^2(m) \mid X_T\right]
= \sigma_2^2 + \sigma_2^2\, x_T^2 \left\{\mu^2 \theta_m^2 + \frac{\psi \theta_m + 1}{\sum_{t=m}^{T} x_{t-1}^2}\right\}, \tag{10}
\]
where $\psi = (\sigma_1^2 - \sigma_2^2)/\sigma_2^2$ is the proportional decline in the variance after the break. The MSFE need not be monotonic in $m$: decreasing $m$ leads to a higher $\theta_m$ and an increased squared bias. However, $\left(\sum_{t=m}^{T} x_{t-1}^2\right)^{-1}$ decreases monotonically, and the total effect on the MSFE depends on the balance between these two factors as well as on the extent of the structural break in the regression coefficients ($\mu$) and in the error variances ($\psi$). Clearly there is a trade-off between using a biased estimate, which results from including observations from the pre-break regime, and getting lower variability of the forecast error by including pre-break observations.
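The conditional MSFE in (10) is straightforward to evaluate for any candidate window start $m$. A possible Python implementation for the single-regressor case is sketched below; the array convention $x = [x_0, x_1, \ldots, x_T]$ and the default normalisation of $\sigma_2$ are assumptions made for the sketch.

```python
import numpy as np

def conditional_msfe(m, x, T1, mu, psi, sigma2=1.0):
    # Eq. (10), single-regressor case.  x = [x_0, x_1, ..., x_T] so that x[t] is x_t;
    # the estimation window starts at observation m (1 <= m <= T1 + 1).
    # mu = (beta_2 - beta_1)/sigma_2 and psi = (sigma_1^2 - sigma_2^2)/sigma_2^2.
    x = np.asarray(x, dtype=float)
    T = len(x) - 1
    xsq = x ** 2
    Q_mT = xsq[m - 1:T].sum()          # Q_{m,T}   = sum_{t=m}^{T}  x_{t-1}^2
    Q_mT1 = xsq[m - 1:T1].sum()        # Q_{m,T_1} = sum_{t=m}^{T1} x_{t-1}^2
    theta_m = Q_mT1 / Q_mT             # eq. (9)
    return sigma2**2 * (1.0 + xsq[T] * (mu**2 * theta_m**2 + (psi * theta_m + 1.0) / Q_mT))
```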

The optimal window size can be determined from the value of $m$ that minimizes the conditional MSFE:
\[
m^{*} = \arg\min_{m = 1, \ldots, T_1 + 1} \left\{E\!\left[e_{T+1}^2(m) \mid X_T\right]\right\}. \tag{11}
\]
The number of observations from the first regime used in forecasting depends in a complicated manner on the degree of the structural change, as measured by $\mu = |\beta_2 - \beta_1|/\sigma_2$ and $\psi = (\sigma_1^2 - \sigma_2^2)/\sigma_2^2$, and the relative sample variation in $x_t$ during the second as compared to the first regime.

A simple recursive decision rule can be developed to determine the window size that minimizes the conditional MSFE. The choice of whether to start with observation $t = m+1$ rather than $t = m$ depends on whether
\[
E\!\left[e_{T+1}^2(m) \mid X_T\right] > E\!\left[e_{T+1}^2(m+1) \mid X_T\right], \qquad m = T_1, T_1 - 1, \ldots, 1. \tag{12}
\]
This condition is satisfied if
\[
\sigma_2^2 \left\{\left(\mu' Q_{m,T_1} Q_{m,T}^{-1} x_T\right)^2 - \left(\mu' Q_{m+1,T_1} Q_{m+1,T}^{-1} x_T\right)^2\right\}
> x_T' Q_{m+1,T}^{-1} X_{m+1,T}' \Sigma_{m+1,T} X_{m+1,T} Q_{m+1,T}^{-1} x_T
- x_T' Q_{m,T}^{-1} X_{m,T}' \Sigma_{m,T} X_{m,T} Q_{m,T}^{-1} x_T. \tag{13}
\]
Again the expression simplifies in the case with a single regressor ($p = 1$):
\[
\mu^2 \left(\theta_m^2 - \theta_{m+1}^2\right) > \frac{\psi \theta_{m+1} + 1}{Q_{m+1,T}} - \frac{\psi \theta_m + 1}{Q_{m,T}}. \tag{14}
\]
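In code, the one-step comparison in (14) amounts to checking whether the increase in squared bias from adding observation $m$ outweighs the variance saving. A possible implementation for the single-regressor case, again with the illustrative convention $x = [x_0, \ldots, x_T]$, is the following sketch.

```python
import numpy as np

def msfe_rises_at(m, x, T1, mu, psi):
    # Condition (14): returns True when MSFE(m) > MSFE(m+1) conditionally on the x's,
    # i.e. when the estimation window should start at m+1 rather than at m.
    x = np.asarray(x, dtype=float)
    T = len(x) - 1
    xsq = x ** 2

    def theta_and_Q(j):                                 # theta_j and Q_{j,T} from eq. (9)
        Q_jT = xsq[j - 1:T].sum()
        return xsq[j - 1:T1].sum() / Q_jT, Q_jT

    th_m, Q_m = theta_and_Q(m)
    th_m1, Q_m1 = theta_and_Q(m + 1)
    bias_increase = mu**2 * (th_m**2 - th_m1**2)
    variance_saving = (psi * th_m1 + 1.0) / Q_m1 - (psi * th_m + 1.0) / Q_m
    return bias_increase > variance_saving
```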

To enhance the intuition from this expression, we initially analyze whether it is optimal to base the estimation of $\beta$ only on post-break observations. This will be the case if the following condition holds:
\[
E\!\left[e_{T+1}^2(T_1) \mid X_T\right] > E\!\left[e_{T+1}^2(T_1 + 1) \mid X_T\right]. \tag{15}
\]
Setting $m = T_1$ we see that $\theta_{T_1} = x_{T_1 - 1}^2 \big/ \sum_{t=T_1}^{T} x_{t-1}^2$, and $Q_{T_1+1,T_1} = \theta_{T_1+1} = 0$. Since $x_{T_1}$ is the last observation in the first regime, only observations after the break point will be used if
\[
\mu^2 x_{T_1 - 1}^2 + \psi > \frac{Q_{T_1,T}}{Q_{T_1+1,T}}, \tag{16}
\]
or, equivalently, if
\[
\mu^2 x_{T_1 - 1}^2 + \psi > \frac{\sigma_2^2 \big/ \sum_{t=T_1+1}^{T} x_{t-1}^2}{\sigma_2^2 \big/ \sum_{t=T_1}^{T} x_{t-1}^2}. \tag{17}
\]

The first term on the left hand side is the squared bias in the estimate of $y_T$ induced by including the extra information from the pre-break regime, while the second term is the proportional decrease in volatility after the break. The term on the right hand side is the ratio of the efficiency of the parameter estimates, as measured by the two variances of the OLS estimator of $\beta_2$ based on the longer sample $[T_1, T]$ and the smaller sample $[T_1+1, T]$, respectively.

Some special cases are of particular interest. If there is no break in the mean ($\mu = 0$) and $\sigma_2^2 > \sigma_1^2$, it is optimal to use the full data set (and thus an expanding window) to estimate the regression coefficient, although the forecast error variance should be based on $\sigma_2^2$ and not on the weighted average of $\sigma_1^2$ and $\sigma_2^2$. In contrast, it is not optimal to include pre-break observations when either $\mu^2$ is very large or $\sigma_1^2$ is much larger than $\sigma_2^2$, so that $\psi$ is large. Hence if the break in the mean parameters is large or the pre-break error variance is much higher than the post-break error variance, then only post-break observations should be used in the estimation. However, even if a sizeable break in the mean has occurred, it may still be optimal to include pre-break data provided that the variance of the regression equation is smaller before the break occurred. More generally, from (17) the following comparative statics results can be obtained:

Proposition 1 It is generally optimal to include more pre-break observations to estimate the parameters of the regression model if (i) the break in the mean parameters ($\mu$) is small; (ii) the variance parameter increases at the point of the break ($\sigma_1^2 < \sigma_2^2$); or (iii) the post-break window size ($\nu_2 = T - T_1$) is small.

Provided it is optimal to use some pre-break observations to estimate $\beta$, the next issue that naturally arises is how many such observations to use. In the following proposition we establish conditions under which a recursive stopping rule can be used to determine the optimal window size.

Proposition 2 There exists an optimal stopping rule for determining the window size that minimizes the MSFE conditional on $X_{T+1}$ for the univariate regression model with a single break point in the regression coefficient ($\beta_1 \ne \beta_2$). This rule is to choose a window of observations $[m^{*}, \ldots, T]$, where $m^{*}$ is the largest value of $m$ for which the condition $MSFE(m-1) > MSFE(m)$ holds, with $m = T_1+1, T_1, \ldots, 1$ considered in declining order.

This proposition, which is proved in the Appendix, suggests that even if we knew the point at which a structural break has taken place, it may still be worthwhile to utilize pre-break information to forecast in the second regime. The expression for the MSFE shows the difference in the rates of dependence of the squared bias and the variance of the forecast error with respect to the window size ($T - m + 1$). As one moves further away from the break point, the optimal window size need not expand uniformly, but may initially decline. The optimal window, when plotted against the post-break window size ($T - T_1$), may be U-shaped, and a rolling window is likely to be suboptimal:

Proposition 3 In the univariate regression model with a break in the conditional mean, the optimal window size conditional on $X_{T+1}$ never expands by more than a single observation as the sample size, $T$, increases. Furthermore, the optimal window size need not be a monotonically increasing function of $(T - T_1)$, and may initially decrease before it eventually increases.
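The stopping rule of Proposition 2, combined with the conditional MSFE expression in (10), is easy to operationalise. The sketch below (Python, single-regressor case, $\sigma_2$ normalised to one, a pure coefficient break with $\psi = 0$, and purely illustrative numbers) walks the window start back from the post-break sample, and also tabulates how the selected start date evolves as the post-break sample grows, in the spirit of Proposition 3.

```python
import numpy as np

def conditional_msfe(m, x, T1, mu, psi):
    # Eq. (10), single regressor, sigma_2 normalised to 1; x = [x_0, x_1, ..., x_T].
    T = len(x) - 1
    xsq = x ** 2
    Q = xsq[m - 1:T].sum()
    theta = xsq[m - 1:T1].sum() / Q
    return 1.0 + xsq[T] * (mu**2 * theta**2 + (psi * theta + 1.0) / Q)

def optimal_start(x, T1, mu, psi):
    # Stopping rule of Proposition 2: starting from the post-break window (m = T1 + 1),
    # keep adding pre-break observations while doing so does not raise the MSFE.
    m = T1 + 1
    while m > 1 and conditional_msfe(m - 1, x, T1, mu, psi) <= conditional_msfe(m, x, T1, mu, psi):
        m -= 1
    return m

# How the selected window start evolves with the post-break sample size T - T1
# (all parameter values below are made up for illustration).
rng = np.random.default_rng(1)
T1, mu, psi = 40, 0.5, 0.0
x_path = rng.normal(1.0, 1.0, 201)                     # x_0, x_1, ..., x_200
for post in (1, 2, 5, 10, 20, 40):
    x = x_path[:T1 + post + 1]                         # data up to T = T1 + post
    print(f"T - T1 = {post:2d}  ->  optimal window starts at m* = {optimal_start(x, T1, mu, psi)}")
```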

3. Unconditional MSFE results

The decision rule set out in Proposition 2 conditions the choice of the optimal window size on the sequence of realizations of $x_t$, but it is clearly of interest to investigate which factors determine the optimal window size on average, i.e. across the possible realizations of $x_t$. Provided a process is postulated for $\{x_t\}$, one can integrate out the effect of $X_T$ in the expression for the optimal window size and the resulting MSFE. In general, this can be done through Monte Carlo simulation. However, if the joint process generating $\{u_t, x_{t-1}\}$ is sufficiently simple, analytical results can also be obtained. Considering once again the case with a single regressor, from (8) the first two unconditional moments of $e_{T+1}(m)$ are given by
\[
E\!\left[e_{T+1}(m)\right] = (\beta_1 - \beta_2)\, E\!\left[\theta_m x_T\right],
\]
\[
E\!\left[e_{T+1}^2(m)\right] = (\beta_1 - \beta_2)^2\, E\!\left[\theta_m^2 x_T^2\right] + \sigma_2^2
+ \sigma_2^2\, E\!\left[\frac{x_T^2}{\sum_{t=m}^{T} x_{t-1}^2} + \frac{\psi\, x_T^2\, \theta_m}{\sum_{t=m}^{T} x_{t-1}^2}\right]. \tag{18}
\]
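The Monte Carlo route mentioned above is direct: draw many regressor paths, evaluate the conditional MSFE of (10) on each, and average. A vectorised sketch, assuming i.i.d. normal regressors and $\sigma_2 = 1$ (all names and values are illustrative), is:

```python
import numpy as np

def unconditional_msfe(m, T, T1, mu, psi, mu_x=1.0, omega=1.0, reps=50_000, seed=0):
    # Monte Carlo estimate of the unconditional MSFE in eq. (18): average the
    # conditional MSFE of eq. (10) over draws of x_0, ..., x_T ~ i.i.d. N(mu_x, omega^2).
    rng = np.random.default_rng(seed)
    xsq = rng.normal(mu_x, omega, size=(reps, T + 1)) ** 2
    Q = xsq[:, m - 1:T].sum(axis=1)                    # Q_{m,T} for each draw
    theta = xsq[:, m - 1:T1].sum(axis=1) / Q           # theta_m, eq. (9)
    msfe = 1.0 + xsq[:, T] * (mu**2 * theta**2 + (psi * theta + 1.0) / Q)
    return msfe.mean()

# Example: a post-break-only window versus one that also uses ten pre-break observations.
print(unconditional_msfe(m=41, T=60, T1=40, mu=0.5, psi=0.0))
print(unconditional_msfe(m=31, T=60, T1=40, mu=0.5, psi=0.0))
```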

In the interesting special case where $u_t$ and $x_t$ are i.i.d. and normally distributed univariate processes,
\[
\begin{pmatrix} u_{t+1} \\ x_t \end{pmatrix} \sim \text{IIN}\!\left[\begin{pmatrix} 0 \\ \mu_x \end{pmatrix}, \begin{pmatrix} \sigma^2 & 0 \\ 0 & \omega^2 \end{pmatrix}\right],
\]
we have $E[x_T^2 \theta_m^2] = E[x_T^2]\, E[\theta_m^2]$. Furthermore,
\[
\theta_m \sim \frac{\chi_{\nu_1}^2(\lambda_1)}{\chi_{\nu_1}^2(\lambda_1) + \chi_{\nu_2}^2(\lambda_2)}, \tag{19}
\]
where we have split the total window size $\nu = T - m + 1$ into a pre-break window size $\nu_1 = T_1 - m + 1$ and a post-break window size $\nu_2 = T - T_1$. $\chi_{\nu_1}^2(\lambda_1)$ is a non-central chi-squared distribution with non-centrality parameter $\lambda_1 = \nu_1 \mu_x^2$ and $\nu_1$ degrees of freedom. Likewise, $\chi_{\nu_2}^2(\lambda_2)$ is a non-central chi-squared distribution (independent of $\chi_{\nu_1}^2(\lambda_1)$) with non-centrality parameter $\lambda_2 = \nu_2 \mu_x^2$ and $\nu_2$ degrees of freedom. Hence $\theta_m$ follows a doubly non-central beta distribution with parameters $\nu_1/2$ and $\nu_2/2$ and non-centrality parameters $\lambda_1$ and $\lambda_2$.
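A quick simulation makes the representation in (19) concrete: generating $\theta_m$ directly from i.i.d. normal regressors and, separately, from the ratio of independent non-central chi-squares gives matching moments. The sketch normalises $\omega = 1$, consistent with $\lambda_i = \nu_i \mu_x^2$; the window sizes and seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
mu_x, nu1, nu2, reps = 1.0, 15, 10, 200_000

# theta_m computed directly from nu1 pre-break and nu2 post-break regressors ...
x = rng.normal(mu_x, 1.0, size=(reps, nu1 + nu2))
theta_direct = (x[:, :nu1] ** 2).sum(axis=1) / (x ** 2).sum(axis=1)

# ... and from the ratio of independent non-central chi-squares in eq. (19).
c1 = rng.noncentral_chisquare(nu1, nu1 * mu_x**2, reps)
c2 = rng.noncentral_chisquare(nu2, nu2 * mu_x**2, reps)
theta_ratio = c1 / (c1 + c2)

print(theta_direct.mean(), theta_ratio.mean())   # the means agree
print(theta_direct.var(), theta_ratio.var())     # and so do the variances
```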

Using a result due to Patnaik (1949), approximate closed-form expressions for the first two moments of $\theta_m$ can be obtained.

Appendix

Proof of Proposition 2

= 0 implies that $\Delta(m-1) > 0$. Now $\Delta(m) > 0$ if

\[
\frac{\mu^2 Q_{T_1+1,T}}{Q_{m,T}\, Q_{m+1,T}} \left(2 Q_{m,T} Q_{m+1,T_1} + x_m^2 Q_{T_1+1,T}\right) > 1, \tag{29}
\]
so the result follows if the left hand side of (29) increases when evaluated one period earlier. This is easily demonstrated to hold:
\[
\begin{aligned}
&\frac{\mu^2 Q_{T_1+1,T}}{Q_{m,T}\, Q_{m+1,T}} \left(2 Q_{m,T} Q_{m+1,T_1} + x_m^2 Q_{T_1+1,T}\right)
- \frac{\mu^2 Q_{T_1+1,T}}{Q_{m-1,T}\, Q_{m,T}} \left(2 Q_{m-1,T} Q_{m,T_1} + x_{m-1}^2 Q_{T_1+1,T}\right) \\
&\quad = \frac{\mu^2 Q_{T_1+1,T}}{Q_{m-1,T}\, Q_{m,T}\, Q_{m+1,T}}
\left\{2 Q_{m,T} Q_{m-1,T} Q_{m+1,T_1} + x_m^2 Q_{m-1,T} Q_{T_1+1,T}
- 2 Q_{m-1,T} Q_{m,T_1} Q_{m+1,T} - x_{m-1}^2 Q_{m+1,T} Q_{T_1+1,T}\right\}.
\end{aligned}
\]
Using that $Q_{m,T} = Q_{m+1,T} + x_m^2$ and $Q_{m,T_1} = Q_{m+1,T_1} + x_m^2$, the term inside the curly brackets simplifies to
\[
\begin{aligned}
&2 Q_{m-1,T}\left\{\left(Q_{m+1,T} + x_m^2\right) Q_{m+1,T_1} - \left(Q_{m+1,T_1} + x_m^2\right) Q_{m+1,T}\right\}
+ Q_{T_1+1,T}\left(x_m^2 Q_{m-1,T} - x_{m-1}^2 Q_{m+1,T}\right) \\
&\quad = -Q_{T_1+1,T}\left(x_m^2 Q_{m-1,T} + x_{m-1}^2 Q_{m+1,T}\right) < 0.
\end{aligned}
\]
Hence we have shown that if $\Delta(m) > 0$ (so that the MSFE increases by starting the estimation window at $t = m$ rather than at $t = m+1$), then the MSFE will also increase by starting the window at observation $t = m-1$ rather than at $t = m$. By induction, the optimal window size will be the largest value $t = m$ for which $\Delta(m) > 0$.

Proof of Proposition 3

Suppose the sample is extended from $T$ to $T+1$ and recall that the stopping rule determining $m^{*}$ is the first value of $m$ (arranged in decreasing order) for which $\Delta(m-1) > 0$. We need to show that if $\Delta(m^{*}(T)) > 0$, then $\Delta(m^{*}(T+1)) > 0$. This can be done by showing that (29) implies that
\[
\frac{\mu^2 Q_{T_1+1,T+1}}{Q_{m,T+1}\, Q_{m+1,T+1}} \left(2 Q_{m,T+1} Q_{m+1,T_1} + x_m^2 Q_{T_1+1,T+1}\right) > 1. \tag{30}
\]
Sufficient conditions for the difference between the left hand sides of (30) and (29) to be positive are that
\[
\frac{Q_{T_1+1,T+1}\, Q_{m+1,T_1}}{Q_{m+1,T+1}} > \frac{Q_{T_1+1,T}\, Q_{m+1,T_1}}{Q_{m+1,T}},
\qquad \text{and} \qquad
\frac{Q_{T_1+1,T+1}^2}{Q_{m,T+1}\, Q_{m+1,T+1}} > \frac{Q_{T_1+1,T}^2}{Q_{m,T}\, Q_{m+1,T}}.
\]
The first condition holds if
\[
\left(Q_{T_1+1,T} + x_{T+1}^2\right) Q_{m+1,T} > Q_{T_1+1,T}\left(Q_{m+1,T} + x_{T+1}^2\right),
\]
i.e. if $x_{T+1}^2\left(Q_{m+1,T} - Q_{T_1+1,T}\right) > 0$, which holds since $m < T_1 + 1$. The second condition holds if
\[
Q_{m,T}\, Q_{m+1,T}\left(Q_{T_1+1,T}^2 + x_{T+1}^4 + 2 x_{T+1}^2 Q_{T_1+1,T}\right)
> \left(Q_{m,T} + x_{T+1}^2\right)\left(Q_{m+1,T} + x_{T+1}^2\right) Q_{T_1+1,T}^2,
\]
or
\[
Q_{m,T}\, Q_{m+1,T}\left(x_{T+1}^4 + 2 x_{T+1}^2 Q_{T_1+1,T}\right)
> Q_{T_1+1,T}^2\left(x_{T+1}^4 + Q_{m,T}\, x_{T+1}^2 + Q_{m+1,T}\, x_{T+1}^2\right).
\]
This in turn is satisfied since it simplifies to a sum of positive terms:
\[
x_{T+1}^4\left(Q_{m,T} Q_{m+1,T} - Q_{T_1+1,T}^2\right)
+ Q_{m,T}\, Q_{T_1+1,T}\, x_{T+1}^2\left(Q_{m+1,T} - Q_{T_1+1,T}\right)
+ Q_{m+1,T}\, Q_{T_1+1,T}\, x_{T+1}^2\left(Q_{m,T} - Q_{T_1+1,T}\right) > 0.
\]
To prove that $\Delta(m, T) > 0$ does not rule out that $\Delta(m+2, T+1) > 0$, notice that $\Delta(m, T) > 0$ holds provided that
\[
\frac{1}{Q_{m,T}\, Q_{m+1,T}}\left(2 Q_{m,T} Q_{m+1,T_1} + x_m^2 Q_{T_1+1,T}\right) > \frac{1}{\mu^2 Q_{T_1+1,T}}, \tag{31}
\]
while the second condition holds if
\[
\frac{1}{Q_{m+2,T+1}\, Q_{m+3,T+1}}\left(2 Q_{m+2,T+1} Q_{m+3,T_1} + x_{m+2}^2 Q_{T_1+1,T+1}\right) > \frac{1}{\mu^2 Q_{T_1+1,T+1}}. \tag{32}
\]
It is easily seen that (31) does not rule out (32).

Bibliography

Andrews, D.W.K., 1993, Tests for Parameter Instability and Structural Change with Unknown Change Point. Econometrica 61, 821-856.
Andrews, D.W.K. and W. Ploberger, 1996, Optimal Changepoint Tests for Normal Linear Regression. Journal of Econometrics 70, 9-38.
Bai, J. and P. Perron, 1998, Estimating and Testing Linear Models with Multiple Structural Changes. Econometrica 66, 47-78.
Brown, R.L., J. Durbin, and J.M. Evans, 1975, Techniques for Testing the Constancy of Regression Relationships over Time. Journal of the Royal Statistical Society, Series B, 37, 149-192.
Chow, G., 1960, Tests of Equality Between Sets of Coefficients in Two Linear Regressions. Econometrica 28, 591-605.
Chu, C-S. J., M. Stinchcombe, and H. White, 1996, Monitoring Structural Change. Econometrica 64, 1045-1065.
Cooley, T.F. and E.C. Prescott, 1976, Efficient Estimation in the Presence of Stochastic Parameter Variation. Econometrica 44, 167-184.
Hansen, B.E., 1992, Tests for Parameter Instability in Regressions with I(1) Processes. Journal of Business and Economic Statistics 10, 321-335.
Harvey, A.C., 1989, Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press, Cambridge.
Inclan, C. and G.C. Tiao, 1994, Use of Cumulative Sums of Squares for Retrospective Detection of Changes of Variance. Journal of the American Statistical Association 89, 913-923.
Patnaik, P.B., 1949, The Non-Central χ²- and F-Distributions and Their Applications. Biometrika 36, 202-232.
Pesaran, M.H. and A. Timmermann, 2003, Small Sample Properties of Forecasts from Autoregressive Models under Structural Breaks. Forthcoming in Journal of Econometrics.
Ploberger, W., W. Kramer, and K. Kontrus, 1989, A New Test for Structural Stability in the Linear Regression Model. Journal of Econometrics 40, 307-318.


Figures

[Figure: Optimal pre-break window, break parameters known. Vertical axis: pre-break window; horizontal axis: periods after break.]

[Figure: Optimal full window size, break parameters known. Vertical axis: full window size; horizontal axis: periods after break.]

[Figure: Mean squared forecast error performance, break parameters known. Vertical axis: MSFE; horizontal axis: periods after break. Series: stopping rule, post-break, expanding, rolling.]

[Figure: Mean squared forecast error performance, unknown size of the break. Vertical axis: MSFE; horizontal axis: periods after break. Series: stopping rule, post-break, rolling, TVP, expanding.]

[Figure: Recursively estimated time of break point, unknown break parameters. Vertical axis: estimated break point; horizontal axis: periods after break.]

[Figure: Mean squared forecast error performance, unknown time and size of break. Vertical axis: MSFE; horizontal axis: periods after break. Series: stopping rule, post-break, expanding, rolling.]