
Measurement of Technical Efficiency
A brief survey on parametric and non-parametric techniques

Francesco Porcelli
January 2009


Contents

1 Measurement of efficiency
  1.1 Koopmans and Debreu-Farrell measure of technical efficiency
    1.1.1 Output approach
    1.1.2 Input approach
    1.1.3 Features of Debreu-Farrell measure of technical efficiency
  1.2 Measure of allocative efficiency
  1.3 Techniques for measurement of technical efficiency

2 Mathematical programming models: the non-parametric approach
  2.1 DEA: one-input, one-output model
  2.2 Multiple input and multiple output model
    2.2.1 Model with variable return to scale
    2.2.2 Model without strong disposability
    2.2.3 Relaxation of the convexity of the output and input set
  2.3 Stochastic DEA model

3 Stochastic frontier models: the parametric approach
  3.1 One-output, multiple-input cross-sectional model of technical efficiency
    3.1.1 Estimation techniques
  3.2 One-output, multiple-input panel data models of technical efficiency
    3.2.1 Fixed effects model
    3.2.2 Random effects model
    3.2.3 Some problems

References


1 Measurement of efficiency

Following Lovell [1993], the productivity of a production unit can be measured by the ratio of its output to its input. However, productivity varies with differences in production technology, in the production process, and in the environment in which production occurs. The main interest here is in isolating the efficiency component in order to measure its contribution to productivity. Producers are efficient if they have produced as much as possible with the inputs they have actually employed, and if they have produced that output at minimum cost [Greene, 1997]. It is important, however, to be aware that efficiency is only one part of overall performance: as reported in Figure 1, a complete analysis also involves the measurement of effectiveness, the degree to which a system achieves programme and policy objectives in terms of outcomes, accessibility, quality and appropriateness [Worthington and Dollery, 2000].

Figure 1: Framework for performance assessment. (Performance divides into efficiency and effectiveness. Efficiency covers resource management, allocative efficiency, and technical efficiency (Koopmans 1951, Debreu 1951, Farrell 1957), measured under the input or the output approach; effectiveness covers outcomes, accessibility, quality and appropriateness.)

1.1 Koopmans and Debreu-Farrell measure of technical efficiency

Even if the empirical part of this research focuses on measuring technical efficiency, it is important to define both concepts of efficiency reported in Figure 1.

1. Allocative (or price) efficiency refers to the ability to combine inputs and outputs in optimal proportions in the light of prevailing prices, and is measured against a behavioural goal of the production unit: for example, observed versus optimum cost, or observed versus optimum profit.

2. Technical efficiency is measured as the ratio between the observed output and the maximum output, under the assumption of fixed input, or, alternatively, as the ratio between the observed input and the minimum input under the assumption of fixed output. In the literature there are two main definitions of technical efficiency.

(a) According to Koopmans [1951], "a producer is technically efficient if an increase in an output requires a reduction in at least one other output or an increase in at least one input, and if a reduction in any input requires an increase in at least one other input or a reduction in at least one output".

(b) In contrast, Debreu [1951] and Farrell [1957] defined the following measure of technical efficiency, known as the Debreu-Farrell measure: "one minus the maximum equiproportionate reduction in all inputs that still allows the production of given outputs; a value of one indicates technical efficiency and a score less than unity indicates the severity of technical inefficiency".

Finally, both technical and allocative efficiency can be measured under two main approaches:

1. the input approach, which considers the ability to avoid waste by using as little input as output production allows, i.e. the ability to minimise inputs keeping outputs fixed;

2. the output approach, which considers the ability to avoid waste by producing as much output as input usage allows, i.e. the ability to maximise outputs keeping inputs fixed.

It is useful, now, to derive both the output- and input-oriented Debreu-Farrell efficiency measures, which differ from Koopmans's measure only in the presence of input and output slacks, as we will see in Section 1.1.3.

1.1.1 Output approach

We can define:

• the output vector y = (y1, ..., ym) ∈ R^m_+;
• the input vector x = (x1, ..., xn) ∈ R^n_+;
• the output set P(x) = {y : (x, y) is feasible};
• the production frontier (or isoquant from the output perspective) IsoqP(x) = {y : y ∈ P(x), ey ∉ P(x) for e ∈ (1, +∞)};
• the efficient subset EffP(x) = {y : y ∈ P(x), y′ ∉ P(x) for y′ > y};
• the Shephard [1953, 1970] output distance function D_O(x, y) = min{e : (y/e) ∈ P(x)}.

Then we can obtain the following relations:

EffP(x) ⊆ IsoqP(x)
IsoqP(x) = {y : D_O(x, y) = 1}

At this point it is possible to define the Debreu-Farrell output-oriented technical efficiency measure:

DF_O(x, y) = max{e : ey ∈ P(x)}   (1)

where DF_O(x, y) ≥ 1. Finally, considering the definition of Shephard's output distance function, we can derive two important relations:

DF_O(x, y) = 1 / D_O(x, y)   (2)

and

IsoqP(x) = {y : DF_O(x, y) = 1}   (3)
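To make the definitions concrete, here is a tiny sketch for a hypothetical linear output set P(x) = {(y1, y2) ≥ 0 : y1 + 2·y2 ≤ x}. The output set, the function names and the numbers are illustrative only, not part of the survey:

```python
# Hypothetical linear output set: P(x) = {(y1, y2) >= 0 : y1 + 2*y2 <= x}

def df_output(x, y1, y2):
    """Debreu-Farrell output measure max{e : e*y in P(x)}; closed form here."""
    return x / (y1 + 2 * y2)

def shephard_output_distance(x, y1, y2):
    """Shephard output distance min{e : y/e in P(x)}, the reciprocal of DF."""
    return (y1 + 2 * y2) / x

# an interior point: x = 10, y = (2, 2), so y1 + 2*y2 = 6 < 10
e = df_output(10, 2, 2)
assert e > 1                                                        # relation (1)
assert abs(e * shephard_output_distance(10, 2, 2) - 1.0) < 1e-12    # relation (2)
```

A point on the frontier (e.g. y = (4, 3) with x = 10) gives a score of exactly one, matching relation (3).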


Figure 2 displays a piece-wise production frontier IsoqP(x) and the corresponding output set P(x) for a two-output (Y1 and Y2), one-input (x) technology. Output combinations that lie on the isoquant, for example y_c and y_d, identify fully efficient producers. Conversely, output combinations that lie inside the production frontier, for example y_a and y_b, identify inefficient producers. In this example the expansion of y_a and y_b toward the production frontier, along the rays that start from the origin, corresponds to the Debreu-Farrell output-oriented technical efficiency measures, respectively e_a and e_b. Then e_a y_a and e_b y_b are the projected (or ideal) output combinations, i.e. the output combinations that producers a and b would have attained had they been fully efficient.

Figure 2: Representation of the Debreu-Farrell output-oriented efficiency measure.

1.1.2 Input approach

Similarly to the output approach, we can define:

• the output vector y = (y1, ..., ym) ∈ R^m_+;
• the input vector x = (x1, ..., xn) ∈ R^n_+;
• the input set L(y) = {x : (y, x) is feasible};
• the isoquant IsoqL(y) = {x : x ∈ L(y), ρx ∉ L(y) for ρ ∈ [0, 1)};
• the efficient subset EffL(y) = {x : x ∈ L(y), x′ ∉ L(y) for x′ ≤ x};
• the Shephard [1953, 1970] input distance function D_I(y, x) = max{ρ : (x/ρ) ∈ L(y)}.

Then we obtain the following relations:

EffL(y) ⊆ IsoqL(y)
IsoqL(y) = {x : D_I(y, x) = 1}


Similarly, we define the Debreu-Farrell input-oriented technical efficiency measure:

DF_I(y, x) = min{ρ : ρx ∈ L(y)}   (4)

where DF_I(y, x) ≤ 1. From equation (4) and the definition of the input distance function it follows that:

DF_I(y, x) = 1 / D_I(y, x)   (5)

and

IsoqL(y) = {x : DF_I(y, x) = 1}   (6)

Figure 3 displays a piece-wise isoquant IsoqL(y) and the corresponding input set L(y) for a two-input (X1 and X2), one-output (y) technology. Input combinations that lie on the isoquant, for example x_c and x_d, identify fully efficient producers. Conversely, input combinations that lie to the right of the isoquant, for example x_a and x_b, identify inefficient producers. In this example the contraction of x_a and x_b toward the isoquant, along the rays that start from the origin, corresponds to the Debreu-Farrell input-oriented technical efficiency measures ρ_a and ρ_b. Then ρ_a x_a and ρ_b x_b are the projected (or ideal) input combinations, i.e. the input combinations that producers a and b should have employed to be fully efficient.

Figure 3: Representation of the Debreu-Farrell input-oriented efficiency measure.

1.1.3 Features of Debreu-Farrell measure of technical efficiency

The DF measure of technical efficiency is widely used, and since it is the reciprocal of a distance function it satisfies several properties, including:

• homogeneity of degree one in outputs and inputs;
• weak monotonicity (weakly monotonically decreasing) in inputs and outputs;


• invariance with respect to changes in the units of measurement.

It is important to stress that the DF measure does not always coincide with Koopmans's definition of efficiency: a DF score of one is necessary but not sufficient for Koopmans technical efficiency. For example, in both Figure 2 and Figure 3, points e_b y_b and ρ_b x_b satisfy the DF conditions but not the Koopmans conditions. In fact, in the case of the output approach (Figure 2) it is possible to increase the production of output Y2 without reducing output Y1, and similarly in the case of the input approach (Figure 3) it is possible to reduce the consumption of input X2 without increasing that of X1. Points ρ_a x_a and e_a y_a, instead, clearly satisfy both definitions of technical efficiency.

This problem is related to the possibility of input or output slacks. It disappears in much econometric analysis, where the parametric form of the function used to represent the production technology (e.g. Cobb-Douglas) imposes equality between isoquants and efficient subsets. The problem is typically important in the mathematical programming approach, where the non-parametric form of the frontier used to represent the boundary of production possibilities admits slack by construction, as a result of the piece-wise output frontier or isoquant. In this case, a possible way to deal with the problem is to report the DF technical efficiency scores and the slacks separately, side by side. Coelli et al. [2005] observed, however, that the importance of slacks can be overstated, since they are essentially an artefact of the chosen construction method and of the finite sample size. Moreover, it has also been stressed that the problem of slacks can be viewed as one of allocative inefficiency, so when the focus of the analysis is on technical efficiency it can be ignored without problems.

1.2 Measure of allocative efficiency

If price information is available it is possible to compute allocative efficiency from either the input or the output side. For example, in the case of the input approach let us assume:[1]

• the vector of input prices w = (w1, ..., wn);
• the vector of output prices p = (p1, ..., pm);
• the minimum cost function (or cost frontier) c(y, w; β) = min{w′x : D_I(y, x; β) ≥ 1}, where y is the output and β is a vector of parameters describing the structure of the production technology.

Therefore, assuming that:

• x_e is the input vector that minimises the cost of producing y, so that w′x_e = c(y, w; β);
• x_a is the observed input vector;

we have the following results:

• cost efficiency corresponds to

  c(y, w; β) / w′x_a = w′x_e / w′x_a

• technical efficiency corresponds to

  ρ_a = w′(ρ_a x_a) / w′x_a

[1] A detailed analysis, including the output approach case, can be found in Lovell [1993].


• allocative efficiency corresponds to

  cost efficiency / technical efficiency = w′x_e / w′(ρ_a x_a)   (7)

which is bounded by zero and one, as are its two components.

Figure 4 represents the decomposition of cost efficiency into its three components, in relation to the input combination x_a:

• the technical inefficiency component (T), the distance between the red and the green line;
• the slack component (S), the distance between the green and the black line;
• the allocative inefficiency component (A), the distance between the black line and the isocost line (the blue one).


Figure 4: Representation of the decomposition of the cost inefficiency measure.

For example, in Figure 4 the input combination x_e lies on the isoquant and on the isocost line (the blue one), and therefore it is efficient from the allocative and, consequently, also from the technical point of view. Conversely, the input combination x_b, although technically efficient, exhibits some cost inefficiency that makes it inefficient from the allocative point of view. We can also see that x_a is not technically efficient, because it is inside the input set, and consequently it does not satisfy the conditions for allocative efficiency. Therefore, technical efficiency is a necessary but not sufficient condition for allocative efficiency, while allocative efficiency implies technical efficiency, because by assumption we do not observe input combinations to the left of the isoquant. Moreover, the correct identification of slacks is much more important in allocative efficiency analysis, since they are one component of cost inefficiency, while for technical efficiency they are just an additional piece of information. Of course, the same conclusions would be obtained under the output approach.[2]

[2] The output approach can be found in Lovell [1993].
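A small numerical sketch of this decomposition (the prices, quantities and radial contraction ρ_a are made up, and the slack component is taken to be zero for simplicity):

```python
import numpy as np

# Made-up data: input prices w, observed bundle x_a, its Debreu-Farrell
# contraction rho_a, and the cost-minimising bundle x_e on the isoquant.
w = np.array([2.0, 1.0])
x_a = np.array([6.0, 4.0])
rho_a = 0.75
x_e = np.array([3.0, 5.0])

cost_eff = (w @ x_e) / (w @ x_a)             # c(y, w; beta) / w'x_a = 11/16
tech_eff = (w @ (rho_a * x_a)) / (w @ x_a)   # equals rho_a
alloc_eff = cost_eff / tech_eff              # equation (7)

assert abs(tech_eff - rho_a) < 1e-12
assert 0 < alloc_eff <= 1
assert abs(cost_eff - tech_eff * alloc_eff) < 1e-12   # the decomposition holds
```

Here cost efficiency (11/16) factors into technical efficiency (0.75) times allocative efficiency (11/12), mirroring the T and A components of Figure 4.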

1.3 Techniques for measurement of technical efficiency

Essentially there are two main methodologies for measuring technical efficiency: the econometric (or parametric) approach and the mathematical programming (or non-parametric) approach. The two techniques use different methods to envelop the data, and in doing so they make different accommodation for random noise and for flexibility in the structure of the production technology. Hence they differ in many ways, but the advantages of one approach over the other boil down to two characteristics:

• the econometric approach is stochastic and attempts to distinguish between the effects of noise and the effects of inefficiency, while the linear programming approach is deterministic and lumps noise and real inefficiency together under the single label of inefficiency;

• the econometric approach is parametric and as a result suffers from functional-form misspecification, while the programming approach is non-parametric and so is immune to this form of misspecification.

Both methodologies will be surveyed in more detail in Sections 2 and 3.

2 Mathematical programming models: the non-parametric approach

Usually the mathematical programming approach to the evaluation of efficiency goes under the descriptive title of Data Envelopment Analysis (DEA), which, under certain assumptions about the structure of the production technology, envelops the data as tightly as possible. If only data on quantities are available, it is only possible to compute technical efficiency measures; when prices are also available, economic efficiency (in terms of costs or profits) can be computed and decomposed into its technical, allocative, and slack components. In this research DEA will be employed to evaluate technical efficiency in the health care sector of each regional government. In fact, DEA was first developed for public sector analysis of technical efficiency, where price information is not available or not reliable.[3] Moreover, Pestieau and Tulkens [1990] argue that public providers have objectives and constraints different from those of private providers, so the only common ground on which to compare their performance is their technical efficiency.

2.1 DEA: one-input, one-output model

To get the flavour of DEA, Figure 5 analyses the simplest case of a one-output, one-input model, computing technical efficiency scores under the output approach.[4] The x-axis measures the input quantity and the y-axis the output quantity. Each point represents the input-output combination of one producer. DEA envelops all these points in order to compute a piece-wise frontier over them; the efficiency score of each producer then depends on its distance from the frontier. In Figure 5, for example, producers E, H, I, L, M and N are fully efficient, since their input-output combinations lie on the frontier, while producers F and G are inefficient. The efficiency scores of producers F and G correspond respectively to e_F^DEA ≃ B/(A+B) and e_G^DEA ≃ D/(C+D). We can see that producer G is more efficient than producer F; in fact:

e_F^DEA ≃ B/(A+B) < D/(C+D) ≃ e_G^DEA

As a result, the efficiency index of each producer is a number between zero and one, and the closer it is to one, the more efficient the producer. Unity indicates a fully efficient input-output combination.
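Under the simplifying assumption of constant returns to scale, the one-input, one-output score reduces to the ratio of each producer's output-input ratio to the best observed ratio. A minimal sketch (the data and function name are made up for illustration):

```python
# One-input, one-output DEA under constant returns to scale: the frontier
# is the steepest observed ray through the origin.
def dea_scores_crs(x, y):
    """Output-oriented efficiency: observed output over frontier output."""
    best_ratio = max(yi / xi for xi, yi in zip(x, y))
    return [yi / (best_ratio * xi) for xi, yi in zip(x, y)]

inputs = [2.0, 4.0, 6.0]    # made-up producers
outputs = [4.0, 6.0, 6.0]
scores = dea_scores_crs(inputs, outputs)
# the first producer defines the frontier ray, so its score is exactly 1
assert scores[0] == 1.0 and all(0 < s <= 1 for s in scores)
```

The piece-wise frontier of Figure 5 corresponds to a variable-returns technology; the general multi-input case requires the linear programmes of the next section.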

2.2 Multiple input and multiple output model

In the case of multiple inputs and/or multiple outputs, DEA uses linear programming to construct a non-parametric piece-wise surface (or frontier) over the data, so as to calculate efficiencies without parameterising the technology. In the DEA model of Charnes et al. [1978], the objective is to measure the performance of each producer relative to the best observed practice in a sample of N producers, who employ a vector of inputs x of dimension (p × 1) to produce a vector of outputs y of dimension (q × 1), under the following restrictions:

• constant returns to scale;
• strong disposability of inputs and outputs;
• convexity of the set of feasible input-output combinations.

[3] DEA can also be used to compute allocative efficiency, but this methodology will not be surveyed here since it will not be used in the empirical part of this research. Ali and Seiford [1993] provide a complete analysis.
[4] Under the output approach the distance from the frontier is computed vertically, while under the input approach it is computed horizontally.

Figure 5: Data Envelopment Analysis, one-input, one-output model.

Since in the empirical part we assume that regional governments, as producers of health care services, operate in order to maximise output given a predetermined amount of input, let us consider the output-oriented approach in more detail.[5] Under these assumptions DEA requires the solution of the following fractional programming problem for each producer in the sample:

min_{u,v}  v′x_i / u′y_i
s.t.  v′x_j / u′y_j ≥ 1,  j = 1, ..., i, ..., N
      u, v ≥ 0   (8)

where:

• u is a (q × 1) vector of output weights;
• v is a (p × 1) vector of input weights;
• (x_i, y_i) is the input-output vector of the producer being evaluated;
• (x_j, y_j) is the input-output vector of the j-th producer in the sample.

The ratio model in (8) can, however, be converted into the following linear programming "multiplier" problem:

min_{u,v}  v′x_i
s.t.  u′y_i = 1
      v′x_j ≥ u′y_j,  j = 1, ..., i, ..., N
      u, v ≥ 0   (9)

[5] The input approach differs very little from the output approach; Ali and Seiford [1993] provide an extensive analysis of both methodologies.


Finally, the linear programming problem in (10), which is the dual of the problem in (9), is solved to derive the technical efficiency score of each producer, e_i^DEA ∈ [0, 1], i = 1, 2, ..., N. In particular, let e_i^DEA = 1/φ_i^DEA; then φ_i^DEA solves the linear programme:

max_{φ,λ}  φ_i
s.t.  Xλ ≤ x_i
      φ_i y_i ≤ Yλ
      λ ≥ 0   (10)

where:

• i = 1, 2, ..., N;
• x_i is the (p × 1) input vector of the i-th producer;
• y_i is the (q × 1) output vector of the i-th producer;
• X is a (p × N) input matrix, where p is the number of inputs;
• Y is a (q × N) output matrix, where q is the number of outputs;
• λ is an (N × 1) intensity vector.

Therefore, our measure of technical efficiency, e_i^DEA, corresponds to the inverse of the solution φ_i^DEA of the problem in (10), which represents the DEA variant of the Debreu-Farrell output measure of technical efficiency DF_O(x, y) considered in Section 1.1. In this way e_i^DEA ∈ [0, 1] for all i = 1, 2, ..., N.

However, e_i^DEA = 1 is a necessary but not a sufficient condition for technical efficiency according to Koopmans's definition, since the ideal input-output combination (φ_i^DEA y_i, x_i) may contain slacks in any of its (p + q) dimensions.[6] In fact, technical efficiency according to Koopmans, as we have seen in Section 1.1, is reached when:

e_i^DEA = 1,  Xλ = x_i,  Yλ = y_i

The problem in (10) is then solved N times, in order to obtain N values of e_i^DEA and N vectors λ_i, one for each producer. The optimal vector λ_i is a vector of weights that tells us the position of the ideal (or projected) input-output combination of the i-th producer on the frontier.
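As a computational illustration, the envelopment problem (10) can be solved with an off-the-shelf LP solver. The sketch below uses SciPy's `linprog`; the data and the function name are illustrative, not part of the survey:

```python
import numpy as np
from scipy.optimize import linprog

def dea_output_crs(X, Y):
    """Output-oriented CRS DEA: solve problem (10) for every producer.

    X is a (p x N) input matrix, Y a (q x N) output matrix.
    Returns e_i = 1/phi_i for i = 1..N.
    """
    p, N = X.shape
    q = Y.shape[0]
    scores = np.empty(N)
    for i in range(N):
        # decision variables: [phi, lambda_1, ..., lambda_N]; maximise phi
        c = np.r_[-1.0, np.zeros(N)]
        # constraints: X @ lam <= x_i   and   phi * y_i - Y @ lam <= 0
        A_ub = np.vstack([
            np.hstack([np.zeros((p, 1)), X]),
            np.hstack([Y[:, [i]], -Y]),
        ])
        b_ub = np.r_[X[:, i], np.zeros(q)]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(1.0, None)] + [(0.0, None)] * N,
                      method="highs")
        scores[i] = 1.0 / res.x[0]      # e_i = 1/phi_i, in (0, 1]
    return scores

# two producers, one input, one output: the second dominates the first
e = dea_output_crs(np.array([[1.0, 1.0]]), np.array([[1.0, 2.0]]))
```

Adding the VRS constraint ι′λ = 1 of the next subsection amounts to passing one `A_eq`/`b_eq` row that sums the λ block to one.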

2.2.1 Model with variable return to scale

To relax the restrictive hypothesis of constant returns to scale (CRS), we add one of the following constraints to the problem in (10):

• ι′λ = 1, to obtain variable returns to scale (VRS);
• ι′λ ≤ 1, to obtain non-increasing returns to scale (NRS);

where ι is an (N × 1) vector of ones.

As reported in Figure 6, in relation to the input-output combination (y1, x1), the output-oriented and the input-oriented approaches can provide different signals concerning scale economies. These signals are not inconsistent, because they are based on different conceptions of the scale of production and because they are obtained by looking in different directions toward different points on the production frontier. Only in the case of constant returns to scale is there no difference between the output and the input approach. For example, in Figure 6, under the assumption of constant returns to scale the distance from the frontier of the input-output combination (y1, x1) does not change whether it is measured horizontally (AC) or vertically (AE). Conversely, in the case of variable returns to scale the distance from the frontier is greater if measured vertically (output approach) than horizontally (input approach), i.e. AD > AB.

[6] The problem of slacks has been considered in Section 1.1.3.


Figure 6: Returns to scale in DEA.

Assuming, for example, variable returns to scale, scale efficiency can be evaluated by considering the distance between the VRS frontier and the CRS frontier. In Figure 6, only the output-input combination (y2, x2) is scale-efficient. Under the output approach, the scale efficiency of the combination (y1, x1) corresponds to the following ratio:

e_1^SE = FD/FE

Given that technical efficiency, measured in terms of ratios, corresponds to e_1^VRS = FA/FD under variable returns to scale, and to e_1^CRS = FA/FE under constant returns to scale, the following result can be derived:[7]

e_i^CRS = e_i^VRS × e_i^SE,  ∀i

Therefore, under the assumption of VRS, technical efficiency is computed without considering the effect of the scale component. This is the approach that will be followed in the empirical part.
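The decomposition e_CRS = e_VRS × e_SE can be checked with a two-line sketch (the scores are made up):

```python
def scale_efficiency(e_crs, e_vrs):
    """Residual scale component implied by e_CRS = e_VRS * e_SE."""
    return e_crs / e_vrs

e_crs, e_vrs = 0.72, 0.9
e_se = scale_efficiency(e_crs, e_vrs)      # 0.8
assert abs(e_vrs * e_se - e_crs) < 1e-12   # the decomposition holds
```

A producer with e_VRS = 1 but e_CRS < 1 is technically efficient and purely scale-inefficient.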

2.2.2 Model without strong disposability

Strong disposability is related to the possibility of changing the quantity of one input or output factor without changing the output produced, a notion clearly related to the possibility of slacks. Strong disposability is rarely relaxed, and in the empirical part of this work the assumption will be maintained. Weak disposability is obtained by replacing:

• Xλ ≤ x_i with αXλ = x_i;
• φ_i y_i ≤ Yλ with βYλ = φ_i y_i;

where α, β ∈ (0, 1].

[7] In this case we are considering the output approach, since distances from the frontier are measured vertically; under the input approach we obtain the same results simply by measuring the distance from the frontier horizontally.

2.2.3 Relaxation of the convexity of the output and input set

When we relax the assumption of convexity of the output or input set, we obtain the Free Disposal Hull (FDH) model. Although FDH is an old idea, it was introduced into the frontier literature only in 1984, by Deprins et al. [1984]. To obtain the FDH model, the following constraint replaces the last constraint in (10):

λ_i ∈ {0, 1},  i = 1, ..., N

It is therefore worth noting that linear programming is not even needed to solve the FDH efficiency measurement problem. Comparing DEA frontiers (where the assumption of convexity holds) with FDH frontiers, one can see that FDH envelops the data more closely and has a more restrictive notion of domination than DEA does. Slacks, however, are a much more serious problem in FDH than in DEA. For this reason, in the empirical part of this research I shall not relax the convexity assumption. The FDH version of the one-input, one-output model represented in Figure 5 is displayed in Figure 7.
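Because FDH compares each producer only with observed units that dominate it in inputs, the scores can be computed by simple enumeration. A minimal sketch of the output-oriented case (data and function name are illustrative):

```python
import numpy as np

def fdh_output_scores(X, Y):
    """Output-oriented FDH: X is (p x N) inputs, Y is (q x N) outputs."""
    N = X.shape[1]
    e = np.ones(N)
    for i in range(N):
        phi = 1.0
        for j in range(N):
            if np.all(X[:, j] <= X[:, i]):                 # j uses no more of any input
                phi = max(phi, np.min(Y[:, j] / Y[:, i]))  # radial output expansion
        e[i] = 1.0 / phi
    return e

# one input, one output: unit 1 dominates unit 0, unit 2 uses less input
X = np.array([[2.0, 2.0, 1.0]])
Y = np.array([[1.0, 3.0, 2.0]])
e = fdh_output_scores(X, Y)
assert np.all(e <= 1.0)   # unit 0 scores 1/3; units 1 and 2 are undominated
```

No convex combinations are formed: each score comes from an actually observed producer, which is exactly the more restrictive notion of domination mentioned above.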


Figure 7: Data Envelopment Analysis, FDH one-input, one-output model when we relax the assumption of convexity of the output or input set.

2.3 Stochastic DEA model

A shortcoming of all models based on linear programming is that they are non-stochastic; consequently, efficiency scores are contaminated by omitted variables, measurement error, and other sources of statistical noise. This typical problem of DEA models can be addressed either by collecting and measuring accurately all relevant variables or by using stochastic DEA models (SDEA). To use SDEA, however, it is necessary to provide information about the expected values and variances of all variables, as well as the probability levels at which feasibility constraints are satisfied. Another, more practical, alternative for including statistical noise in the model is to use a parametric approach to the estimation of the production function. This leads us to frontier production function models, the subject-matter of the next section.

3 Stochastic frontier models: the parametric approach

In this research we are going to consider only the stochastic frontier models based on the estimate of the frontier production function. In Greene’s words, "the frontier production function is an extension of the familiar regression model based on the microeconomic premise that a production function represents some sort of ideal, the maximum output attainable given a set of inputs" [Greene, 1997]. In practice, the frontier production function is a regression where the estimation of the production function is implemented with the recognition of the theoretical constraint that all observations lie below it, and it is generally a means to another end, the analysis of efficiency. A measure of efficiency emerges naturally after the computation of the frontier production function, since it corresponds to the distance between an observation and the empirical estimate of the theoretical ideal. The estimated models provide a means of comparing individual agents, either with the ideal production frontier or with each other, and also provide point estimates of environmental variables’ effect on efficiency.

3.1 One-output, multiple-input cross-sectional model of technical efficiency

Stochastic frontier models (SFM), developed simultaneously by Aigner et al. [1977] and Meeusen and van den Broeck [1977], are made up of three components: the deterministic production function, the idiosyncratic error, and the inefficiency error component. Since the error term has two components, stochastic frontier models are often referred to as "composed error models". The general version of the stochastic frontier production function model can be written as:

y_i = f(x_i; β) exp(v_i) exp(−u_i),  i = 1, 2, ..., I   (11)

where:

• x_i is the input vector of producer i;
• y_i is the single output of producer i;
• f(x_i; β) is the deterministic component of the production function, where β is a vector of technology parameters;
• exp(v_i) is the stochastic component of the production function, which accounts for statistical noise in the production process; we therefore assume that v_i has a symmetric distribution with zero mean;
• finally, the possibility of inefficiency is captured by the second error component u_i, which is assumed to be distributed independently of v_i in such a way as to satisfy the restriction u_i ≥ 0.

Following Lovell [1993], the econometric version of the Debreu-Farrell output-oriented technical efficiency measure corresponds to the reciprocal of DF_O(x_i, y_i) and is written:

e_i^sf = y_i / [f(x_i; β) exp(v_i)] = exp(−u_i),  i = 1, 2, ..., I   (12)

where e_i^sf ∈ (0, 1], and a value of one indicates a fully efficient producer. These models allow for technical inefficiency while controlling for random shocks that affect output and are outside the control of producers. As stressed by Kumbhakar and Lovell [2000], the great virtue of SFM is that the impact on output (and so also on efficiency) of shocks owing to variation in inputs can be separated from the effect of environmental variables. When one of the two error components is removed from the model we no longer have an SFM, and we end up with one of two other broad families of models:


• if v_i = 0 for all i, then y_i = f(x_i; β) exp(−u_i) becomes a deterministic frontier production function model (DFM);
• if u_i = 0 for all i, then y_i = f(x_i; β) exp(v_i) is just a stochastic production function model.

The production model is usually linear in the logs of the variables, so for estimation purposes the model in (11) usually becomes:

log y_i = log f(x_i; β) + v_i − u_i,  i = 1, 2, ..., I   (13)

where u_i = log(1/e_i^sf), and f(x_i; β) can take many functional forms; the two most used in empirical work are the following:

• Cobb-Douglas:

  f(x_i; β) = β_0 ∏_{k=1}^{K} x_k^{β_k},  ∀i

• Translog:

  f(x_i; β) = exp( β_0 + Σ_{k=1}^{K} β_k log x_k + (1/2) Σ_{k=1}^{K} Σ_{l=1}^{K} β_kl log x_k log x_l ),  ∀i

In both cases β_0 is a constant term, and restrictions on the technology parameters are usually imposed in order to ensure that f(x_i; β) is homogeneous of degree not greater than one, so as to rule out the possibility of increasing returns to scale.[8]

Figure 8 provides a graphical representation of the SFM. In a deterministic framework the producer with the observed output/input combination (y1, x1) would be evaluated as less efficient than the producer with the observed quantities (y2, x2) (consider the distance between the dot and the asterisk). Conversely, in a stochastic framework we observe the opposite outcome, since v1 < 0 and v2 > 0 (consider the distance between the × and the asterisk). This example provides a powerful theoretical argument against the deterministic approach. As will be discussed in more detail in Section ??, however, the stochastic noise can contain elements of the environment that affect output; these can be introduced into the model among the inputs, in order to explain the reasons behind a bad or good performance instead of ascribing them to a generic ability or disability. Therefore, by excluding what we may call the "environment effect", one risks overestimating or underestimating inefficiency in a stochastic framework as well.

Estimation techniques

On the assumption that vi = 0, the deterministic production frontier can be estimated using any of the following three techniques:

1. Corrected Ordinary Least Squares (COLS), first proposed by Winsten [1957] but usually attributed to Gabrielsen [1975], does not require any assumption about the functional form of ui. It estimates the technology parameters of (13) by OLS and corrects the downward bias in the estimated OLS intercept by shifting it up until all corrected residuals are non-positive and at least one is zero. The corrected residuals are then used in Equation (12) to estimate e_i^sf, which by construction satisfy the constraint 0 < e_i^sf ≤ 1.

8 A detailed discussion of the functional forms of the production function can be found in Coelli et al. [2005].


Francesco Porcelli - Measurement of Technical Efficiency

Figure 8: The stochastic production frontier model.

2. Modified Ordinary Least Squares (MOLS), introduced by Richmond [1974], requires an assumption about the functional form of ui (half normal, truncated normal, exponential, etc.). MOLS estimates β of Equation (13) by OLS and shifts the OLS intercept up by the estimated mean of ui, which is extracted from the moments of the OLS residuals. Finally, the OLS residuals are modified in the opposite direction and used to estimate e_i^sf in Equation (12). The pitfall of this technique is that there is no guarantee that the intercept will be shifted up enough to cover all the observations, so for some of them we may obtain e_i^sf > 1, which is very difficult to justify.

3. Maximum Likelihood Estimation (MLE) was proposed by Afriat [1972] and apparently first used by Greene [1980] and Stevenson [1980]. This estimator assumes a distribution for ui and simultaneously estimates the technology parameters (β) and the moments of the distribution of ui. The MLE frontier envelops all observations, and the residuals are inserted in Equation (12) to estimate e_i^sf, which by construction satisfy the constraint 0 < e_i^sf ≤ 1.

Lovell [1993] challenged the assumption of vi = 0 because it combines the bad features of both the econometric approach and the linear programming approach: it makes the model deterministic and parametric at the same time. All deviation from the frontier is attributed to technical inefficiency, since no accommodation for noise is made. As reported in Figure 9, COLS and MOLS are further deficient because they correct only the intercept with respect to OLS, leaving the slope parameters (β) unchanged; as a result the structure of the efficient frontier technology is the same as the structure of the technology used by less efficient producers. In other words, the COLS and MOLS frontiers do not necessarily bound the data from above as closely as possible, since they are required to be parallel to the OLS regression. Therefore MOLS and COLS assign the same efficiency ranking as OLS does; the only difference is that with MOLS and COLS the magnitude of the efficiency scores becomes available. Instead, MLE allows for structural dissimilarities between the OLS and frontier technologies. Consequently, when we make distributional assumptions about the composite error term, MLE is the most suitable estimator, while COLS seems appropriate only when we want to avoid distributional assumptions about ui. To understand why, consider again the model in (11), where the composite


Figure 9: MOLS, COLS, and MLE deterministic production frontiers.

error (vi − ui) is asymmetric since we impose ui ≥ 0; then, assuming that vi and ui are distributed independently of xi, COLS provides consistent estimates of the slope parameters only, but not of the intercept, which corresponds to E(vi − ui) = −E(ui) ≤ 0. Consequently, it will not be possible to compute estimates of producer-specific technical efficiency. Therefore, among the three estimators the choice is between COLS and MLE, according to the assumptions one is prepared to make. The assumptions under which these two estimators are consistent, and the shape of the likelihood function, will be considered in relation to panel data models, also because, as will become clear later, the only possibility of estimating an SFM by means of COLS arises when we have a longitudinal dataset; with cross-sectional datasets there is very little to compare, since MLE is in the end the only possible estimator. From the previous discussion we understand that when we choose the parametric approach we need a stochastic approach as well, since the assumption vi ≠ 0 is more appropriate for accommodating deviations caused by pure chance. On the other hand, the stochastic approach requires a wider set of assumptions and, unlike the deterministic frontier case, the residuals obtained from MLE contain both noise and inefficiency, and must be decomposed in order for the distance from the frontier measured by ui to be assessed. The decomposition problem was first solved by Jondrow et al. [1982], who specified the functional form of the distribution of the one-sided inefficiency component and derived the conditional distribution of (ui | vi − ui). Either the mode or the mean of this distribution provides a point estimate of ui, which can be inserted in Equation (12) to obtain estimates of e_i^sf. We will describe this procedure in detail for the case of panel data models.
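The COLS procedure described above is simple enough to sketch in a few lines. The following Python snippet is a minimal illustration on simulated data: the sample size, parameter values, and the half-normal draw for ui are my own assumptions, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-log data for a deterministic frontier (v_i = 0):
# log y_i = b0 + b1 * log x_i - u_i, with one-sided inefficiency u_i >= 0.
n = 200
logx = rng.uniform(0.0, 3.0, n)
u = np.abs(rng.normal(0.0, 0.3, n))          # half-normal inefficiency draws
logy = 1.0 + 0.6 * logx - u                  # true frontier: b0 = 1.0, b1 = 0.6

# Step 1: ordinary least squares on the log model (13).
X = np.column_stack([np.ones(n), logx])
beta_ols, *_ = np.linalg.lstsq(X, logy, rcond=None)
resid = logy - X @ beta_ols

# Step 2: COLS correction -- shift the intercept up by the largest residual,
# so every corrected residual is non-positive and at least one is exactly zero.
beta_cols = beta_ols.copy()
beta_cols[0] += resid.max()
resid_cols = resid - resid.max()

# Step 3: efficiency scores as in Equation (12): e_i = exp(-u_i).
eff = np.exp(resid_cols)
```

Note how the slope is left untouched: only the intercept is corrected, which is exactly the deficiency of COLS relative to MLE discussed above.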

3.2 One-output, multiple-input panel data models of technical efficiency

At the heart of this approach is the association of the "individual effect" from the panel data literature with the "inefficiency error term"; how this association is formulated distinguishes the models from each other. Depending on the assumptions made about the inefficiency error component (the equivalent of the unobserved heterogeneity of classical panel data terminology), two broad families of models will be


distinguished: fixed effects models (FE) and random effects models (RE). For each of them different estimators can be used to consistently estimate the technology parameters as well as the individual-specific inefficiency terms. The general model in (11) becomes:

yit = f(xit; β)exp(vit − ui),   i = 1, 2, ..., I and t = 1, 2, ..., T.   (14)

The basic assumptions required in all panel data models are the following:

a1 vit ∼ i.i.d.(0, σv²), homoscedasticity of the stochastic error term;

a2 E[vit|vjt] = 0 ∀i ≠ j, independence of the idiosyncratic error across individuals;

a3 E[vit|vis] = 0 ∀t ≠ s, independence of the idiosyncratic disturbances across time, which implies a dynamically correctly specified model;

a4 E[yit|xi1, xi2, ..., xit, ui] = f(xit; β) ∀i, which implies that f(xit; β) is correctly specified and that the regressors are weakly exogenous;

a5 ui is constant over time and restricted to be non-negative, ui ≥ 0;

a6 the asymptotic properties are in principle based on both dimensions of the panel, N → ∞ and T → ∞.

To simplify the discussion we assume that a Cobb-Douglas production function with non-neutral technological change9 correctly specifies the production function, and we consider a linear transformation of the model in (14) using a log-log functional form. As a result the general form of the empirical model becomes:

yit = β0 + τt + Σ_{k=1}^{K} δk (t·xk) + Σ_{k=1}^{K} βk xk + vit − ui   (15)

where i = 1, 2, ..., I and t = 1, 2, ..., T, variables are expressed in natural logs, and Σ_{k=1}^{K} βk ≤ 1 is imposed in order to rule out the possibility of increasing returns to scale.

3.2.1 Fixed effects model

In the fixed effects model we do not make any assumption about the inefficiency error term ui; as a result the technology parameters (β and δ) in (15) can be consistently estimated by Least Squares Dummy Variables (LSDV) or by the Within Group estimator (WG). Both estimators are essentially COLS applied to a modified version of the original model. With LSDV the inefficiency error term is treated in a parametric way with the introduction of a dummy variable for each producer, so COLS is applied to the following version of the model:

yit = β0i + τt + Σ_{k=1}^{K} δk (t·xk) + Σ_{k=1}^{K} βk xk + vit   (16)

where β0i = β0 − ui.

9 The non-neutral technological change is introduced through a linear trend and its interaction with the inputs; moreover, the Cobb-Douglas specification implicitly assumes that the technological change is constant over time.


Then, to obtain a consistent estimate of β0i, and also to avoid an incidental parameter problem, we need to restrict assumption a6, so the asymptotic properties of the model will be based only on T → ∞ with N fixed. After the estimation we employ the normalisation

β0^(LSDV) = max_i {β0i^(LSDV)}   (17)

and the ui are estimated from

ui = β0^(LSDV) − β0i^(LSDV)   (18)

which ensures that all ui ≥ 0. Finally, producer-specific estimates of technical efficiency are obtained by

e_i^FE = exp(−ui)   (19)

As underlined by Schmidt and Sickles [1984], as T → ∞ we can consistently separate the overall intercept from the one-sided individual effect, which provides only a measure of efficiency relative to an absolute standard represented by the most efficient producer in the sample, which will have e_i^FE = 1. The efficiency of the most efficient producer in the sample, however, will approach one only as N → ∞. Therefore with LSDV we obtain only a consistent "relative" measure of efficiency. If assumption a6 holds, estimates of ui are fully consistent, but the incidental parameter problem makes LSDV infeasible, so we have to use WG, which corresponds to COLS applied to the transformation of the model where all data are expressed as deviations from producer means, and the ui are recovered from the means of the producer residuals:

u∗i = (1/T) Σ_{t=1}^{T} (yit − β0^(WG) − τ^(WG) t − Σ_{k=1}^{K} δk^(WG) (t·xk) − Σ_{k=1}^{K} βk^(WG) xk)   (20)

ui = max_i {u∗i} − u∗i
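The WG step and the Schmidt and Sickles [1984] normalisation can be illustrated with a short simulation. This is a hedged sketch on hypothetical data: one log input, the time trend of (15) omitted for brevity, and all parameter values invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical balanced panel: N producers, T periods, one log input.
N, T = 50, 10
u_i = np.abs(rng.normal(0.0, 0.4, N))          # time-invariant inefficiency, u_i >= 0
x = rng.uniform(0.0, 3.0, (N, T))              # log input
v = rng.normal(0.0, 0.1, (N, T))               # idiosyncratic noise
y = 2.0 + 0.7 * x + v - u_i[:, None]           # log output; true frontier slope 0.7

# Within-group (WG) step: demeaning by producer sweeps out beta0 - u_i.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_wg = (xd * yd).sum() / (xd ** 2).sum()

# Recover producer effects as mean residuals (the u*_i of (20)), then
# normalise against the best producer in the sample.
u_star = (y - beta_wg * x).mean(axis=1)        # estimates (beta0 - u_i) for each i
u_hat = u_star.max() - u_star                  # relative inefficiency, u_hat >= 0
eff = np.exp(-u_hat)                           # efficiency scores as in (19)
```

The best producer in the sample gets u_hat = 0 and efficiency 1, which is exactly the "relative" character of the FE measure stressed in the text.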

The main advantage of the FE model is its robustness and consistency, since we do not need any assumption about ui; this model can therefore be taken as a sort of benchmark for the others. This comes at a price, however: time-invariant variables are not allowed in the model and, as stressed by Kumbhakar and Lovell [2000], the fixed effects (the ui) capture not only variation in time-invariant technical efficiency across producers but, unfortunately, also all other time-invariant heterogeneity.10

3.2.2 Random effects model

The first step in estimating an RE model is to introduce three new assumptions:

a7 E[ui|xi1, xi2, ..., xiT] = 0, independence between the inefficiency error term and the inputs;

a8 ui ∼ i.i.d.(µ, σu²), we do not need to specify the distribution of the inefficiency error term but we have to estimate its variance;

a9 E[vit|ui] = 0 ∀i and ∀t, independence between the two error components.

10 This is, however, a common problem with all panel data SFM. This issue will be discussed in detail in Section 3.2.3.


Under assumptions a1-a9, Feasible Generalised Least Squares (FGLS) is a consistent estimator of the technology parameters considering the following transformation of the original model in (15):

yit = [β0 − E(ui)] + τt + Σ_{k=1}^{K} δk (t·xk) + Σ_{k=1}^{K} βk xk + vit − [ui − E(ui)]
    = β0∗ + τt + Σ_{k=1}^{K} δk (t·xk) + Σ_{k=1}^{K} βk xk + vit − u∗i   (21)

We can recover estimates of the producer-specific inefficiency u∗i from the residuals through the same methodology used in the case of FE:

u∗i = (1/T) Σ_{t=1}^{T} (yit − β0∗^(FGLS) − τ^(FGLS) t − Σ_{k=1}^{K} δk^(FGLS) (t·xk) − Σ_{k=1}^{K} βk^(FGLS) xk)   (22)

ui = max_i {u∗i} − u∗i
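One way to implement the FGLS step is the textbook random-effects quasi-demeaning transform, with σv² estimated from the WG residuals and σu² from the between-group regression. The sketch below is my own illustration on simulated data (one log input, no trend, all values hypothetical); the weight θ is the standard RE-GLS transform, an assumption on my part rather than a formula from the text.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical panel with composite error v_it - u_i (one log input).
N, T = 200, 6
u_i = np.abs(rng.normal(0.0, 0.4, (N, 1)))     # time-invariant inefficiency
x = rng.uniform(0.0, 3.0, (N, T))
y = 2.0 + 0.7 * x + rng.normal(0.0, 0.2, (N, T)) - u_i

# sigma_v^2 from within-group (WG) residuals.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_wg = (xd * yd).sum() / (xd ** 2).sum()
s2_v = ((yd - beta_wg * xd) ** 2).sum() / (N * (T - 1) - 1)

# sigma_u^2 from the between-group (BG) regression on producer means.
xb, yb = x.mean(axis=1), y.mean(axis=1)
Xb = np.column_stack([np.ones(N), xb])
g, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
s2_b = ((yb - Xb @ g) ** 2).sum() / (N - 2)    # estimates var(u_i) + s2_v / T
s2_u = max(s2_b - s2_v / T, 0.0)

# FGLS via the quasi-demeaning weight implied by the equi-correlated
# structure: var = sigma_v^2 + sigma_u^2, cov = sigma_u^2.
theta = 1.0 - np.sqrt(s2_v / (s2_v + T * s2_u))
xq = (x - theta * x.mean(axis=1, keepdims=True)).ravel()
yq = (y - theta * y.mean(axis=1, keepdims=True)).ravel()
Xq = np.column_stack([np.full(N * T, 1.0 - theta), xq])
beta_fgls, *_ = np.linalg.lstsq(Xq, yq, rcond=None)
```

The recovery of the u∗i from the FGLS residuals then proceeds exactly as in (22), with the same max-normalisation used for the FE model.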

Under assumptions a7-a9, FGLS is implemented considering an "equi-correlated error structure" for the variance-covariance matrix of the composite error term (vit − ui), since var[vit − ui] = σv² + σu² and cov[(vit − ui), (vis − ui)] = σu²; consistent estimates of σv² and σu² are obtained from the residuals of the WG estimator and of the between group estimator (BG) of the model in (15), respectively. Under assumptions a1 to a9, and specifying a distribution for ui such that ui ≥ 0 and a symmetric distribution for vit, it is possible to estimate the parameters of the model in (15) by Maximum Likelihood (MLE); estimates of the technical efficiency of each producer are then obtained in a second step from the conditional distribution of ui given the composite error term (vit − ui), using the decomposition approach of Jondrow et al. [1982] (JLMS, from the authors of the paper). Following Pitt and Lee [1981], who first proposed MLE in the context of panel data SFM, the preliminary step in writing the complete log-likelihood function is to assume a distribution g(u) for the inefficiency error term; we then define the composite error ǫit = vit − ui. At this stage it is possible to write the joint density of (ǫi1, ..., ǫiT), which corresponds to the "unconditional likelihood function":

h(ǫi1, ǫi2, ..., ǫiT) = ∫_0^∞ ∏_{t=1}^{T} f(ǫit + ui) g(ui) dui   (23)

where f(·) is the density of vit.

Given this density, the complete log-likelihood function is:

log L(τ, δ, β, σv², σu²; data) = Σ_{i=1}^{N} log h(ǫi1, ǫi2, ..., ǫiT)   (24)
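The integral in (23) is rarely coded by hand in practice, but a direct numerical version makes its structure transparent. The sketch below assumes a truncated-normal g(u) and normal vit; the parameter values and residuals are illustrative placeholders, not estimates from the text.

```python
import numpy as np
from scipy import integrate, stats

# Numerical version of (23)-(24) for a single producer.
mu, s_u, s_v, T = 0.2, 0.3, 0.2, 4
eps = np.array([-0.10, 0.05, -0.30, -0.20])   # composite residuals e_it = v_it - u_i

def h(eps, mu, s_u, s_v):
    """Unconditional density h(e_i1, ..., e_iT) from Equation (23)."""
    def integrand(u):
        # g(u): normal(mu, s_u^2) truncated at zero, i.e. N+(mu, s_u^2).
        g_u = stats.norm.pdf(u, mu, s_u) / stats.norm.cdf(mu / s_u)
        # prod over t of f(e_it + u), the density of v_it evaluated at e_it + u.
        return np.prod(stats.norm.pdf(eps + u, 0.0, s_v)) * g_u
    val, _ = integrate.quad(integrand, 0.0, np.inf)
    return val

loglik_i = np.log(h(eps, mu, s_u, s_v))       # one term of the sum in (24)
```

The full log-likelihood (24) is just the sum of such terms over producers; an MLE routine would maximise it over (τ, δ, β, µ, σu², σv²).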

According to the assumptions about the distribution of the error terms, at least four different models can be found in the literature on SFM: the Normal-Half Normal (NHN), Normal-Exponential (NE), Normal-Truncated Normal (NTN), and Normal-Gamma (NG). Clearly, the distribution of the idiosyncratic error is always normal, whereas the distribution of the inefficiency error term is chosen in order to compel ui to be non-negative. As reported by Kumbhakar and Lovell [2000] and Coelli et al. [2005], theoretical considerations should guide the choice of the distributional specification. For example, the Normal-Half Normal and


Normal-Exponential models are appropriate when most inefficiency effects are assumed to be in the neighbourhood of zero, implying that the modal value of technical efficiency is one, with decreasing values of technical efficiency becoming increasingly less likely. The Normal-Truncated Normal and Normal-Gamma models are more flexible, since the modal efficiency value can also be away from one, and for this reason in most empirical works the NTN is usually preferred to the NHN. Unfortunately, this sort of flexibility comes at the cost of more computational complexity, and if the probability density functions of ui and vit are similar it becomes more difficult to distinguish inefficiency effects from noise. Finally, different distributional assumptions will produce different predictions of technical efficiency, but the rankings of producers on the basis of predicted technical efficiency have been found to be quite robust to the choice of distributional assumption. Since the NE and NG models are rarely used in empirical works, and the NHN can be seen as a particular version of the NTN model, we consider this last model in more detail.11

In the Normal-Truncated Normal model assumptions a8 and a1 become less general, since ui ∼ i.i.d. N+(µ, σu²). The density function of the truncated normal variable u is:

g(u) = [1 / ((2π)^{1/2} σu Φ(µ/σu))] exp(−(u − µ)² / (2σu²)),   u ≥ 0   (25)

After the estimation of the structural parameters through the maximisation of the log-likelihood function shown in (24), we need a second stage to estimate the producer-specific time-invariant technical efficiency terms using the JLMS decomposition method. The first step is to derive the conditional distribution of u given ǫ:

g(u|ǫ) = ∏_{t=1}^{T} f(ǫit + u) g(u) / ∫_0^∞ ∏_{t=1}^{T} f(ǫit + ui) g(ui) dui = [1 / ((2π)^{1/2} σ∗ (1 − Φ(−µ∗/σ∗)))] exp(−(u − µ∗)² / (2σ∗²))   (26)

which is distributed as N+(µ∗, σ∗²), where:

• µ∗ = (µσv² − σu² Σ_{t} ǫit) / (σv² + T σu²);

• σ∗² = σu² σv² / (σv² + T σu²).

As a result, either the mean or the mode of this distribution can be taken as a point estimate of the producer-specific inefficiency ui. Using the mean we have:

E[ui|ǫi] = µ∗i + σ∗ [φ(−µ∗i/σ∗) / (1 − Φ(−µ∗i/σ∗))]   (27)

or, using the mode:

M[ui|ǫi] = µ∗i if µ∗i > 0, and 0 otherwise   (28)

11 Moreover, software packages such as STATA and FRONTIER 4.1 work assuming an NTN model, and this will also be my choice in the empirical part.
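The JLMS point estimate in (27) is straightforward to compute once µ∗i and σ∗ are in hand. The following sketch uses illustrative numbers: µ, σu, and σv stand in for the MLE estimates, and the composite residuals are invented for the example.

```python
import numpy as np
from scipy.stats import norm

# JLMS conditional-mean estimate for the Normal-Truncated Normal panel model.
mu, s_u, s_v, T = 0.2, 0.3, 0.2, 4
eps_i = np.array([-0.10, 0.05, -0.30, -0.20])  # composite residuals for producer i

# Parameters of the conditional distribution N+(mu_star, s_star^2) in (26).
mu_star = (mu * s_v**2 - s_u**2 * eps_i.sum()) / (s_v**2 + T * s_u**2)
s_star = np.sqrt(s_u**2 * s_v**2 / (s_v**2 + T * s_u**2))

# Conditional mean (27) and the implied efficiency score (29).
z = -mu_star / s_star
u_hat = mu_star + s_star * norm.pdf(z) / (1.0 - norm.cdf(z))
eff = np.exp(-u_hat)
```

Because the distribution is truncated at zero, the conditional mean u_hat always exceeds µ∗i, and the efficiency score lies strictly between zero and one.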


As reported in Kumbhakar and Lovell [2000], the mean is used more frequently than the mode. Finally, once point estimates of ui are obtained, estimates of the technical efficiency score of each producer can be derived as in the case of FE models:

e_i^RE = exp(−ui)   (29)

Considering other distributions for the inefficiency error term involves the same process to obtain ui; of course the NTN requires more computation, essentially the price we pay for its flexibility. The main advantage of RE models over FE models, and in a certain sense the main reason for considering them, is the possibility of having time-invariant inputs among the regressors. Moreover, under assumptions a1 to a9, RE models are more efficient than FE models; but, as always stressed in the classical panel data literature, if any of the three extra assumptions (a7-a9) required by RE models fails to hold, neither FGLS nor MLE is consistent. The assumption of independence between the inefficiency error term and the inputs is indeed very strong and should be tested through the Hausman test, or its robust version in case we do not have conditional homoscedasticity and/or a correct dynamic specification. Similarly, the joint hypothesis that the effects are uncorrelated with the regressors and that the distributional assumptions are correct can be tested through a Hausman test based on the difference between the MLE and WG estimators. In general, the nature of our dataset should guide the choice of the most suitable estimator among WG/LSDV, FGLS, and MLE. In long panels, where T → ∞ and N is fixed, WG/LSDV is the better choice; in short panels, where N → ∞ and T is fixed, FGLS is clearly preferred, provided that independence between the inefficiency error term and the regressors holds. Then, at the cost of more computation, if we are prepared to make distributional assumptions, MLE is more efficient (and consistent in either N or T). A strong argument against MLE in the case of short panels, however, is that as N increases the variance of the conditional mean or mode of g(ui|ǫi) for each producer does not go to zero, so the technical efficiency of producers cannot be estimated consistently.

3.2.3 Some problems

The failure of the conditional homoscedasticity assumption. In the case of FE models we can have heteroscedasticity only in the idiosyncratic error term, as a result of the failure of assumption a1; in this case the variance can vary across individuals and vit ∼ i.i.d.(0, σ²vi). WG and LSDV are still unbiased, and both the technology parameters and the producer-specific intercepts can be consistently estimated under the same set of assumptions a2 to a5. In this case, robust estimators of the variance-covariance matrix, such as the White covariance estimator, are required for hypothesis testing. An alternative to the WG/LSDV approach that accounts for heteroscedasticity in v is the FGLS approach, which does not require distributional assumptions for the error components and is more efficient; of course, we need to be sure that assumptions a7-a9 hold. In the case of RE models we can also have heteroscedasticity in both error components, when both assumptions a1 and a8 are not satisfied. In this case neither FGLS nor MLE is practical when N is large, because we would have to estimate N variances, causing an incidental parameter problem. A possible solution, as suggested by Kumbhakar and Lovell [2000], is to use the method of moments estimator. Therefore, in the case of FE models the failure of the conditional homoscedasticity assumption is much less problematic than in RE models, since it is related only to the idiosyncratic error component and can be handled easily. This is, obviously, another point in favour of FE models.


The failure of the weak exogeneity assumption in the case of endogenous inputs. In the case of FE models, assumption a4 fails when the deterministic component of the production function is not correctly specified because there are omitted inputs or measurement errors in the inputs. Collecting better data is the first way to solve the problem. If the data cannot be improved, the only solution is to use instrumental variables for the endogenous inputs. In the case of RE models, the problem is twofold: apart from the failure of assumption a4, assumption a7 may also fail when independence between the inefficiency error component and the inputs does not hold. To re-establish assumption a4 one can use the same methods valid in the case of FE models, and to re-establish assumption a7 one can use the Mundlak [1978] and Chamberlain [1980] approach or, under stricter assumptions, the Hausman and Taylor [1981] method. Again, FE models seem to be less problematic than RE models.

Distinction between unobserved heterogeneity and unobserved inefficiency. One of the major pitfalls of the panel data approach in SFM is that both RE and FE models force any time-invariant cross-unit heterogeneity into the same term used to capture the inefficiency [Greene, 2003, 2005b]. There is therefore the risk of picking up "unobserved heterogeneity" in addition to, or even instead of, inefficiency. One of the possible solutions, suggested by Greene [2005b], is a modified version of the traditional FE model, called the True Fixed Effects model (TFE). In this new formulation, the model in (16) becomes:

yit = αi + x′it β + vit − ui   (30)

where ui is the stochastic inefficiency error term, specified under the same distributional assumptions made in the case of RE models, whereas the additional αi is intended to capture the unobserved heterogeneity; this amounts simply to adding a full set of producer-specific dummies to the RE model, which is then estimated by MLE. The TFE model places the unmeasured heterogeneity in the production function, generating a neutral shift of the function specific to each producer. Alternatively, a set of producer-specific dummies can be placed in the mean of ui, which is then specified as:

ui = δ0 + d′i δ   (31)

The incidental parameter problem and the possible overspecification of the model are the two main problems of this approach; we need T → ∞. As reported by Greene [2003], the incidental parameter problem is a persistent bias that arises in nonlinear fixed effects models when the number of periods is small (five is small). This bias has been widely documented for binary choice models but not systematically examined in SFM. Greene [2005a] uses a Monte Carlo simulation to show that the incidental parameter problem is much less serious than one might expect: the coefficients appear biased, but far less so than in binary choice models, and the estimated inefficiencies appear to be only slightly biased. There is, however, another big problem related to this new formulation of the FE model: now no time-invariant factor can affect the inefficiency component of the model at all. This extreme solution generates, in some sense, a diametrically opposed problem. Therefore an effective solution for disentangling inefficiency from unobserved heterogeneity still requires further research.


References

Afriat, S. N. (1972). Efficiency estimation of production functions. International Economic Review, 13(3):568–598.

Aigner, D. L., Lovell, C. K., and Schmidt, P. (1977). Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6(1):21–37.

Ali, A. I. and Seiford, L. M. (1993). The mathematical programming approach to efficiency analysis. In Fried, H. O., Lovell, C. A. K., and Schmidt, S. S., editors, The Measurement of Productive Efficiency, chapter 3, pages 120–159. Oxford University Press.

Chamberlain, G. (1980). Analysis of covariance with qualitative data. Review of Economic Studies, 47:225–238.

Charnes, A., Cooper, W. W., and Rhodes, E. (1978). Measuring the efficiency of decision-making units. European Journal of Operational Research, 2(6):429–444.

Coelli, T. J., Rao, D. S. P., O'Donnell, C. J., and Battese, G. E. (2005). An Introduction to Efficiency and Productivity Analysis. Springer, second edition.

Debreu, G. (1951). The coefficient of resource utilization. Econometrica, 19(3):273–292.

Deprins, D., Simar, L., and Tulkens, H. (1984). Measuring labor-efficiency in post offices. In Marchand, M., Pestieau, P., and Tulkens, H., editors, The Performance of Public Enterprises. North-Holland, Amsterdam.

Farrell, M. J. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society, 120(3):253–281.

Gabrielsen, A. (1975). On estimating efficient production functions. Chr. Michelsen Institute, Department of Humanities and Social Science, Working Paper No. A-35.

Greene, W. H. (1980). Maximum likelihood estimation of econometric frontier functions. Journal of Econometrics, 13(1):27–56.

Greene, W. H. (1997). Frontier production functions. In Pesaran, M. H. and Schmidt, P., editors, Handbook of Applied Econometrics, volume II: Microeconometrics. Blackwell Publishers Ltd.

Greene, W. H. (2003). Distinguishing between heterogeneity and inefficiency: Stochastic frontier analysis of the World Health Organization's panel data on national health care systems. Department of Economics, Stern School of Business, New York University.

Greene, W. H. (2005a). Fixed and random effects in stochastic frontier models. Journal of Productivity Analysis, 23:7–32.

Greene, W. H. (2005b). Reconsidering heterogeneity in panel data estimators of the stochastic frontier model. Journal of Econometrics, 126:269–303.

Hausman, J. A. and Taylor, W. E. (1981). Panel data and unobservable individual effects. Econometrica, 49:1377–1399.

ISSiRFA-CNR (1982-2008). La finanza regionale - anni vari. In Buglione, E., editor, Osservatorio Finanziario Regionale. Franco Angeli, Milano.

ISTAT (2006). Conti ed aggregati economici delle Amministrazioni pubbliche, Serie SEC95 - anni 1980-2005. Roma. www.istat.it.

ISTAT (2008). Health For All 2008. ISTAT, Roma. www.istat.it.

Jondrow, J., Lovell, C. K., Materov, I. S., and Schmidt, P. (1982). On the estimation of technical inefficiency in the stochastic frontier production function model. Journal of Econometrics, 19(2/3):233–238.

Koopmans, T. C. (1951). An analysis of production as an efficient combination of activities. In Koopmans, T. C., editor, Activity Analysis of Production and Allocation. John Wiley and Sons, Inc.

Kumbhakar, S. C. and Lovell, C. K. (2000). Stochastic Frontier Analysis. Cambridge University Press.

Lovell, C. K. (1993). Production frontiers and productive efficiency. In Fried, H. O., Lovell, C. A. K., and Schmidt, S. S., editors, The Measurement of Productive Efficiency, chapter 1, pages 3–67. Oxford University Press.

Meeusen, W. and van den Broeck, J. (1977). Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review, 18(2):435–444.

Mundlak, Y. (1978). On the pooling of time-series and cross-section data. Econometrica, 46:69–86.

Pestieau, P. and Tulkens, H. (1990). Assessing the performance of public sector activities: Some recent evidence from the productive efficiency viewpoint. Discussion Paper No. 9060, CORE, Université Catholique de Louvain.

Pitt, M. M. and Lee, L.-F. (1981). The measurement and sources of technical inefficiency in the Indonesian weaving industry. Journal of Development Economics, 9:43–64.

Richmond, J. (1974). Estimating the efficiency of production. International Economic Review, 13(2):515–521.

Schmidt, P. and Sickles, R. C. (1984). Production frontiers and panel data. Journal of Business & Economic Statistics, 2:367–374.

Shephard, R. W. (1953). Cost and Production Functions. Princeton University Press.

Shephard, R. W. (1970). Theory of Cost and Production Functions. Princeton University Press.

SISTAN (2006). Regional Public Accounts. Roma. www.dps.mef.gov.it/cpt-eng/cpt.asp.

Stevenson, R. E. (1980). Likelihood functions for generalized stochastic frontier estimation. Journal of Econometrics, 13(1):58–66.

Winsten, C. B. (1957). Discussion on Mr. Farrell's paper. Journal of the Royal Statistical Society, Series A, 120:282–284.

Worthington, A. C. and Dollery, B. (2000). An empirical survey of frontier efficiency measurement techniques in local government. Local Government Studies, 26(2):23–52.
