Crystal Blue Consulting Ltd

Review of Specific Cost Approach to Staff Market Forces Factor Report to the Department of Health

Crystal Blue Consulting Ltd, York University and City University

May 2007

REVIEW OF SPECIFIC COST APPROACH TO STAFF MARKET FORCES FACTOR

Study Commissioned by:

Department of Health

Authors:

Dr Tessa Crilly * Mr John Crilly * Mrs Margaret Conroy * Professor Roy Carr-Hill ** Professor David Parkin ***

* Crystal Blue Consulting Ltd ** Centre for Health Economics, York University *** City Health Economics Centre, City University

Report Date:

Study Concluded: November 2006. Report: 16th May 2007.

Acknowledgements We wish to offer our sincere thanks for the support and advice provided by the Department of Health’s Project Group, including Bridget Carolan, Keith Derbyshire, Francis Dickinson, Michael Haslam, Lorraine Middlemas, Eileen Robertson, Helen Strain and Carl Vincent. We are also grateful to Professor Carol Propper for her helpful comments on the report. Finally, we would like to thank the finance, HR and nursing directors and staff of the 14 Reference Panel trusts who made an invaluable contribution to this study.

Contents

List of Tables  4
List of Figures  8
List of Appendices  9

Executive Summary  10

SECTION A. SUMMARY: BACKGROUND, KEY FINDINGS & CONCLUSIONS  15
1. Background  15
2. Summary of Findings and Conclusions  22

SECTION B. THE MICRO STUDY  44
3. Design of the Micro Study  44
4. General Ledger  47
5. Payroll Analysis  63
6. Qualitative Survey  72
7. Survey Perceptions and National Data  88
8. Reference Panel  96

SECTION C. NATIONAL DATA SETS  101
9. Review of Ward Nurse Staffing  101
10. Medical Staffing  120
11. Specialty Analysis  146
12. All Staff Data Base  158

SECTION D. ECONOMETRIC MODELLING  179
13. Econometric Approaches: Theory-Driven  179
14. Econometrics: Hypothesis & Empirically-Driven  196

Appendices  220
Glossary  300
References  302


List of Tables

Chapter 1 Background
1.1 2006 Impact of Staff MFF (£000s)
1.2 PCTs with Largest Percentage Movement Effected by the Staff MFF
1.3 History of MFF
1.4 Data Sets Used in the Staff MFF Study

Chapter 2 Summary of Findings and Conclusions
2.1 Nursing Staff: Average of Leaver Rates (Census 2003 and 2004, n = 172)
2.2 Ward Nurse Vacancies (HCC Dataset, n=165) 2004/5
2.3 % of Total Wage Bill (HCC Dataset, n=165) 2004/5
2.4 London Uplift Comparing Bank & Agency as % of Gross Pay (n=9)
2.5 Summarising Variation in Cost per wte
2.6 Summary Variation in wte per Workload Measure

Chapter 3 Design of the Micro Study
3.1 Comparison of National and Micro Sample MFF Range
3.2 Sample of 14 Trusts in the Micro Study

Chapter 4 General Ledger
4.1 Distribution of Actual wte and Costs Across the Sample
4.2 R2 Measure between Variables & MFF at Staff Group Level
4.3 Unit Cost Ratios Ranked by Actual Cost per 1,000 Admissions – All Staff
4.4 Summary of Spatial Variation in Unit Cost Ratios
4.5 R2 of MFF Index & Budgeted Cost per WTE by Grade
4.6 R2 of MFF Index and Proportion at Each Grade
4.7 Productivity Ratios for Consultant Staff – Based on Unweighted Workload
4.8 Productivity Ratios for Consultant Staff – Based on Workload Weighted for Complexity
4.9 Midwifery Costs
4.10 Relationship Between MFF and Range of Variables
4.11 Maternity HRG Codes and National Average Unit Cost Weightings
4.12 Fit (R2) between MFF and HRG and General Ledger Unit Costs
4.13 Output from Investigation into Orthopaedic Costs within General Ledgers

Chapter 5 Payroll Analysis
5.1 Submitted Payrolls
5.2 Summary of Total Wage Cost
5.3 Bank & Agency and Total Wage Bill
5.4 Average Basic Pay per Worked WTE
5.5 Average Basic Pay as a % of Gross Pay
5.6 Interpolating Gross Pay Using Difference in Basic and Basic as a Percentage of Gross Pay
5.7 Geographical Allowances as a % of Gross Pay
5.8 Overtime as a % of Gross Pay
5.9 Other Allowances as a % of Gross Pay
5.10 Bank and Agency as a % of Gross Pay
5.11 Proportion of Non Clinical Worked WTE
5.12 Staff (Employee Headcount) Turnover by Trust

Chapter 6 Qualitative Survey
6.1 Turnover
6.2 Vacancies
6.3 Overtime, Bank and Agency


Chapter 7 Survey Perceptions and National Data
7.1 Average Annual Change for Selected Periods
7.2 Nursing Staff: Leavers
7.3 Nursing Staff: Joiners
7.4 Average Age of Qualified Nurses
7.5 Cumulative Age Distribution of Qualified Nurses
7.6 Summary of Average Ages Across the Non-Medical Workforce
7.7 Movement in House Prices (Extract from Halifax Survey 2006)

Chapter 8 Reference Panel
8.1 Balance of Perceptions
8.2 Positive Perceptions of the MFF
8.3 Negative Perceptions of the MFF
8.4 Neutral Perceptions of the MFF

Chapter 9 Review of Ward Nurse Staffing
9.1 HCC Dataset Overview
9.2 A Description of the Data Set by Quintile
9.3 Geographic Variation in Total Wage Bill and its Components
9.4 Wage Cost per Non Standardised WTE
9.5 Wage Cost per Standardised WTE
9.6 The Grade Mix Element
9.7 Grade Mix Index by Trust Type and Complexity
9.8 Standardised Grade E WTE per Bed
9.9 Total Standard E WTE per 100 Admissions (Volume Variance A)
9.10 Number of Wards by Trust Type and Location
9.11 Total Std E WTE per 100 Complexity Adjusted Admissions (CAA) – Specialist
9.12 Total Std E WTE per 100 Complexity Adjusted Admissions (CAA) – Teaching
9.13 Total Std E WTE per 100 Complexity Adjusted Admissions (CAA) – Acute
9.14 Adjustments Made to Ratios
9.15 Recalculation of Quintile Cover Ratios – Volume Variance B
9.16 Recalculation of Quintile Cover Ratios – Volume Variance C
9.17 SCA A Index (Std E Price Variance plus 35% Volume Variance)
9.18 SCA B Index (Std E Price Variance plus 20% Volume Variance)
9.19 SCA C Index (Std E Price Variance plus 15% Volume Variance)
9.20 Ward Nurse Vacancy (Establishment – In-post) Rates
9.21 Career Grade per Nurse wte
9.22 Doctor per Nurse wte
9.23 Total Wage Bill per wte
9.24 Total Wage Bill per Std E wte

Chapter 10 Medical Staffing
10.1 Distribution of Trusts
10.2 Balance Between London and Non-London
10.3a Summary Data for All Trusts
10.3b Productivity Ratios for All Trusts
10.4a Staffing and Workload Summary for Acute Hospitals
10.4b Productivity Ratios for Acute Hospitals
10.5a Staffing and Workload Summary for Teaching Hospitals
10.5b Productivity Ratios for Teaching Hospitals
10.6a Staffing and Workload Summary for Specialist Hospitals
10.6b Productivity Ratios for Specialist Hospitals
10.7a Wte Grade Mix in All Trusts
10.7b % Grade Mix in All Trusts
10.8a Wte Grade Mix in Acute Trusts


10.8b % Grade Mix in Acute Trusts
10.9a Wte Grade Mix in Teaching Hospitals
10.9b % Grade Mix in Teaching Hospitals
10.10a Wte Grade Mix in Specialist Hospitals
10.10b % Grade Mix in Specialist Hospitals
10.11 Medical Model – Current Position – Workforce in 173 Trusts at Sept 2004
10.12 Medical Model – Current Position – Grade Relationships
10.13 Medical Model – Current Position – Workload
10.14 Medical Model – Current Position – Doctor:Workload Ratios
10.15 Medical Model Results
10.16 Average of Medical 3 Month Vacancy Rate at 31st March 2005 by Quintile
10.17 Average of Medical 3 Month Vacancy Rate at 31st March 2005 by Hospital Type
10.18 Correlation Between Vacancies, Staff MFF and Hospital Type
10.16 Medical Model A – Current Baseline Position
10.17 Medical Model B – Peer Group Average
10.18 Medical Model C – No Constraint
10.19 Medical Model C – Constrained

Chapter 11 Specialty Analysis
11.1 Distance Between Trust and Cost Based on National Average Unit Costs (Expressed as % of National Average for Trust)
11.2 Ranking Comparison of R2 Statistic in National Samples (n=c.170)
11.3 Cost Differences as % of Notional National Averages for Trusts
11.4 (A) Dependent Case Mix Adjusted Costs: Model Summary
11.4 (B) Dependent Case Mix Adjusted Costs: Coefficients
11.4 (C) Dependent Case Mix Adjusted Costs: Model Summary by Chapter
11.5 Chapter Details of Regression Models

Chapter 12 All Staff Data Base
12.1 Average Annual Wage Cost (£'000s) Per WTE
12.2 Weights Applied To WTE Figures For Each Trust
12.3 Volume Differences in the Number of WTEs Per Complexity Adjusted FCE Equivalent Patient (Scenario A: All Unavoidable)
12.4 Volume Differences in the Number of WTEs Per Complexity Adjusted FCE Equivalent Patient (Scenario B: Adjusted to Average Performance)
12.5 Volume Differences in the Number of WTEs Per Complexity Adjusted FCE Equivalent Patient (Scenario C: Adjusted to Best Performance)
12.6 Workload Ratios
12.7 Unadjusted Volume – Index SCA A
12.8 Volume Adjusted to Average for Type – Index SCA B
12.9 Volume Adjusted to Most Efficient for Type – Index SCA C
12.10 Comparison of SCA Indices with MFF – Price Variance 100% Unavoidable
12.11 Price Variance at 90% and Volume Variance at Average of SCA A & B
12.12 Summary of Variation Explained by Regression Models
12.13 Correlations between Variables
12.14 (A) Model Summary of Basic Regression
12.14 (B) Coefficients in Basic Regression
12.15 (A) Model Summary of Parsimonious Regression
12.15 (B) Coefficients of Parsimonious Regression
12.16 Coefficients of Parsimonious Regression with ACLE Local
12.17 Accounting for Variance in WTE per Occupied Bed with Size and Specialist and Teaching Status Dummies: Model Summary
12.18 (A) Residuals from Table 12.17: Model Summary
12.18 (B) Residuals from Table 12.17: Coefficients
12.19 Accounting for Variance in WTE per Complexity Adjusted FCE: Model Summary
12.20 (A) Residuals from Table 12.19: Model Summary


12.20 (B) Residuals from Table 12.19: Coefficients
12.21 Accounting for Variation in Unit Labour Cost: Model Summary
12.22 (A) Residuals from Table 12.21: Model Summary
12.22 (B) Residuals from Table 12.21: Coefficients
12.23 Accounting for Variance in Total Wage Cost per WTE: Model Summary
12.24 (A) Residuals from Table 12.23: Model Summary
12.24 (B) Residuals from Table 12.23: Coefficients

Chapter 13 Econometric Approaches: Theory-Driven
13.1 Variables Used
13.2 Comparison of Models: MFF and Locality
13.3 Residuals by Location of Trust
13.4 Comparison of Models: Case Mix Adjustments
13.5 HRG Model
13.6 CMAC/CMAO Stochastic Frontier Model
13.7 HRG Stochastic Frontier Model

Chapter 14 Econometrics: Hypothesis & Empirically-Driven
14.1 Summary of Model Performance
14.2 Summary of Variables Entering the Models
14.3 ULC MODEL ONE Starting and Ending Coefficients
14.4 ULC MODEL ONE Include Inner and Outer London Dummies
14.5 ULC MODEL TWO Starting and Ending Coefficients
14.6 ULC MODEL TWO Bringing Back Staff MFF and London Dummies
14.7 RCI MODEL ONE Starting and Ending Coefficients
14.8 RCI MODEL ONE Include Inner and Outer London Dummies and Empirical Parsimonious Model
14.9 RCI MODEL TWO Starting and Ending Coefficients
14.10 RCI MODEL TWO Bringing Back Staff MFF and London Dummies
A.14.II.1 Descriptives
A.14.II.2 Correlations n = 173
A.14.III.1 ULC MODEL ONE Starting Coefficients
A.14.III.2 ULC MODEL ONE Ending Equation
A.14.III.3 ULC MODEL ONE Ending Equation plus London Dummies
A.14.III.4 Empirical Parsimonious Model
A.14.III.5 ULC MODEL TWO Starting Coefficients
A.14.III.6 ULC MODEL TWO Ending Model
A.14.III.7 ULC MODEL TWO Final Equation plus Staff MFF
A.14.III.8 ULC MODEL TWO Bringing Back Staff MFF and London Dummies
A.14.III.9 ULC MODEL TWO Empirical Parsimonious Model
A.14.III.10 RCI MODEL ONE Starting Coefficients
A.14.III.11 RCI MODEL ONE Ending Coefficients
A.14.III.12 RCI MODEL ONE Include London Dummies
A.14.III.13 RCI MODEL ONE: Empirical Parsimonious Model
A.14.III.14 RCI MODEL TWO Starting Coefficients
A.14.III.15 RCI MODEL TWO Ending Coefficients
A.14.III.16 RCI MODEL TWO Bringing Back In Staff MFF
A.14.III.17 RCI MODEL TWO Bringing Back In Staff MFF and Inner & Outer London Dummies
A.14.III.18 RCI MODEL TWO: Empirical Parsimonious Model


List of Figures

Chapter 2 Summary of Findings and Conclusions
2.1 Reviewing the Specific Cost Approach to the Staff MFF
2.2 Index Based on Price & Volume Variance
2.3 Staff MFF Index and Trust HRG Cost Index by Quintile Around a Base of 1
2.4 Staff MFF and Trust HRG Cost Index by Individual Trust

Chapter 4 General Ledger
4.1 Hospital Model Linking Cost Type with Clinical Area and Staff Group

Chapter 8 Reference Panel
8.1 Reference Panel Feedback: Suggested Reasons for Geographical Variation in Use of Bank, Agency and Overtime
8.2 Reference Panel Feedback: Explaining Productivity Differences
8.3 Reference Panel Feedback: Key Unavoidable Cost Drivers

Chapter 9 Review of Ward Nurse Staffing
9.1 A Comparison of the Three Specific Cost Indexes with the Staff MFF

Chapter 10 Medical Staffing
10.1 Productivity Ratios for All Trusts (Scaled to Base 1 at Quintile 1)
10.2 Productivity Ratios for Acute Hospitals (Base 1 at Quintile 1)
10.3 Productivity Ratios for Teaching Hospitals (Base 1 at Quintile 2)
10.4 Productivity Ratios for Specialist Hospitals (Base 1 at Quintile 1)
10.5 Assumptions Driving the Medical Model

Chapter 11 Specialty Analysis
11.1a Staff MFF and Trust HRG Cost Index by Individual Trust (All)
11.1b Staff MFF and Trust HRG Cost Index by Trust (Acute)
11.1c Staff MFF and Trust HRG Cost Index by Trust (Teaching)
11.1d Staff MFF and Trust HRG Cost Index by Trust (Specialist)
11.2 Summarising Trust Type and HRG Cost Index by Quintile
11.3 Staff MFF Index and Trust HRG Cost Index by Quintile Around a Base of 1

Chapter 12 All Staff Data Base
12.1 Data Sets Employed
12.2 Defining the Staffing and Workload Measures
12.3 Staff MFF & SCA Indices A, B & C
12.4 Staff MFF and SCA D

Chapter 13 Econometric Approaches: Theory-Driven
13.1 Average Cost by Activity, by Hospital Type & Effect of Case Mix Adjustment


List of Appendices (Prefixed with Chapter Number)

1 The Calculation of the Complexity Index
4.1 Unit Costs Based on General Ledger Data
4.2 Maternity Activity and Weighting Factors in the HRG Data Set
4.3 General Ledger Field Data Specification
4.4 HES Activity Summary 2004/5
4.5 Budgeted Cost per Medical wte by Grade
4.6 Nursing Costs and Activity in Midwifery
4.7 The Whole Budget Described as ‘Maternity’ by Trusts
4.8 Medical Staff in Obstetrics & Gynaecology
4.9 The Whole Obstetrics & Gynaecology Direct Care Budget
5.1 Payroll Data Specification
5.2 A Sample of Pay Classifications from Three Trusts
5.3 Build of Wage Costs 2004/5 by Trust
5.4 Proportional Breakdown of ‘Other’ Payments
9.1 The Standardisation of WTEs to Grade E
9.2 Quality of Care
9.3 The Calculation of the Complexity Index (abridged)
9.4 Estimating the ‘Grade Drift’ Effect
9.5 Std E WTE per 100 Complexity Adjusted Admissions by Quintile
9.6 Multivariate Regression Results for HCC Ward Nursing Data Set
10.1 Medical Staffing Grade Structure – Micro Sample
10.2 Payroll Costs – Micro Sample
10.3 Medical School Intakes and Statistics 2005
11.1 Calculating the Staff MFF for Each Quintile Using a Variety of Formulations
11.2 Distance Between Trust Cost and Benchmark (Based on National Average Unit Cost) for Trust (Including Excess Bed Days, Expressed as Percentage of Benchmark National Average)
11.3 Cost Differences as % of National Averages for Trusts by Specialty
12.1 The Number of WTEs and Total Wage Cost per Staff Group
12.2 Benchmarking the All Staff Database (Unweighted WTEs)
12.3 Staff MFF as Dependent Variable – An Alternative Model Looking at Spatial Non-Economic Variables


Staff Market Forces Factor – Review of the Specific Cost Approach

Executive Summary

PURPOSE OF THE STUDY

The Market Forces Factor (MFF) is used to adjust funding in the NHS for unavoidable geographical cost variations. The staff component is the largest of the three components (staff, land and buildings) which form the MFF. It is currently calculated using the general labour market (GLM) approach, which measures standardised spatial wage differentials (SSWDs) in the private sector. An alternative, more direct way of estimating staff cost differentials would be a Specific Cost Approach (SCA), based on the actual costs incurred by NHS trusts. A consortium led by Crystal Blue Consulting Ltd, in partnership with the Centre for Health Economics (York University) and City University, was commissioned by the Department of Health to examine the SCA. The study ran from January to November 2006. The research brief was to study the size, variation and drivers of unavoidable NHS costs and their relationship to the current staff MFF by identifying:

1. Spatial variation in the costs of providing services in different labour markets;
2. Avoidable and unavoidable components of higher costs;
3. The feasibility of implementing SCA as an alternative to the GLM method of calculating the staff MFF.
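To make the GLM concept above concrete, the following minimal sketch (ours, not the Department's estimation code) shows the basic idea behind a standardised spatial wage differential: regress log private-sector wages on worker characteristics plus an area indicator, so that the area coefficient measures the wage gap with workforce composition held constant. All values and variable names are hypothetical, and the real GLM calculation uses large survey datasets with many more controls.

    # Illustrative sketch of a standardised spatial wage differential (SSWD).
    import numpy as np

    log_wage = np.array([2.30, 2.45, 2.60, 2.55, 2.70, 2.85])   # hypothetical log hourly pay
    years_exp = np.array([2.0, 5.0, 9.0, 2.0, 5.0, 9.0])        # worker characteristic (control)
    high_cost_area = np.array([0, 0, 0, 1, 1, 1])               # 0 = reference area, 1 = high-cost area

    # Least-squares fit of log wages on an intercept, experience and the area dummy.
    X = np.column_stack([np.ones_like(years_exp), years_exp, high_cost_area])
    beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)

    # The exponentiated area coefficient is the standardised wage relativity for that area.
    print(f"Standardised differential for the high-cost area: {np.exp(beta[2]) - 1:.1%}")   # ~28%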

CONTEXT

The staff MFF acts as a redistributive mechanism, moving funds from areas with lower-cost labour markets towards areas with higher-cost environments. 1 Its purpose is to ensure that the same level of service can be provided anywhere in the NHS. The GLM-based staff MFF has been subject to considerable criticism of both its rationale and its impact. NHS managers have questioned the logic of using private sector pay differentials to represent NHS cost variation. The current (2005/6) range, from 0.85 to 1.28, represents a 51% increase from the lowest to the highest MFF score nationally. The magnitude of the gradient is thought to be unjustified and inequitable in the NHS, where staff are employed according to a national wage structure with limited geographical allowances. A further source of inequity is attributed to the ‘cliff edge’ effect, where adjacent trusts are located in separate pay zones and attract a different MFF score.

The GLM method has been used to calculate the staff MFF ever since it was introduced following recommendations to the Resource Allocation Working Party (RAWP) in 1980. At first it included only administrative and clerical staff, ancillary workers and unqualified nurses. A further study in 1993 widened its scope to include all non-medical professional staff. From 2003/4 the staff MFF has included the whole workforce, giving it the maximum possible coverage (i.e. 67% of HCHS resources 2). Detailed reviews of the methodology underpinning the staff MFF were undertaken in 1988 and 1995. On both occasions, the possibility of replacing GLM with SCA was considered and rejected.

A fresh look at SCA is timely, in the light of changes to funding flows through Payment by Results (PbR) and improved availability of NHS data over the last decade. Prior to PbR, the MFF was paid direct to Primary Care Trusts (PCTs) as an adjustment to the resource allocation formula. The rolled-out introduction of PbR involves a provider-specific MFF adjustment to the national tariff which has a direct and highly visible impact on trusts’ incomes. This study represents the most detailed attempt to date to investigate the Specific Cost Approach in relation to an area cost adjustment, in the NHS or in government services generally.

1 In 2005/6 it moved £1.149 billion of target allocation (2% of the £59 billion hospital and community health services (HCHS) allocation), primarily towards London and the South-East.
2 The MFF is weighted according to national average expenditure shares, i.e. 67% staff, 5% buildings, 1% land; the remaining 27% running costs are assumed not to vary across the country (Department of Health, 2005c).

STUDY DESIGN

We carried out analyses of national datasets on NHS costs, together with in-depth investigation of a micro study of trusts. The base year was 2004/5. The micro study sample was selected according to three criteria: (i) adjacencies between trusts; (ii) MFF ranking to gain a spread of low – medium – high, with a sample range of 0.8640 – 1.2799 compared to a national range of 0.8514 – 1.2826; (iii) pragmatic factors of access and motivation among trusts to participate. This selection resulted in a cluster of three trusts in the south west, a cluster of five in London and six non-adjacent trusts in the south and north of England.

The micro study comprised two research strands:
a) A qualitative study using a questionnaire tool that was administered in March-April 2006 to directors of human resources (HR) and nursing, through either face-to-face or telephone interviews. This survey addressed recruitment, retention and other labour market factors for each staff group.
b) Quantitative analysis of data supplied by finance directors, from both the trust payroll and general ledger systems.

Directors of HR, nursing and finance were invited to a Reference Panel in May 2006 to receive and critique preliminary findings. The Panel provided a steer for the second half of the project, which focussed on macro data sets. The macro studies involved six distinct data sets, the first four of which were analysed arithmetically (by MFF quintile 3) and statistically 4:

1) Healthcare Commission (HCC) 2004/5 data on nurse staffing and costs in 4,435 wards located in acute hospitals across England, in conjunction with quality and workload indicators, and non-NHS data;
2) The medical staffing census September 2004 and a measure of workload derived from trust activity data;
3) HRG costs, grouped into specialties (chapters);
4) An ‘all staff database’ for 127 hospital trusts 5 in England based on September 2004 census returns, complexity-adjusted patient activity 2004/5 and staff cost data taken from Trust Financial Returns (TFR3A) 2004/5;
5) The fifth and sixth data sets were bespoke and provided by the Department of Health. All data was for 2004/5 and included, for example, the Reference Cost Index (RCI), Unit Labour Cost (ULC) and Hospital Episode Statistics.

3 Quintile 1 (Q1) contained the 20% of hospitals with the lowest MFF and quintile 5 (Q5) contained the 20% with the highest MFF scores.
4 Statistical analysis used multivariate regression modelling to isolate the impact of variables.
5 Reduced from 173 hospital trusts due to problems of data quality and availability.
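As a concrete illustration of the quintile grouping used throughout the analysis (a sketch of our own, not the study's code): trusts are ranked by their staff MFF score and split into five equal groups, so quintile 1 holds the 20% of trusts with the lowest scores and quintile 5 the 20% with the highest. The trust names and MFF values below are hypothetical.

    # Minimal sketch: assign trusts to staff MFF quintiles (1 = lowest 20%, 5 = highest 20%).
    def assign_mff_quintiles(trusts):
        """trusts: list of (name, staff_mff) pairs -> dict mapping name to quintile 1-5."""
        ranked = sorted(trusts, key=lambda t: t[1])                # rank by staff MFF, ascending
        n = len(ranked)
        return {name: (i * 5) // n + 1 for i, (name, _) in enumerate(ranked)}

    # Hypothetical trusts spanning roughly the 2005/6 national range (0.85 - 1.28).
    sample = [("Trust A", 0.86), ("Trust B", 0.93), ("Trust C", 1.00),
              ("Trust D", 1.09), ("Trust E", 1.27)]
    print(assign_mff_quintiles(sample))   # {'Trust A': 1, ..., 'Trust E': 5}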

RESULTS

Results are summarised against the three research questions of (1) spatial variation, (2) avoidable and unavoidable cost differences and (3) feasibility of adopting SCA instead of GLM as a method of calculating the staff MFF.

Spatial Variation

Key Finding: (1) Spatial variation in staff costs reflected the patterning of the staff MFF. The variation was a combination of price (pay) and volume (productivity) differentials.

Labour market theory predicts that it is more expensive to employ staff in high cost areas, such as London, and less expensive in lower cost areas. Wage variability is limited in the NHS, so in high/low MFF areas trusts will be paying below/above the going rate respectively. The effect will be higher turnover, vacancies and lower productivity (due to poorer quality of inputs) in high MFF areas, and vice versa. We found that the prediction held in relation to non-medical staff groups. The study of ward nurses (165 trusts) identified a price differential 6 between quintiles 5 and 1 of 18.3% 7. The micro study of payroll data (9 trusts) produced broadly similar results. The volume difference was wider, with 35% more nurses per unit of output 8 employed by trusts in Q5 (high MFF) compared to Q1 (low MFF). Use of bank/agency staff and patient throughput accounted for the difference.

The costs of medical staff showed no clear spatial patterning at individual grade level. The mix of grades varied, with a higher proportion employed as registrars in London and more employed as staff grades in low MFF areas. Vacancies, an indicator of labour market activity, were consistently low for medical staff, but higher in Q1 than in Q5, in contrast to nursing staff. 9 The volume or productivity differences that we observed in other groups were, however, evident among medical staff. The number of doctors per FCE was 46% higher among Q5 than Q1 trusts 10.

6 All quintile comparisons are between quintile mid-points.
7 Comprising 10.7% geographical variation, 6.3% grade mix associated with caseload complexity, and 1.3% grade drift.
8 Standardised grade E per 100 complexity adjusted admissions.
9 The England 3-month medical vacancy rate at 31st March 2005 was 2.7%. Quintile 1 was 3.5% compared with 2.2% in quintile 5 (mainly London), due to vacancies being lower in London teaching hospitals.
10 The number of doctors per FCE was 73% higher among quintile 5 trusts than quintile 1 trusts. The gap reduced to 46% after adjusting workload for complexity, volumes of A&E and outpatient attendances.
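To show how the price and volume differentials quoted above combine, here is a purely illustrative calculation of our own, not a figure reported by the study. When the volume measure is staff per unit of output, the implied unit labour cost ratio is simply the product of the two.

    # Illustrative only: combining the quoted Q5:Q1 ward nursing differentials.
    price_ratio = 1.183    # ~18.3% higher wage cost per standardised WTE in Q5 vs Q1
    volume_ratio = 1.35    # ~35% more standardised WTEs per unit of output in Q5 vs Q1

    unit_labour_cost_ratio = price_ratio * volume_ratio   # (cost/WTE) x (WTE/output) = cost/output
    print(f"Implied Q5:Q1 unit labour cost ratio: {unit_labour_cost_ratio:.2f}")   # ~1.60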

Avoidable and Unavoidable Components

Key Finding: (2) Avoidable and unavoidable components of higher costs were virtually impossible to isolate with confidence, due to problems of determining cause and effect.

Several approaches, arithmetic and statistical, were adopted to identify avoidable and unavoidable costs. In considering spatial volume differences, the quintile analysis compared ratios of staff per unit of output within acute, teaching and specialist hospitals. These were fed into benchmarking scenarios that generated a price-volume index for comparison against the MFF index. Consistency between the two indexes implied that most of the resource variation associated with the staff MFF was unavoidable. The strength of the benchmarking approach lay in its clarity (e.g. in suggesting that the range of the staff MFF could be narrowed by two percentage points at the top and the bottom). Its weakness lay in the subjectivity needed to judge between ‘what is’ and ‘what should be’.

The medical staffing analysis considered how much scope there was to reduce geographical differences in the numbers of doctors employed. The role of education, training and regulation was identified as being important, both through distribution of junior doctor posts and the need for consultants to provide supervision. Training and supervision demands are largely unavoidable to trust management, leading us to estimate that only 1% of the overall medical workforce is linked to avoidable spatial productivity differences (equivalent to 3% of career grades).

The econometric studies revealed a greater degree of uncertainty in distinguishing between ‘explained’ (unavoidable) and ‘unexplained’ (potentially avoidable) variation in costs. One of the features of hospital cost data is that, within any spatial band (quintile), there is a mixture of high cost and low cost hospitals, appearing as “noise” in the econometric studies and giving a high level of unexplained variation, or residuals, at around 40%. Stochastic frontier analysis was used to find patterns in these residuals but did not identify any strong spatial dimension. The study, therefore, revealed no relative inefficiencies between trusts that could be explained by geography.

In conclusion, every approach that we used to separate out avoidable from unavoidable cost differentials was met by some logical, technical or data quality difficulty. Cause and effect were hard to distinguish, because we were analysing costs that were at least in part the product of resource allocation already shaped by the staff MFF. We did not find incontrovertible evidence of geographic avoidable cost differentials. Rather, the study produced a large amount of information that described the spatial differences in staff costs, and evidence for why a high proportion of that difference would be unavoidable.
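The benchmarking scenarios can be read as building an index from the full price differential plus a judged share of the volume differential (the table titles in Chapters 9 and 12 describe indices of this kind). The sketch below is our own reading of that construction, not the report's exact formula; the 35% share and the input ratios are illustrative assumptions.

    # Sketch of a benchmarking-style specific cost index: treat the whole price
    # differential as unavoidable, but only a judged share of the volume differential.
    def sca_index(price_index, volume_index, unavoidable_volume_share):
        return price_index * (1.0 + unavoidable_volume_share * (volume_index - 1.0))

    # Hypothetical quintile 5 index relative to quintile 1 = 1.00.
    q5 = sca_index(price_index=1.183, volume_index=1.35, unavoidable_volume_share=0.35)
    print(round(q5, 2))   # ~1.33; the resulting index is then compared with the staff MFF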

Feasibility

Key Finding: (3) Feasibility of implementing SCA as an alternative to the current GLM method was rejected on the grounds of cost, practicality, lack of unified methodology, and conceptual problems concerned with cause and effect. The GLM method was ultimately supported by the evidence from this review across all staff groups, with the exception of medical staffing.

We rejected the possibility of implementing SCA as an alternative to GLM on the basis of feasibility. Limiting factors include:

• Cost & Practicality – There does not exist a single nationally available data set that would lend itself to SCA. The databases tested in this review could not be reproduced routinely and efficiently at national level, and at local level were not sufficiently standardised.

• Methodology – There does not exist a unified, accepted methodology that points to how SCA should be undertaken.

• Concept – The methodological difficulties in distinguishing between avoidable and unavoidable costs were linked to conceptual difficulties in designing and implementing a Specific Cost Approach. This may be described as ‘circularity’, or the problem of distinguishing between cause and effect in the relationship between resource allocation and cost differentials.

• Appropriateness of GLM – The GLM method itself was ultimately supported by this review. The level of connectedness between the NHS and the private sector or ‘outside’ labour markets proved to be unexpectedly strong. The medical staff group was an exception to this.

Throughout the course of the study, we found mounting evidence supporting the use of general labour market measures as a proxy for NHS cost pressures. The differentials in cost of living, wage rates, house prices and other amenities are effectively summarised in the GLM formula. While it may be imperfect at the individual trust level (e.g. through cliff edges), it was not feasible to produce anything superior from directly observed specific cost data. Patchy data quality and availability and a lack of accepted methodology, together with logical and conceptual problems, provided a poor case for replacing GLM with an SCA version of the MFF. In support of GLM, we observed that non-medical staff groups respond to labour market signals, which impacts on trusts’ cost bases. Medical staff are a distinct group, for which we found a case for breaking the link between the GLM and medical staff costs.


SECTION A. REPORT SUMMARY: BACKGROUND, KEY FINDINGS & CONCLUSIONS

Section A describes the background to this review of the Specific Cost Approach to the Market Forces Factor and sets out the study’s key findings. It contains two chapters, serving as a summary of the whole report.

CHAPTER 1.

BACKGROUND

The Market Forces Factor (MFF) is used to adjust funding in the NHS for unavoidable geographical cost variations. Its purpose is to equalise purchasing power of commissioning PCTs by adjusting for spatial differences in provider costs. The rolled-out introduction of Payment by Results (PbR) involves a provider-specific MFF adjustment to the national tariff which, in contrast to the former adjustment to PCT allocations, has a direct and highly visible impact on trusts’ incomes.

Two methods of estimating geographical cost variations are available, namely the Specific Cost Approach (SCA) and the General Labour Market (GLM) approach. The MFF is currently calculated on the basis of the GLM approach; previous reviews have rejected the use of SCA (e.g. Wilson et al, 1996 and Elliott et al, 1996). A consortium led by Crystal Blue Consulting Ltd in partnership with Centre for Health Economics (York University) and City University has been commissioned by the Department of Health to examine the SCA in relation to the staff MFF.

The research brief was to study the size, variation and drivers in NHS unavoidable costs and their relationship to the current staff MFF. It translated into the following objectives, to:

(1) Illustrate the actual costs of providing services in different labour markets by describing cost variations in relation to the staff MFF;
(2) Attempt to isolate the avoidable and unavoidable components of higher costs;
(3) Explore the feasibility of implementing SCA as an alternative to the GLM method of calculating the staff MFF.

Defining the Staff MFF and its Impact

The staff MFF is one of three components of the MFF index used in the national HCHS resource allocation formula. The other two components are land and buildings, and all are combined into a single index, weighted according to national average expenditure shares, i.e. staff 67%, buildings 5%, land 1%; the remaining 27% running costs are assumed not to vary across the country (Department of Health, 2005c). Up until 2005/6, PCT target allocations have been determined through a formula that applies the MFF index to age and needs weighted populations. From 2006/7 onwards the MFF adjustment is to be paid direct to trusts as part of their income stream under PbR, with resulting non-recurrent adjustments to PCT allocations. Activity traded under the PbR tariff currently accounts for 40% of HCHS spending, and so the MFF becomes an explicit adjustment to trusts’ revenue. The land and building MFF adjustments are direct, taken from the Valuation Office Agency’s (VOA) valuation of the NHS estate and the Building Cost Information Service to the VOA, as a reflection of the actual cost of hospital land and construction. The staff MFF is indirect since it is based on observation of what private sector workers are paid, rather than the cost of the NHS workforce.

The staff MFF index used throughout this review is the 2005/6 index for England, which ranges from 0.85 (South West Peninsula) to 1.28 (London) nationally, amounting to a gap between the lowest and the highest of 51%. The staff MFF acts as a redistributive mechanism, taking monies away from low-cost areas and allocating them to high-cost areas. It covers all staff, equivalent to 67% of HCHS and 52% of PCT revenue allocations 11. Tables 1.1 and 1.2 summarise its effect in redistributing £1.149 billion (2% of the £59 billion HCHS allocation), two thirds of which (£748 million) goes to London. The South West Peninsula, Trent and Cumbria & Lancashire SHAs experience the largest percentage loss of funds. PCTs at the top of the MFF range gain 12%-13% in resource redistribution through the staff MFF.

11 67% staff expenditure weighting * HCHS allocation, which is 77% of PCT revenue allocations (Department of Health, 2005c).

Table 1.1 2006 Impact of Staff MFF (£000s)

SHA                                                  Net Change £000   %
North West London                                    +199,691          +9%
North East London                                    +156,472          +7%
North Central London                                 +148,256          +9%
South East London                                    +135,635          +7%
Thames Valley                                        +114,208          +5%
South West London                                    +107,923          +8%
Surrey and Sussex                                    +75,389           +3%
Bedfordshire and Hertfordshire                       +70,674           +4%
Hampshire and Isle Of Wight                          +24,793           +1%
Essex                                                +14,137           +1%
Avon, Gloucestershire & Wiltshire                    +9,794            +0%
Kent and Medway                                      +9,196            +1%
West Midlands South                                  -10,476           -1%
Leicestershire, Northamptonshire & Rutland           -20,764           -1%
Norfolk, Suffolk & Cambridgeshire                    -36,340           -1%
Birmingham & The Black Country                       -39,999           -1%
Dorset and Somerset                                  -43,853           -3%
County Durham and Tees Valley                        -55,400           -4%
Greater Manchester                                   -66,269           -2%
North and East Yorkshire and Northern Lincolnshire   -69,640           -4%
West Yorkshire                                       -70,876           -3%
Northumberland, Tyne and Wear                        -71,737           -4%
South Yorkshire                                      -72,699           -4%
Cheshire and Merseyside                              -75,904           -2%
Shropshire and Staffordshire                         -76,297           -5%
Cumbria and Lancashire                               -111,985          -5%
Trent                                                -121,801          -4%
South West Peninsula                                 -122,125          -6%

Table 1.2 PCTs with Largest Percentage Movement Effected by the Staff MFF

PCT                          Impact of MFF £000s   %
Islington                    +35,373               +12.6
Camden                       +38,502               +12.2
Tower Hamlets                +41,000               +12.0
Westminster                  +34,627               +11.8
City & Hackney Teaching      +40,461               +11.2
Kensington and Chelsea       +25,731               +11.0
Southwark                    +35,513               +10.0
Hammersmith and Fulham       +22,051               +9.7
Lambeth                      +36,307               +9.6
Wandsworth                   +29,261               +9.3
Newcastle-under-Lyme         -7,001                -6.0
North & East Cornwall        -11,353               -6.1
South Stoke                  -10,450               -6.2
North Stoke                  -10,777               -6.5
Blackpool                    -12,731               -6.5
North Devon                  -11,851               -6.6
Central Cornwall             -18,115               -7.8
Teignbridge                  -9,717                -7.9
West of Cornwall             -15,665               -8.1
Torbay                       -15,107               -8.4
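The expenditure-share weighting described above can be illustrated with a small sketch. It assumes a simple weighted average of the component indices, with the 27% running-cost share held at 1.0; the Department's published combination method may differ in detail, and the component values here are hypothetical.

    # Illustrative composite MFF from its components, using the national expenditure shares.
    # Assumes a simple weighted average; component indices below are hypothetical.
    weights = {"staff": 0.67, "buildings": 0.05, "land": 0.01, "running_costs": 0.27}
    components = {"staff": 1.28, "buildings": 1.15, "land": 1.40, "running_costs": 1.00}

    composite_mff = sum(weights[k] * components[k] for k in weights)
    print(round(composite_mff, 3))   # ~1.20 for this hypothetical high-cost trust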

Description of MFF and its History

The history of the MFF (summarised in RAWP1, 1998) starts with the Resource Allocation Working Party (RAWP) in 1976, which introduced the principle of equity into resource allocation on the basis of need and unavoidable cost, and recognised that “the costs of exactly the same form of care may vary from place to place depending on local variations in market forces”. RAWP initiated a review that reported in 1980 (AGRA, 1980), which considered the use of SCA in formulating a staff MFF and rejected it on the basis that it would reflect historical patterns of funding and practice, selecting instead the GLM approach. The first MFF related to non-professional staff and had a range of 17% between London and the Rest of England. The MFF was reviewed in 1988 (NHS Management Board, 1988) and the rationale for the GLM approach was restated, identifying private sector pay rates as a proxy for direct and indirect costs, where direct costs represent higher pay costs to recruit and retain staff and indirect costs are a by-product of higher turnover, e.g. higher recruitment costs, use of bank and agency staff, and purchase of services from the private sector 12. The last major review took place in 1995 13, with a further review in 2002 (Wilson et al, 2002a) and revision in 2003/4 (HSC 2002/012), leading to the current system, in which the staff MFF now includes all professional and non-professional staff, giving it the maximum possible coverage; its differential range has widened to 51%.

12 As part of our review we investigated non-pay indirect costs of recruitment, e.g. advertising, in the non-pay section of trusts’ general ledgers. The coding structure within general ledgers did not permit any robust form of comparison so that, even with highly detailed local data, it proved to be unfeasible to measure indirect costs of recruitment efficiently. Costs associated with substitution of inputs in terms of bank and agency have been analysed and reported on during the review. We have described them as a ‘volume’ effect (as distinct from pay, which is a ‘price’ effect).
13 The Resource Allocation Group commissioned Warwick University in 1995. Warwick reported in 1996 (Wilson et al, 1996).

Table 1.3 History of MFF

Review Date      Minimum MFF Score   Maximum MFF Score   Range   Coverage
1980             95.6                111.9               17%     Includes administrative and clerical staff, ancillary workers and unqualified nurses; excludes professional staff.
1988             -                   -                   27%     Coverage as in 1980.
1993             93.5                125                 34%     (MFF score relates to the 1995/6 allocation period.) Includes maintenance, administrative and clerical including managers, unqualified nurses and ancillaries, ambulance, qualified nurses, midwives, PAMs and P&T; excludes medical staff.
1995/6, 2002     85                  128                 51%     (MFF score relates to the 2005/6 allocation period.) Includes all medical and non-medical staff from 2003/4; excludes none.

Why the Specific Cost Approach has been Rejected in the Past

The last detailed review of the staff MFF was undertaken for the Resource Allocation Group (RAG) by the University of Warwick (Wilson et al, 1996). The Department of the Environment simultaneously undertook a review of the Area Cost Adjustment which drew similar conclusions (Elliott et al, 1996). Wilson et al once again considered and then discarded the possibility of applying the Specific or Recognised Cost (SRC) approach:

“The SRC approach has intuitive appeal. It appears to be a ‘common sense’ approach to the problem, in contrast to the GLM type methods which appear to involve something of ‘an act of faith’ that general labour market indicators can proxy the non-wage labour cost elements faced by providers in different parts of the country. However, while superficially attractive and straightforward, the SRC approach has a number of shortcomings:
• measuring the scale of these costs is, in practice, much more difficult than at first it might appear;
• in conjunction with a continuing national wage agreement, the SRC condemns providers in high cost areas to paying below the shadow wage rate necessary to optimise turnover and quality;
• the SRC approach provides a permanent perverse incentive, encouraging providers to increase their actual costs because these will in turn influence the MFF formula;
• the SRC approach remains open to abuse because of the problems of quantifying the non-wage elements of the costs in a generally acceptable manner.” (Wilson et al, 1996, pp13-14)

The Wilson et al (1996) and Elliott et al (1996) reviews have underpinned subsequent studies (e.g. NERA, 1998; PWC, 1998; Maxwell Stamp plc, 1999; Blanchflower, 2002; Davies, 2002), which continue to conclude that SCA is unworkable in practice, implying that some version of GLM is preferable. The Revenue Grant Distribution Review Group (FRG paper 65, 2002) went so far as to infer that there were “fundamental flaws both intellectually and in practice with a SCA method”, enumerating them as:

1. It takes account of what is spent rather than what needs to be spent;
2. The difficulties of standardisation render it useless - one cannot filter out all the effects other than region, quality etc;
3. There is an unavoidable judgemental aspect as to which are allowable specific costs and which are not, e.g. overtime;
4. The data collection is onerous and raises issues over the intelligibility of the method;
5. Previous attempts have suffered from paying little attention to the mix of staff and grades; adding these in increases the complexity;
6. The local labour market is not self-contained and must be compared to the whole economy, not just itself.

In summary, all previous reviews of market force adjustments have argued that the SCA failed to pass the tests of practicality, technical robustness, reliability of calculation and freedom from perverse incentives, since it was regarded as virtually impossible to distinguish between those elevated costs which were avoidable, i.e. inefficiencies, and those which were unavoidable.

Problems with the Current Method Based on the General Labour Market

At the same time, the GLM is not without its problems (Blanchflower, Oswald et al, 2002) in using the general labour market as a proxy for variable actual costs across the NHS. Throughout this review of the SCA approach to the staff MFF, trusts have articulated their objections to the MFF in terms of its impact and its rationale:

• The lack of connectedness between the GLM and the NHS labour market has attracted most criticism. The NHS is virtually a monopoly employer of doctors, nurses and other health professions and so the service at large remains unconvinced about the logic of using private sector wages to reflect NHS costs. What is more, the NHS comprises several different labour markets, so that doctors work in a closed occupational market with national or international geographical boundaries, in contrast to unqualified staff who are drawn from an open local labour market.

• The range and gradient of the staff MFF is challenged as being too wide in a labour market that is distinguished by London Weighting geographical allowances and perhaps some grade drift in areas of obvious competition with the commercial market, e.g. administrative and clerical staff.

• The original aim of the MFF was to enhance equity throughout the NHS, but the large and visible shift in resources driven by the MFF leads to feelings of inequity nationally. Adjacent trusts may have different MFF scores, producing ‘cliff-edge’ effects, which attract different funding levels even though trusts may be recruiting from the same labour market. Cliff edges produce a sense of inequity even in high MFF areas.

• The MFF has also been criticised for complexity and a lack of transparency in the methodology used.

Contemporary Difficulties Encountered in this Review

Circularity. We are observing the MFF from a different vantage point to that of RAWP in 1976, which started with a clean sheet, since there was no MFF. Their rejection of the SCA was based on circularity associated with previous NHS funding patterns and spending practice. The GLM, in contrast, offered at the time an independent measure of cost differentials. We are now in a position where the impact of GLM poses its own circularity, since the funding for staff is a product of funding patterns which are themselves a consequence of the GLM. Under those circumstances we would expect a degree of convergence between the cost base of trusts and the GLM represented by the MFF. This poses a theoretical bind, making it almost impossible to separate cause and effect in costing structures.

Agenda for Change (AfC). The policy underpinning AfC creates a tension with spatial wage variation as it aims towards a nationally consistent skill-wage structure. Grade drift, which has been part of the response in high cost areas to achieve market rates, will be reduced by AfC (albeit with the explicit option to use recruitment and retention premia as an alternative). It would also eliminate negative grade drift, in which low MFF trusts were able to attract staff on relatively low grades and, as a result of AfC, have been obliged to upgrade their staff. “We are waiting for the dust to settle around Agenda for Change”, observed one of our interviewees. At the moment we are not in a position to quantify the impact of AfC but the expected tendency may be to narrow the cost differentials nationally. The GLM theory predicts that employers would nevertheless seek some methods of retaining the market differentials. It is perhaps worth noting that the MFF (funding adjustment for providers for unavoidable cost differences, both direct and indirect) is not a mechanism for paying staff; formula funding through the MFF and employers’ staff payment systems have different applications and functions.

What is the Specific Cost Approach? Critics have rejected the SCA in the main by anticipating practical and conceptual problems. As a result there is no blueprint of what a Specific Cost Approach would look like if implemented. As a minimum it would represent differences in wage costs to individuals, which for the NHS would equate at least to London Weighting. It has been suggested (Elliott, 1996) that SCA would also include indirect costs associated with recruitment and high turnover. Our study is the first detailed empirical review of SCA for the NHS and, as such, had the task of building an approach. We have gone beyond basic wage differences by looking at both cost and volume of labour, i.e. wages and productivity, on a geographical basis. The methodology was refined throughout the process, informed by a Reference Panel of trusts which contributed to the review.

Timescale and Process

The project commenced in January 2006 and reported in July 2006, with final research completed in November 2006. The data sets we have used are summarised in Table 1.4. The costing and activity data reference period was selected as 2004/5, the most recent complete year available to us. The 2005/6 staff MFF has been used throughout; it is calculated by the general labour market method as a three-year rolling average of historic earnings data covering 2001-2003.
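For concreteness, the three-year rolling average works as sketched below. The earnings index values are hypothetical, and the actual GLM smoothing and standardisation steps are more involved than a simple mean.

    # Minimal sketch: three-year rolling average of an area's earnings index (values hypothetical).
    earnings_index = {2001: 1.25, 2002: 1.27, 2003: 1.29}
    staff_mff_input = sum(earnings_index.values()) / len(earnings_index)
    print(round(staff_mff_input, 3))   # 1.27 would feed the 2005/6 staff MFF for this area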

Structure

The study has drawn on a range of data sets and a separate chapter reports on each. Section A contains an introduction and summary chapter. Section B contains five chapters documenting the findings for a micro sample of 14 trusts, using a bottom-up costing approach. Section C looks at England as a whole through national data relating to nursing, medical staff and specialty (HRG chapter) costs. Arithmetic benchmarking tools are used alongside econometric modelling techniques to derive a picture of specific cost behaviour. Section D applies learning from Section C to a national data set that combines medical and non-medical staff together with costing data. It also adopts a theory-driven econometric method to test the feasibility of adopting a Specific Cost Approach at a macro level.

Table 1.4 Data Sets Used in the Staff MFF Study

Chapter   Data Set and Theme
3         Micro study design and sample of 14 trusts, selected on the basis of three criteria: (i) adjacencies, (ii) MFF ranking to gain a spread of low – medium – high, (iii) access and motivation
4         Trust general ledgers 2004/5 (micro sample); combined with HES episode and admission data 2004/5
5         Trust payrolls 2004/5 (micro sample)
6         Questionnaire survey of trusts 2006 (micro sample); qualitative survey of labour market conditions administered through a structured interview with HR and Nursing Directors
7         Non-medical census data September 2004; combined with other labour market references to test Chapter 6 perceptions
9         Healthcare Commission (HCC) 2004/5 data on ward nurse staffing and costs in 4,435 wards covering all hospitals in the country, plus: data on quality indicators drawn from trust star rating scorecards; House Price Index; rurality index
10        Medical census September 2004; combined with Hospital Episode Statistics 2004/5 + outpatient + A&E activity
11        Health Resource Group (HRG) unit costs and Reference Costs 2004/5; combined with FCE activity by HRG 2004/5
12        All Staff Data Base: non-medical census data September 2004; medical census September 2004; Episode Statistics 2004/5 + outpatient + A&E activity; Trust Financial Returns (TFR3A) 2004/5
13        Bespoke data sets provided by the Department of Health, including: unit labour costs 2004/5; Reference Cost Index 2004/5; Hospital Episode Statistics 2004/5
14        Bespoke data sets provided by the Department of Health, including: unit labour costs 2004/5; Reference Cost Index 2004/5; Hospital Episode Statistics 2004/5 + extended set of variables

In the main, we have structured each chapter to describe the data set, methods and results. The final discussion section in each chapter addresses three aspects, in accordance with the research brief:

• Spatial variation – what are the geographical differences in costs and productivity, together with quality of care, vacancies, use of agency, bank and overtime?

• Avoidable and unavoidable – where differences exist, to what extent can they be defined as avoidable or unavoidable?

• Feasibility – the chapter has reported on a specific data set. What is our assessment of the feasibility of using a similar approach or data set as an alternative to the current GLM method?


CHAPTER 2. SUMMARY OF FINDINGS AND CONCLUSIONS

This chapter summarises the findings and conclusions of the review of the Specific Cost Approach to the staff MFF under the headings (a) role of theory, (b) design of study, (c) approach, (d) spatial variation, (e) avoidable or unavoidable nature of spatial variation and (f) feasibility of applying the approach and the data set to generating an MFF.

THE ROLE OF THEORY

Economic theory has a role in the study, in driving design (e.g. Chapter 13) and in evaluating findings. We consider here the role of labour market and public choice theories.

General Labour Market

Labour market theory underpins the staff MFF, predicting that it is more expensive to employ staff in some areas, notably London, than others. Competitive wages will rise or fall according to the cost of living. Within a given skills set, spatial wage differentials will reflect differences in the cost of living plus cost of amenities in different geographical areas. (Amenities and disamenities reflect financial and nonfinancial differences, such as job satisfaction and attractiveness of location.)

In terms of the NHS, where wages are determined by national structures and are therefore sticky, trusts in areas with low cost of living and low market wages (i.e. low MFF areas) will be paying above the going rate for staff, in contrast to trusts in high cost/high wage areas (high MFF areas) which will be paying staff below the market rate. The theory predicts that this asymmetry between NHS and general labour markets will lead low MFF areas to attract more staff of better quality, who will stay longer, reflecting better recruitment and retention conditions. The outcome is expected to be higher productivity and lower turnover associated with fewer vacancies. Conversely, the theory predicts that high MFF areas will attract a poorer quality workforce and experience greater difficulty in recruitment and retention, reflected in higher turnover rates, increased reliance on bank and agency and lower productivity. Economic theory also suggests that the NHS wage in high MFF areas will have a tendency to drift upwards (as employers strive to recruit) and so be measurably higher than wages in low MFF areas.

Public Choice

The same set of events is open to interpretation by alternative economic theories. Public choice theory relates behaviour to the procedures of decision-making rather than to the outcomes (Wiseman, 1985; Cullis and Jones, 1992). An observation that high MFF trusts have lower productivity and more expensive staff would be consistent with the prediction that managers maximise their budgets and, in order to do this recurrently, will spend their full budget allocations each year. A consequence would be that trusts receiving higher funding (through the external MFF mechanism) would raise their expenditure in line with budget, resulting in lower productivity and higher unit costs.


Dominant Theory

We allow for both arguments throughout the analysis but, on the basis of empirical evidence, ultimately conclude that the labour market theory offers a better overall explanation of spatial wage cost differentials.

DESIGN OF STUDY

The review was divided into a micro study and a macro approach employing both arithmetic and econometric tools:

• The micro study was based on fourteen trusts, selected according to (i) MFF score to achieve a minimum – maximum spread across the 0.85 – 1.28 range of the index, (ii) geographical clusters and (iii) access and motivation. The study had a quantitative component, based on general ledger and payroll systems data, and a qualitative element based on interviews undertaken within the trusts.

• The macro component was based on a sample of up to 173 hospital trusts, i.e. all acute trusts in England. A variety of data sources and methods were employed, with the aim of testing their usefulness. For ease of display much of this data has been summarised into quintiles, where quintile 1 represents the twenty percent of trusts in England that are in the lowest MFF range and quintile 5 represents the fifth that are in the highest MFF bracket. Quintile 1 is dominated by the South West Peninsula, Trent and Cumbria, while Quintile 5 comprises trusts mainly in central, north and west London and the Thames Valley. The MFF describes spatial or geographical variation of trusts and we have looked for variation in cost behaviour that is consistent with the MFF.

Figure 2.1 shows the levels of measurement captured in this study and maps a set of connections, each dependent upon one or more data sets (a simple numerical sketch of the core ratios follows this list):

• Examine labour market conditions
  o Study: qualitative survey, triangulated against external data
• Quantify wages per employee (cost per wte)
  o Study: HCC, payroll analysis and general ledger analysis
• Identify productivity differentials (wte per workload measure 14)
  o Study: HCC, general ledger analysis, medical and non-medical census analysis
• Identify unit labour cost differentials (staff cost per workload measure)
  o Study: HCC, general ledger analysis
• Identify total unit cost differentials at specialty level, standardising for casemix
  o Study: specialty analysis based on HRG costs
• Identify total cost differentials standardising for a basket of factors
  o Study: econometric analyses contained in chapters 13 and 14

14 Workload is variously expressed as hospital output units, e.g. finished consultant episodes, and also as resource input or capacity units, e.g. available beds.
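The sketch below, using hypothetical figures and field names of our own rather than the study's data specification, shows how the three core ratios in the list above relate to one another: the unit labour cost is the product of the price measure and the volume measure.

    # Hypothetical trust-level figures; field names are ours, not the study's.
    wage_bill = 120_000_000.0   # total staff cost, pounds per year
    worked_wte = 3_000.0        # worked whole-time equivalents
    workload = 95_000.0         # e.g. complexity-adjusted admissions or FCEs

    cost_per_wte = wage_bill / worked_wte          # price (pay) measure
    wte_per_workload = worked_wte / workload       # volume (productivity) measure
    unit_labour_cost = wage_bill / workload        # equals cost_per_wte * wte_per_workload
    print(cost_per_wte, wte_per_workload, unit_labour_cost)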


Figure 2.1 Reviewing the Specific Cost Approach to the Staff MFF

[Figure: a diagram asking whether there is a connection between the staff MFF (an index of spatial wage variation in the general, private sector, labour market) and NHS-specific measures. The boxes cover: price (pay; cost per WTE); volume (productivity; WTE per workload measure); unit labour cost; HRG unit cost by specialty; NHS labour market conditions from the qualitative survey (vacancies, turnover, use of bank & agency, quality, cost of living); and structure & geography (location, rurality, hospital type).]


SPATIAL VARIATION

The staff MFF is an index of spatial wage variation in the general (private sector) labour market. Our starting point was to explore labour market conditions in the NHS, based on the micro study questionnaire survey and national data, and then to look at staff costs. Unit labour costs are a product of pay (price) and productivity (volume). While it was readily accepted that NHS pay rates vary with geography, within the limits of a national pay system (through, for instance, London weighting allowances), the productivity dimension was more controversial. An important strand of our enquiry has been to look at spatial productivity differences, with no a priori assumptions as to whether they were avoidable (inefficiencies) or unavoidable (conditioned by the labour market).

Labour Market Conditions General labour market theory suggests that spatial variations in private sector pay captured by Standardised Spatial Wage Differentials (SSWDs) influence the labour markets in which the NHS operates either directly, through variations in salary costs, or indirectly, through variations in costs associated with vacancies, high use of bank and agency, use of overtime, low productivity and lower quality of care. We explored labour market conditions through a structured questionnaire survey of HR and nursing directors in our micro sample of 14 trusts, administered through interviews in March – April 2006. Three labour market profiles emerged, consistent with the geographical distribution in the sample of the south west, London and the north, and are based on perceptions (detailed in Chapter 6) that have been verified against national data (in Chapter 7).

In the South West, trusts described a stable workforce that remained with the organisation for a long time, resulting in low turnover, with a high average age and a high proportion of staff employed on a part-time basis (estimated by one trust at 50%). There was historically some use of overtime; bank staff may be drawn from a pool of part-time staff who work exclusively on the bank. A recurrent theme in the interviews was the buoyancy of the local economy, which increased competition among employers within the local labour market for support workers, and which raised the cost of housing to a level which was unaffordable to the local population (increasingly purchased by equity-rich but work-poor older people moving to the area in retirement).

The trusts in the north of England presented a similar profile in every respect except for the local economy and housing. Recruitment of support workers was not difficult and the cost of housing had remained lower than in other parts of England (although the issue of affordable housing for nurses and other NHS workers was a theme which ran throughout the interviews). The quality of the workforce was regarded as high due to its stability.

London trusts described a picture of a younger workforce, living more often in rented accommodation, with higher turnover leading to higher vacancy factors at any given time, requiring greater use of bank nurses to cover these vacancies. It was also noted that the proportion of part-time staff was relatively low (estimated by one trust as 10%). High proportions of full-time staff are consistent with a younger age profile and higher cost of living. One trust observed that bank staff were drawn from their own full-time employees who routinely worked bank shifts to enhance their wages.

Demand and Supply of Labour. The demand-supply balance was perceived to have shifted across all labour markets (at the time of interview), particularly among newly qualified nurses and physiotherapy staff, due to: (i) increases in pay associated with recent awards, (ii) increases in the supply of newly qualified recruits through growth in the number of training commissions, and (iii) a degree of insecurity which had entered the job market due to financial instability in the NHS, associated with announcements of redundancies, that had increased retention, reduced turnover and so reduced the number of vacancies. There was a feeling that the NHS labour market had entered a new era of improved recruitment, which offered some advantages in raising the quality of recruits since employers could be more selective. Overseas recruitment drives, it appeared, were a thing of the past.

Turnover. Turnover data for the period 2003/4 (based on census data for 2003 and 2004) supports the perception that low MFF trusts have a more stable workforce, as summarised in the table below which shows that 19% of nursing staff in quintile 5 left their organisations compared to 13% in quintile 1. Our interviewees, speaking from current experience of 2005/6, identified turnover as being low or medium for registered nurses, registered midwives and total staff. No trust perceived turnover to be high.

Table 2.1 Nursing Staff: Average of Leaver Rates (Census 2003 and 2004, n = 172)

Quintile                Teaching   Non-Teaching   Total
1                          -           13%         13%
2                         14%          13%         14%
3                         15%          16%         16%
4                         15%          17%         17%
5                         22%          18%         19%
All Hospital Trusts       17%          15%         16%

Vacancies, defined as the difference between establishment and in-post, according to the HCC national data for ward nursing staff, showed a steady increase throughout the quintile range from 6% to 22%.

Table 2.2 Ward Nurse Vacancies (HCC Dataset, n=165) 2004/5

Quintile    1     2     3     4      5     All Hospital Trusts
Total       6%    7%    9%    14%    22%   17%

Use of Overtime diminished in the high MFF range according to national nursing data (HCC) but the micro study of payroll data did not find systematic patterns for any group except ancillary, where London's proportion of gross pay was 3.3% higher than in non-London trusts.

Use of Bank and Agency was consistently higher in high MFF trusts across all staff groups except managers. The HCC analysis showed this for ward nurses and the micro study payroll analysis (n=9 trusts) investigated it across all staff groups.

Table 2.3 % of Total Wage Bill (HCC Dataset, n=165) 2004/5

Quintile               Overtime   Bank     Agency
1                        2.3%      5.1%     2.6%
2                        2.1%      4.9%     2.2%
3                        1.7%      7.4%     4.1%
4                        1.5%     10.7%     4.0%
5                        0.4%     15.9%     5.1%
All Hospital Trusts      1.6%      8.7%     3.6%

The low resort to overtime in the upper quintile (and the relatively low use across the NHS) is perhaps surprising. It may reflect the fact that bank (and even agency) is a generally cheaper option, though not necessarily a more cost effective one if it has hidden costs in terms of productivity and retention.

Table 2.4 London Uplift Comparing Bank & Agency as % of Gross Pay (n=9)

                                       Non-London   London   London increase
Administrative & Clerical                 3.6%       16.5%      +12.9%
Ancillary                                 5.2%       23.4%      +18.2%
Management                                0%          0.5%       +0.5%
Medical                                   5.6%        8.9%       +3.3%
Nursing                                   8.7%       26.7%      +18.0%
Scientific, Therapeutic & Technical       3.5%       14.4%      +10.9%
All Staff                                 5.9%       16.3%      +10.4%


Quality. Five quality indicators were included in the HCC data set and, with the exception of drug errors per bed, trusts in the top quintile performed better on these indicators. At the same time, the trusts in the top quintile did not appear to deliver better quality, according to the quality markers supplied by the Department of Health and used in the star rating reviews. This reflects the findings published in the Healthcare Commission's Ward Staffing report (2005) which found that patients "are less happy with the care received in London hospitals" (p10). A recent study (Hall, Propper & Van Reenen, preliminary draft, 2006) has gone further by examining the connection between labour markets, wage differentials and quality expressed as death rates. In keeping with labour market theory, the study predicted that "areas with higher outside wages should suffer from problems of recruiting, retaining and motivating workers and this should harm hospital performance" (p1). The study found that stronger local labour markets (i.e. higher MFF areas) significantly worsened hospital outcomes in terms of both quality and productivity. A 10% increase in the outside (local labour market) wage was associated with a 3% - 8% increase in death rates. It drew the unambiguous conclusion that "an important part of this effect is operate through hospitals in high outside wage areas having to rely on temporary agency staff as they are unable to increase (regulated) wages in order to attract permanent employees" (sic, p1). Empirical evidence, therefore, supports labour market theory by stating that paying below the market rate (in high MFF areas) results in higher use of temporary staff, lower productivity and poorer quality.

Rurality. We considered rurality through reference to literature and as part of the econometric analysis conducted later in the study. A literature survey (Department of Health, 2005b) identified five reasons why rurality might increase costs: diseconomies of scale/scope, travel costs, unproductive time, the basis of precedent (lack of rurality adjustment in England compared to Scotland, Wales and N Ireland) and other factors such as telecommunications. The survey found that, while there were clear perceptions reported that rurality carried increased cost burdens, there was little empirical evidence to support this. Our econometric analysis, similarly, found that rurality was not associated with higher staffing costs. If anything, rurality was associated with lower costs than urban areas (Chapter 14). Competition (trusts within 20 mile radius) and population density were urban factors correlated with higher costs. Rural labour markets have been identified as areas with relatively low private sector wages and with hospital workforces that show lower turnover, higher productivity and better quality outcomes than those of densely populated urban areas. The implication of the econometric analysis is that any cost inflators associated with rurality (e.g. travel times) are outweighed by cost deflators, most notably labour market factors. This is little comfort to rural areas which argue that low costs may reflect poverty of access (through limited infrastructure) to services. Equity differences can be captured when costing models use a standard outcome or level of access to compare rural and urban areas (discussed in Smith, 2007). For example, a detailed study of the emergency ambulance service (MHA/ORH, 1997) showed that more ambulances were needed to meet the national standards, e.g. 75% of life threatening calls to be reached within 8 minutes, so that rurality tended to increase the cost of this service. The national resource allocation formula now recognises this through a separate payment adjustment. In the context of this staff MFF review, however, the evidence has consistently indicated that rural areas have lower staff-related costs than urban areas.

Housing. Mean house prices are correlated 0.7 with the staff MFF, with a rise in prices throughout the quintiles except for quintile 1, which ranks second, its prices falling between those of quintiles 2 and 3. There has been a nationwide increase in prices but a shift in the differentials over the last decade (HBSO, 2006), narrowing the gap between the South West Peninsula and London. The theme of house prices ran through the micro study as a cost of living indicator and recruitment constraint. The study found, however, that within any given staff group the NHS was similarly disadvantaged across the country (so that nurses struggled to buy houses whereas consultants were better placed). Mean house prices are highly correlated with spatial wage differentials (the general labour market) and demonstrate the link between private sector wage rates, costs of amenities and the staff MFF.

Qualitative and Quantitative Data. Tables 2.1 – 2.4 demonstrate that trusts' perceptions obtained through interviews (qualitative data) were backed up by national data relating to turnover, workforce and local economic conditions.

Timing and Links with General Labour Market Predictions. The data, both quantitative and qualitative, is broadly consistent with general labour market theory but poses a question of timing. In responding to the questionnaire survey, participants gave a sense of their current experience, setting it in the context of how conditions have changed in recent years. Earnings statistics, underpinning the staff MFF and based on the general labour market approach, on the other hand, use a rolling three year average of retrospective data. This earnings data may take some time to reflect the impact of local economic movements upon wage rates.
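The reported correlation of around 0.7 between mean house prices and the staff MFF is an ordinary Pearson correlation at trust level. A minimal sketch, using invented values rather than the survey data, is:

    import numpy as np

    # Invented trust-level values, purely to show the calculation.
    mff          = np.array([0.86, 0.90, 0.95, 0.99, 1.03, 1.08, 1.14, 1.21, 1.27])
    house_prices = np.array([138_000, 162_000, 155_000, 171_000, 190_000,
                             184_000, 230_000, 262_000, 295_000])

    r = np.corrcoef(mff, house_prices)[0, 1]   # Pearson correlation coefficient
    print(f"correlation between MFF and mean house price: {r:.2f}")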

Pay and Productivity Pay per wte 15 rises in line with the staff MFF across all groups except doctors. This finding was consistent across the micro study data sets (general ledger in Chapter 4 and payroll in Chapter 5) and the national Healthcare Commission (HCC) nursing data (in Chapter 9). Variation in pay cost per wte is summarised in Table 2.5. Basic Pay is the payment to individuals that excludes Geographical Allowances and other enhancements. Gross Pay is defined as Basic Pay plus Geographical Allowances, Overtime and Other Adjustments. Total Wage Cost is defined as Gross Pay plus Employer's Costs.

Table 2.5 Summarising Variation in Cost per wte

                                        HCC Gross   Payroll     Payroll     GL Actual   GL Budget
                                        Pay         Basic Pay   Gross Pay
Doctors                                    -          -26%        -16%        +19%         +8%
Nurses                                   +18%          +4%        +23%        +37%        +34%
Scientific, Therapeutic & Technical        -           +9%        +30%        +55%        +30%
Administrative & Clerical                  -          +12%        +33%        +76%        +39%
Ancillary                                  -           +3%        +30%        +61%        +36%
Management                                 -           +9%        +16%        +43%        +13%
Total                                      -           +4%        +22%        +48%        +33%

HCC Gross Pay (n=4,435 wards, 165 trusts): difference between top and bottom quintiles (mid points). Payroll Basic Pay and Gross Pay (n=9), General Ledger (GL) Actual Total Wage Cost (n=13) and GL Budget Total Wage Cost (n=14): difference between London and non-London as % of non-London.

Table 2.5 points to a difference in results emerging from our analysis of the trusts' payroll and general ledger systems, especially among medical staff. The general ledger contains items that do not appear on the payroll, e.g. agency staff including medical agency locums. Other differences occur in treatment of consultant contract back-pay, bonus payments and provisioning.

15 where 'pay' represents the total wage cost per whole time equivalent employee, including geographical allowances
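The pay definitions above translate directly into a simple calculation. The sketch below is a minimal illustration, assuming a flat payroll extract with the fields named here; the field names and figures are invented, not those of any trust system.

    from dataclasses import dataclass

    @dataclass
    class PayRecord:
        """One employee's annual pay, following the definitions in the text (£)."""
        basic_pay: float
        geographical_allowances: float
        overtime: float
        other_adjustments: float
        employer_costs: float      # employer's costs added on top of gross pay
        wte: float                 # whole time equivalent, e.g. 0.6 for part time

        @property
        def gross_pay(self) -> float:
            # Gross Pay = Basic Pay + Geographical Allowances + Overtime + Other Adjustments
            return (self.basic_pay + self.geographical_allowances
                    + self.overtime + self.other_adjustments)

        @property
        def total_wage_cost(self) -> float:
            # Total Wage Cost = Gross Pay + Employer's Costs
            return self.gross_pay + self.employer_costs

    def cost_per_wte(records) -> float:
        """Total wage cost divided by total wte across a payroll extract."""
        return sum(r.total_wage_cost for r in records) / sum(r.wte for r in records)

    payroll = [PayRecord(22_000, 3_000, 500, 200, 4_500, 1.0),
               PayRecord(13_000, 1_800, 0, 0, 2_600, 0.6)]
    print(f"cost per wte: £{cost_per_wte(payroll):,.0f}")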


Productivity is significantly lower in higher MFF trusts for all staff groups except managers and ancillary. The most compelling evidence comes from the HCC study of ward nurses where we found 47% more nurses per hospital admission and 35% more after adjusting for case mix (based on the complexity index summarised in Appendix 1). Table 2.6 summarises the results by staff group.

Table 2.6 Summary Variation in wte per Workload Measure

Productivity measure: HCC wte (skill mix adjusted) per 100 admissions
Sample size: N=4,435 wards, 165 trusts
Difference being measured: difference between bottom and top quintiles (mid points)
Result: Nurses +47% gap, reducing to +35% casemix adjusted

Productivity measure: Medical Census wte per 100 FCE, (1) unweighted, (2) complexity and volume adjusted (CVA)
Sample size: n=173 trusts
Difference being measured: difference between top and bottom quintiles (mid points)
Result: Doctors (1) +73% all grades, unweighted FCE; (2) +46% all grades, and +22% for career grades, CVA

Productivity measure: General Ledger budget wte per 1,000 admissions
Sample size: N=14
Difference being measured: London vs non-London difference
Result: Doctors +83% across all grades; differences for the remaining individual staff groups of +25%, +56%, -18% and +53%; Total +39%

Association found: low productivity associated with high MFF for Doctors, Nurses, Scientific, Technical & Therapeutic, Administrative & Clerical and Total staff; no association for Ancillary and Management.

Unit Labour Costs (ULC) by Staff Group The impact of higher average wages and lower average productivity in high MFF trusts is to raise the relative unit labour cost, accentuating the spatial variation through a combined price and volume effect. This was explored in the micro study (Chapter 4, general ledger) by taking all staff costs and dividing them by a measure of workload, e.g. 1000 admissions, across 14 trusts. We observed an almost threefold difference in cost between the lowest and highest cost trust (from £1.2 million to £3.5 million per 1,000 admissions, see Appendix 4.1) across the whole workforce. Similar differentials were apparent when we considered individual staff groups. London trusts in aggregate spent 85% more than non-London trusts on staffing per 1,000 admissions (see Table 4.4). Within the general ledger analysis it was striking that, while there was some variation between budgeted and actual costs within trusts, this was neither systematic nor of the same magnitude as the variation between trusts’ budgets. Broadly, trusts aim to work within budget (Crilly and Le Grand, 2004) so that differences in their cost bases are a function of the budgets available to them. This analysis is consistent with both (a) labour market theory predictions where high MFF equates to high staff costs and (b) public choice theory which would anticipate that higher budgets (through exogenous resource allocation) would produce higher expenditure, and highlights the difficulty in distinguishing between cause and effect.
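Because the unit labour cost combines a price effect (cost per wte) and a volume effect (wte per unit of workload), the two uplifts compound rather than add. A small worked example, using round illustrative figures rather than the study's own, shows the mechanism:

    # Illustrative decomposition of a unit labour cost (ULC) gap; figures are invented.
    price_uplift  = 0.30   # assumed London cost per wte 30% above non-London
    volume_uplift = 0.40   # assumed London wte per 1,000 admissions 40% above non-London

    # ULC = (cost per wte) x (wte per unit of workload), so the uplifts compound.
    ulc_uplift = (1 + price_uplift) * (1 + volume_uplift) - 1
    print(f"combined unit labour cost uplift: {ulc_uplift:.0%}")   # 82%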


The staff group ULC analysis based on micro-trust financial data produced aggregated results that set an agenda for further enquiry. Subsequent stages of the project (reported in Section C) refined the workload measures (to take account of complexity and volumes of outpatient and A&E attendances) and separated out the price and volume effects within unit labour costs.

Unit Costs by Specialty We considered maternity costs in detail using the general ledger and found no spatial variation in costs. This was consistent with our analysis of HRG chapters where we found an uneven link between geography and cost, showing no spatial variation in maternity and ophthalmology costs but a positive relationship between MFF and costs in Chapters S (haematology, infectious diseases, poisoning and non-specific groupings) 16, P (diseases of childhood) and H (musculoskeletal). We do not know what drives these variations in cost behaviour as HRG costs are not sufficiently transparent to show their composition.

AVOIDABLE VERSUS UNAVOIDABLE COSTS We have used two techniques, benchmarking and regression modelling, to separate out avoidable and unavoidable differences in costs. Both approaches require standardisation of casemix complexity (Appendix 1) and trust type (teaching, non-teaching acute and specialist). Benchmarking is a linear approach, applying yardsticks of best practice, the selection of which involves some judgement. Its advantage is that it is well understood and used extensively throughout the public sector, and also that its assumptions are made explicit, e.g. in setting a ratio of staff against workload. Regression modelling is technically more complex, allowing it to handle a broad range of variables simultaneously. Its ability to reduce large data sets to a few coefficients is its strength but also poses a problem when it comes to presenting the results to a general audience. It uses a probabilistic approach to estimate the certainty with which a variable can or cannot be explained by other factors, and so it quantifies uncertainty. Benchmarking approaches are favoured by managers because they depend upon assumptions which, once selected and agreed, are used to eliminate uncertainty and provide a basis for decision-making. A further difference between the two approaches is their level of aggregation. The benchmarking approach aggregates trusts to quintile level (by dividing the MFF range into five) while controlling for trust type (acute, specialist, teaching) and location (London, nonLondon). Differences between high and low cost trusts offset each other within each quintile, so that individual trust variations are netted off at broad spatial levels. Regression analysis treats each trust as a separate observation and so measures all noise in the system.

Pay Ward Nurses in England. The HCC arithmetic analysis of wards attributed the observed 18.3% price variation in nurses to geographical and workload factors. It comprised 10.7% geographical allowance, 6.3% skill mix linked to caseload complexity and 1.3% described as 'grade drift' associated with the London labour market. The residual grade drift element could be interpreted either as unavoidable (required to meet labour market recruitment conditions) or as an avoidable cost incurred by trusts within their budgets, while caseload complexity was a PbR tariff issue, rather than MFF cost. The HCC sample price regressions for staff pay costs, excluding geographical allowances, suggest that the MFF is fully accounted for by unavoidable factors such as type of hospital and location. However, when the analysis is carried out with total costs per wte including geographical allowances, there is a small residual positive effect of MFF on price suggesting that the MFF might be providing cash that can be used for buying more expensive nurses (0.6% of variation). This is consistent with the direction of the arithmetic calculations, although slightly smaller in scale.

The micro study payroll analysis of all staff groups concluded that between London and non-London trusts there was a 22% difference in pay costs per average wte and suggested that at least 18% could be described as unavoidable, consisting of 4% basic, 9% geographic allowance, with the further 5% representing the flow-through effect on overtime and other costs.

The all staff data base econometric analysis of Chapter 12 investigated the total wage cost per WTE and found that size and type of hospital accounted for 26% of variance while geographical factors (including the staff MFF along with the London location, which is both geographical and a driver of cost through London Weighting) accounted for a further 28%. The total explained variance in staff cost per wte amounted to 54%.

The benchmarking analysis of the all staff data base identified a price variance between quintiles 1 and 5 of 19%, similar to the findings of earlier work relating to the HCC data set of nurses in England and the micro study payroll analysis. Most of this was judged to be unavoidable (recognising that an element was skill mix associated with more complex work and would be paid via tariffs).

16 Chapter S is a heterogeneous group of HRG codes that includes complications of procedures, planned procedures not carried out, 'poisoning, toxic, environmental and unspecified effects', admission for unexplained symptoms, as well as haematology, infectious diseases and poisoning.
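The decompositions quoted above are simple additive splits of a percentage-point difference. The short check below restates them using the figures reported in the text:

    # Ward nurses (HCC): 18.3% quintile 5 vs quintile 1 price variation.
    nurse_components = {"geographical allowance": 10.7,
                        "skill mix linked to caseload complexity": 6.3,
                        "grade drift": 1.3}
    assert abs(sum(nurse_components.values()) - 18.3) < 1e-9

    # Payroll analysis: 22% London vs non-London difference in pay cost per wte,
    # of which at least 18 percentage points were judged unavoidable.
    unavoidable = {"basic": 4.0,
                   "geographical allowance": 9.0,
                   "flow-through to overtime and other costs": 5.0}
    print(f"unavoidable element: {sum(unavoidable.values()):.0f} of 22 percentage points")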

Productivity of Ward Nurses A benchmarking approach has been used to illustrate the magnitude of cost differences and to discuss whether they are avoidable or unavoidable. Benchmarking depends upon judgement and interpretation, and so we cannot use this exercise to conclude that raised costs are definitely avoidable. Regression modelling, likewise, can tell us what proportion of costs is explained and unexplained, but cannot tell us how much of the unexplained variation is avoidable.

Benchmarking. Volume variance 17 (of standardised grade E wte per 100 complexity adjusted admissions) between quintiles 1 and 5 is 35%, showing that the number of nurses needed for the same output is 35% higher in quintile 5 than in quintile 1. Much of this variance is located in London acute trusts. Trust peer groups were defined as acute, teaching and specialist. If acute and teaching hospitals with relatively low productivity (mainly London and quintiles 4 and 5) were to function at the average for their peer group, then the productivity gap would reduce to 20%. If they were to function at the level of the most efficient quintile (Q1) then the productivity differential would reduce to 15%. These are 'what if..?' rather than 'what could or should be' statements, serving to quantify rather than explain differences.

Volume Regressions found that only a small fraction of variation in productivity ratios could be accounted for by our econometric model, which was surprising given the range of variables that were included in the equation. The direction of findings was consistent with the arithmetic approach since the MFF itself appears to account for up to 32% of variance (excluding three outlier trusts), suggesting that there may be other factors at work on the volume of nurses provided, which may be avoidable. The results also suggest that both specialist and teaching hospitals in London have fewer staff per admission (complexity adjusted), whilst acute hospitals have more, implying lower productivity in London acute trusts.

17 'Variance' is used according to accountancy conventions to describe difference or variation from a standard. It does not conform to the specific statistical measure of variance. This should not cause confusion since the arithmetic and statistical analyses are conducted separately throughout the report.

Productivity of Medical Staff A benchmarking approach was applied to the medical workforce and workload, allowing us to consider constraints operating in the trust environment, such as the geography of medical training and education, requirements for recognition of junior doctor posts, Royal College professional standards and the role of consultants in supervising junior staff, in addition to growth drivers such as the consultants’ contract. Workload measures were refined to include a complexity adjustment (based on national HRG unit costs) and a volume adjustment (based on the number of outpatient and A&E attendances) in addition to the basic measure of FCE (selected in preference to admissions or spells because it provided the best match to HRG unit costs and to medical specialties). We concluded that trusts have limited discretion in varying medical staffing numbers (for a given workload) since the workforce is largely driven by education requirements. According to this analysis only 1% of the medical workforce, or 3% of career grades, could be identified as an avoidable cost. It suggests that London’s apparent over-resourcing in medical staff is a product of the medical education system which trains 28% of England’s doctors in London hospitals while treating 15% of England’s patient admissions. In the context of the whole medical workforce 1% is not large, but it is potentially significant for individual trusts since the avoidable cost element would be located in the non-consultant career grade (staff grade and associate specialist) within high MFF trusts, mainly in London.

Productivity of All Staff Productivity expressed as wte per occupied bed was explained by hospital type and size (R2=67%) with very little to be added by geographical variables (R2=2%) 18. When the productivity measure was defined as WTE per complexity adjusted FCE equivalent patient 19 then the size and trust type explained 28% of the variance and the Staff MFF explained a further 12%. The implication was that there is a (reasonably) predictable relationship between staffing and bed capacity, unrelated to geography, but a much less predictable relationship between staffing and output.

The benchmarking approach observed more order within the data, partly by aggregating trusts into quintiles, while also allowing for differences between acute, teaching and specialist. It used scenario modelling to compare volume variances between quintiles 1 and 5 in weighted WTE per complexity adjusted FCE equivalent patient. The current position, described as Scenario A, showed 15.5% difference across the range 20. Scenario B showed the effect of adjusting all trusts' performance (ratio of staff to workload) to the average, bringing to 7.4% the difference between the top and bottom quintiles. Scenario C made the strong assumption that all trusts could match the best performing quintile, within the hospital type, showing a 6.2% difference between quintiles 1 and 5 (reflecting trust type mix rather than geographical variation). Judgement was applied to determine a plausible balance. The two extremes, Scenarios A and C, were judged to be unrealistic since they characterised productivity differentials as being entirely unavoidable or entirely avoidable. The average (B) was also rejected, informed by evidence that labour markets differ in their exposure to turnover rates, prompting reliance on labour substitutes (bank and agency), which is likely to induce some unavoidable productivity differentials. On this basis, a realistic estimate of unavoidable spatial volume variance was pitched between Scenarios A and B, generating a fourth scenario D in which the unavoidable volume variance between quintiles 1 and 5 was estimated to be +11.4%, compared to the current 15.5%, i.e. a quarter of the current volume variance is judged to be avoidable against these scenarios.

18 See Table 12.18 and accompanying text for details on how the staff MFF, rurality plus London/non-London variables were used to construct this R2.
19 Complexity adjusted FCE equivalent patient = FCE volume weighted to take account of caseload complexity + an adjustment for outpatient and A&E attendance volume.
20 The unweighted variance was 9.1% (Appendix 12.1, Table 12.App1).
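The scenario modelling can be sketched in a few lines. The staffing ratios below are invented, the peer groups are reduced to two types, and 'best performing quintile' is interpreted as the lowest quintile-average ratio within each peer group, so this is an illustration of the approach rather than a reproduction of the report's calculation:

    import pandas as pd

    trusts = pd.DataFrame({
        "quintile":   [1, 1, 1, 5, 5, 5],
        "peer_group": ["acute", "acute", "teaching", "acute", "teaching", "teaching"],
        "ratio":      [3.0, 3.2, 3.5, 3.6, 3.9, 4.2],   # wte per workload unit (invented)
    })

    def q5_vs_q1_gap(df, col):
        by_quintile = df.groupby("quintile")[col].mean()
        return by_quintile[5] / by_quintile[1] - 1

    # Scenario A: the current position.
    print(f"Scenario A gap: {q5_vs_q1_gap(trusts, 'ratio'):.1%}")

    # Scenario B: every trust performs at the average for its peer group.
    trusts["ratio_b"] = trusts.groupby("peer_group")["ratio"].transform("mean")
    print(f"Scenario B gap: {q5_vs_q1_gap(trusts, 'ratio_b'):.1%}")

    # Scenario C: every trust matches the best performing quintile within its peer group.
    best = (trusts.groupby(["peer_group", "quintile"])["ratio"].mean()
                  .groupby(level="peer_group").min())
    trusts["ratio_c"] = trusts["peer_group"].map(best)
    print(f"Scenario C gap: {q5_vs_q1_gap(trusts, 'ratio_c'):.1%}")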

Price & Volume Variance Index SCA D An index called SCA D was created by combining a flexed price variance (in which 10% was assumed avoidable) and the selected volume variance (in which 25% was assumed avoidable). It follows a similar contour to that of the MFF, with a marginally higher minimum and lower maximum position, starting 1.8% higher at quintile 1 and ending 1.7% lower in quintile 5.

Figure 2.2 Index Based on Price & Volume Variance
(Chart: Staff MFF and SCA D indices plotted by quintile.)
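The construction of SCA D is not spelled out in detail here, but the principle - strip an assumed avoidable share out of the price and volume variances and compound what remains into an index - can be sketched as follows. The quintile variances below are invented and the normalisation is an assumption, so this is an interpretation of the approach, not the report's calculation:

    # Invented quintile-level price and volume variances, expressed relative to quintile 1.
    price_variance  = {1: 0.00, 2: 0.03, 3: 0.06, 4: 0.11, 5: 0.19}
    volume_variance = {1: 0.00, 2: 0.02, 3: 0.05, 4: 0.09, 5: 0.155}

    AVOIDABLE_PRICE  = 0.10   # 10% of the price variance assumed avoidable
    AVOIDABLE_VOLUME = 0.25   # 25% of the volume variance assumed avoidable

    raw = {}
    for q in range(1, 6):
        unavoidable_price  = price_variance[q]  * (1 - AVOIDABLE_PRICE)
        unavoidable_volume = volume_variance[q] * (1 - AVOIDABLE_VOLUME)
        raw[q] = (1 + unavoidable_price) * (1 + unavoidable_volume)

    # Rescale so the index averages 1 across quintiles, mirroring an MFF-style index.
    mean = sum(raw.values()) / len(raw)
    sca_d = {q: round(v / mean, 3) for q, v in raw.items()}
    print(sca_d)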

Unit Labour Cost (ULC) and Reference Cost Index (RCI) In Chapter 12 we found that size and type of hospital explained 33% of variance in the unit labour cost per complexity adjusted FCE equivalent patient (where ULC represents the amount of labour used and not the price of labour 21). The Staff MFF also had a modest effect (11%) but not a statistically significant one. The implication of this is that unavoidable factors (that we have identified) account for only one third of variation in unit labour inputs. Chapter 14 used an extended econometric database to combine up to 19 factors to explain variation in ULC and RCI. In keeping with findings from other chapters, the data indicated that (a) the staff MFF is associated with higher hospital costs; (b) most of these higher costs can be linked to specific cost drivers, e.g. teaching status, size, amount of specialist work (all of which are correlated with the MFF) and bed occupancy rates, although these cost drivers have not been parsed between avoidable and unavoidable; and (c) there is a small proportion of variation in costs that can be attributed to the staff MFF alone. As a test of the feasibility of using NHS data to separate out avoidable and unavoidable cost elements it was not successful, as we did not emerge with models that contained high levels of explanatory power. Our best result (R squared = 61%) came from applying empirical parsimony rather than any particular hypotheses.

21 The term 'unit labour cost' here has a different definition to that of the spatial variation discussion of general ledger results described earlier (reference Chapter 4). ULC in Chapter 4 includes a price and volume effect whereas in Chapter 14 ULC measures only the volume of staff resources.
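The 'variance explained' figures quoted in this section come from comparing the R-squared of nested regressions: structural variables first, then the staff MFF added. A minimal sketch on simulated data (the variables and coefficients are invented) is:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 120

    beds     = rng.normal(600, 150, n)       # size (simulated)
    teaching = rng.integers(0, 2, n)         # hospital type dummy (simulated)
    mff      = rng.normal(1.0, 0.1, n)       # staff MFF index (simulated)
    ulc      = 2.0 + 0.002 * beds + 0.3 * teaching + 0.8 * mff + rng.normal(0, 0.4, n)

    def r_squared(y, X):
        """R-squared from an ordinary least squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    r2_structure = r_squared(ulc, np.column_stack([beds, teaching]))
    r2_full      = r_squared(ulc, np.column_stack([beds, teaching, mff]))
    print(f"size and type: R2 = {r2_structure:.2f}; "
          f"adding the staff MFF: R2 = {r2_full:.2f} (increment {r2_full - r2_structure:.2f})")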

Specialty and Hospital Costs To obtain a specialty cost comparison we turned to the comprehensive national HRG unit costs, mapping cost differences as a percentage of the national average cost for the trust (casemix weighted according to HRG) against the MFF index. We found that trusts aggregated in quintile 1 were 8% below their casemix weighted national average cost whereas trusts aggregated in quintile 5 were 20% higher than their national average cost. By summarising trusts at quintile level we mapped an index (Figure 2.3) of 0.92 – 1.20 which resembles the staff MFF quintile mean of 0.91 – 1.18. The graph below maps the staff MFF and the HRG index derived from the unit cost analysis, set around a base of 1. The initial striking feature is the coincidence between the staff MFF index and the HRG index. The implication is that the MFF 'works' in that it reflects cost differentials. However, the circularity element needs to be noted because these cost bases are underpinned by current funding.

Figure 2.3 Staff MFF Index and Trust HRG Cost Index by Quintile Around a Base of 1

Trust quintiles along MFF range      1       2       3       4       5
Staff MFF                           0.91    0.94    0.98    1.04    1.18
HRG Costs                           0.92    0.96    0.97    1.03    1.20
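The 'HRG index' used in Figures 2.3 and 2.4 is, in essence, a trust's actual cost divided by what its activity would have cost at national average HRG unit costs. A minimal sketch, with invented HRG codes, volumes and costs, is:

    import pandas as pd

    national_unit_cost = {"HRG_A": 1_200, "HRG_B": 3_400}     # £ per spell (invented)

    activity = pd.DataFrame({                                  # one trust's activity (invented)
        "hrg":         ["HRG_A", "HRG_B"],
        "spells":      [5_000, 1_200],
        "actual_cost": [6_600_000, 4_300_000],                 # £
    })

    expected = (activity["hrg"].map(national_unit_cost) * activity["spells"]).sum()
    hrg_index = activity["actual_cost"].sum() / expected
    print(f"HRG cost index: {hrg_index:.2f}  (1.00 = casemix weighted national average)")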

Figure 2.4 reflects the same data set as Figure 2.3, but displayed at the level of individual trust rather than quintile in a bivariate relationship between MFF and cost. The adjusted R squared coefficient of determination between the staff MFF and the HRG index is 31%, which is statistically highly significant and explains just under a third of the variation in costs. Figure 2.4 shows us that at the level of individual trust there is considerable (69%) unexplained variation between the cost base and the distance from national average, represented by the HRG index (where 1 = the national average). A multivariate regression analysis which controls for trust type (i.e. acute, teaching and specialist) and London location explains a further 17% of variation, leaving 52% of variance unexplained. Differences between individual trusts are wider than differences between quintiles, which tend to follow the contour of the staff MFF.

Figure 2.4 Staff MFF and Trust HRG Cost Index by Individual Trust

(Chart: Staff MFF and HRG Cost Index plotted for each individual trust, ordered along the MFF range.)

Whole Hospital Costs – Econometric Approach Regression and stochastic frontier analysis were applied at trust level, using whole hospital costs or HRG chapter (specialty) costs. The analyses (Chapter 13) showed that locality is clearly a factor that influences costs, though how it does so could be explained in two very different ways. It could be that the MFF is acting as a locality indicator, suggesting that it is to a great extent performing its function in the resource allocation formula. Alternatively, it could be that the greater income available via the MFF is simply spent, thereby increasing costs. However, there is no evidence from the results reported in this study that this leads to differential cost inefficiency. If that is the case, then the test would be whether higher levels of the MFF merely allow trusts to provide more services to their local populations, which is outside of the scope of this study. The finding of no relative inefficiency may be surprising to those who believe that there is considerable variation across trusts in efficiency, but it should be borne in mind that the econometric analysis is at a high level of aggregation which may dilute variations of efficiency in specific areas.

Summary of Unavoidable Spatial (Labour Market) Variations We have described spatial variations in terms of price and volume and concluded that the price variation is largely unavoidable. The volume variation is marked between trusts and we have identified a proportion that we judge may be avoidable in nursing and, to a lesser extent, in medical staff. These two staff groups (medical and nursing) accounted for 60% of wage costs in our micro sample. At the level of all staff we estimated some potential avoidable cost between high and low MFF trusts. At the total trust cost level we found that, though individual trusts displayed differences in cost behaviour, the broad pattern of variation followed the contour of the MFF. The implication is that, at a high level of aggregation, the MFF is measuring net unavoidable cost differentials and that spatial variations in inefficiency are not apparent.


FEASIBILITY The feasibility of developing a Specific Cost Approach (SCA) to deriving the MFF is discussed below in relation to each data set.

Payroll Data We asked each of the trusts in our sample to supply a copy of their year end payroll for the financial year 04/05 based on an agreed list of fields. The purpose was to analyse the data to identify spatial wage variation, and also to test whether payroll systems provided source data that could adequately support an on-going Specific Cost Approach. The rationale behind the data field specification was that, to generate an SCA, the payroll data would need to be:

a) Readily available - part of the everyday operation of trusts, not requiring special procedures to produce
b) Readily understood
c) Detailed - rather than aggregated
d) Easily analysed - the data could be analysed in its raw form without the trusts needing to manipulate it prior to external analysis
e) Comparable - the data must facilitate inter-trust comparisons

Thanks to the dedication and persistence of the individuals tasked by the trusts to provide the payroll data, we received nine payrolls capable of being analysed and which covered the spectrum of the Staff MFF. Nevertheless, we experienced considerable difficulties in trying to analyse the data, leading us to assume that the payroll is seldom, if ever, analysed by trusts even though it is the source data for a substantial proportion of their cost base.

The first obstacle was availability. Of the fourteen trusts involved, nine could supply a reasonably robust data set on the second or third attempt; of the five remaining trusts, two could not provide any payroll data and three could provide only partial data. The second major problem was the proliferation of classes of pay in each payroll. In one trust we counted 327 classes, which rose to over 400 when coding errors (spelling mistakes) were taken into account. Although the payrolls did provide detail, the lack of a standard classification of pay types made them difficult to understand, time consuming to analyse and awkward to compare. A third problem, evident to a greater or lesser degree in each of the payrolls, was error, e.g. WTE figures which were missing, double-counted or under-counted, or leaving dates which were missing or incorrect.

Perhaps the biggest challenge presented by the data, however, was the use of the new Agenda for Change coding structure, which in some cases appears to have been deployed on a minimalist basis. For example, "Band 5" is used as the label to replace the old categories of "Nurse Grade E", "A&C Scale 6", and "Ancillary Supervisor" among others. Bland grade descriptions give little insight into the workforce characteristics, and so it became necessary to use a combination of codes (a coding book for Job Descriptions plus a coding book for Pay Scales, Cost Centres and Expense codes), increasing the scope for error.

Overall, our experience with the payrolls leads us to believe that, in their present form, they do not possess the key characteristics (a) to (e) outlined above. We conclude that it would not be feasible to use payroll data on a wider basis to develop a Specific Cost Approach to determining spatial variation in costs.
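The kinds of problems described above - proliferating pay classes, missing WTE values and uninformative Agenda for Change band labels - are the sort of checks that had to be run on every extract. The sketch below is illustrative: the field names and records are invented, not a trust's actual payroll layout.

    import pandas as pd

    payroll = pd.DataFrame({
        "employee_id":  [101, 102, 103, 104],
        "pay_class":    ["Basic Pay", "BASIC PAY", "Bank - Nursing", "Overtime"],
        "band":         ["Band 5", "Band 5", "Band 5", "Band 2"],
        "wte":          [1.0, None, 0.4, 1.0],
        "leaving_date": [None, None, "2005-01-31", None],
        "amount":       [21_000, 20_500, 4_200, 1_100],
    })

    # How many distinct classes of pay are in use once trivial coding differences are removed?
    print("distinct pay classes:", payroll["pay_class"].str.strip().str.upper().nunique())

    # Flag records with missing wte, one of the recurring errors found.
    print("records with missing wte:", int(payroll["wte"].isna().sum()))

    # Bland band labels: how many employees share a single Agenda for Change band?
    print(payroll.groupby("band")["employee_id"].nunique())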


General Ledger The general ledger (trust financial system detailing staffing budgets and expenditure at the level of grade within cost centre, e.g. ward) has produced useful data on spatial variation, indicating that wages (cost per wte) increase and that productivity falls with the MFF. We have found that medical staff behave differently from other professional groups, since there appears to be no spatial wage variation at grade level (consistent with the payroll analysis). The spatial patterns we observed were strongest at the level of the trust, and at the more detailed level of specialty began to break down. We aimed at the outset to take the data that was submitted and to work with it, without requesting second or third cuts of data. This was designed to minimise the burden placed upon trusts and to frame the project as a feasibility study. In the course of the analysis it became apparent that there were questions of data quality and comparability. We concluded that the analytic output did not repay the amount of time required to build a robust line of enquiry:

• The main problem was that local coding structures, by their nature, reflect internal budgetary and management structures and do not lend themselves to inter-trust comparison. The obstacles are (a) lack of common specialty labelling, (b) within specialty labels the content varies in terms of ward, outpatient clinics, medical staffing and sub-specialties, e.g. spinal in orthopaedics.

• Specialty analyses are limited because they will not capture indirect costs associated with theatre, laboratories and other central departments and will not adequately account for variation in service models.

• Bottom-up comparisons of trust data lack local acceptance when undertaken externally. There is no confidence that like with like comparisons are being made, so the analysis is unlikely to reach a conclusion that would appeal to stakeholders' sense of fairness or consistency.

• The ledger analysis suggests that differences between service models, i.e. range of provision, are more important than differences within defined specialty areas. This diversity is difficult to quantify and measure from a bottom-up trust comparison of staff costs because it is trying to grasp non-comparable or residual elements.

The advantage of the general ledger-based approach is that it provided a rich data set which afforded insight into local service configuration. It also allowed us to separate staffing from non-pay costs. The disadvantage is that, even with laborious analysis, a comprehensive or universally acceptable approach has proved elusive. These drawbacks led us to the view that an alternative to the bottom-up methodology would need to be adopted if a specialty-based analysis were to be pursued. A set of criteria was identified, building on the payroll criteria above, but incorporating the 'whole cost' dimension of direct and indirect costs and sensitivity to workload measures which are used to standardise costs. Ideally the new approach would be:

• Comprehensive
o covering all specialties
o covering all functions (direct and indirect)
• Credible: locally accepted (i.e. locally generated)
• Generically coded - based on a common national coding structure
• Capable of reflecting casemix and complexity weightings
• In the public domain, allowing comparability between trusts


This critique sets the scene for the analysis of HRG costs, which satisfies these criteria even though it covers all costs rather than just staffing.

HRG Unit Costs HRG unit costs are the basis of the trust Reference Cost Index and are used to formulate the PbR tariff. The costs satisfy the criteria listed above and the analysis provided a close match between what we dubbed the ‘HRG Index’ (i.e. distance from national average casemix weighted costs) and the staff MFF index. On the face of it there would be some merit in using an analysis of HRG costs to dampen the impact of PbR to account for spatial differences. There is a basic problem, however, that would lay this approach open to greater criticism than the current GLM method. Rather than being apparently disconnected like the GLM, this SCA would be rather too connected. It would effectively embed the current spatial variations at quintile level. If estimated and applied at trust rather than quintile level, it would compensate for distances from national average costs, acting as a countervailing force against the PbR mechanism. It would effectively neutralise any economic incentive driven by PbR, by compensating for differences from trusts’ expected or casemix weighted average level, rendering PbR tariff + MFF adjustment into a tautologous relationship that would always sum to the national average. Objection to application of the HRG Unit Cost base in formulating a SCA is therefore conceptual rather than practical.
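The conceptual objection can be shown with a stylised example: if a trust-level adjustment exactly offset each trust's distance from the casemix weighted national average cost, every trust would recover its own cost and its adjusted cost would always equal the national average, neutralising the PbR incentive. The figures below are invented.

    national_average = 1_000                       # £ per casemix-weighted unit of activity

    for own_cost in (850, 1_000, 1_250):           # three hypothetical trusts
        adjustment = own_cost - national_average   # top-up (or clawback) equal to the gap
        income_per_unit = national_average + adjustment
        adjusted_cost = own_cost - adjustment
        print(f"own cost £{own_cost}: income £{income_per_unit} (= own cost), "
              f"adjusted cost £{adjusted_cost} (= national average)")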

Healthcare Commission Data Set of Ward Nurses in England We analysed ward nursing wage costs and workload based on information extracted from the Healthcare Commission’s Ward Nurse Staff data set covering acute trusts in England for the period 2004/5. The data revealed spatial variation in nursing costs in pay (price) and productivity (volume), a proportion of which was identified as avoidable. The analysis has led us to conclude that a specific cost approach can help to describe the impact of the general labour market on nursing staff costs using everyday trust data. It also has limitations, however, which would prevent it from being employed as a general approach. Firstly, the data set on which the analysis was based is a bespoke data base which required a great deal of work on the part of the trusts and the Healthcare Commission to compile. Attempting to recreate this data set on an annual basis, not to mention one that covers all staff groups, would be prohibitively expensive in terms of cost and time. But perhaps the most important limitation is the degree of subjectivity required in the definition of avoidable and unavoidable variances and the impact this definition has on the overall result. In the analysis we have equated “avoidable” with inefficiency, yet there are potential arguments that would identify some element of this inefficiency as unavoidable, resulting from a fragmented labour market, for example, or occasioned by higher turnover, requiring greater use of bank to cover short term vacancies. Overall we believe the approach can be used to help describe, in a general fashion, how costs of care vary with location but it is not suited to defining precisely why such costs vary.

All Staff Extended Data Set Chapter 12 brings together an all staff data base, which at trust level combines medical and non-medical staffing, pay costs and activity adjusted for casemix complexity, volume of A&E and outpatients. The comprehensive data set developed for this econometric analysis takes us closer to the possibility of using NHS-related data to determine a market forces factor, using a Specific Cost Approach (e.g. see Appendix 12.3 for an illustration of how NHS factors such as location and hospital type can be modelled to mimic a staff MFF index).


Use of the data set addresses some of the problems identified earlier (levelled by critics of SCA, Chapter 1), since it has overcome data collection problems, has taken into account both mix of staff and complexity and has standardised against a range of factors, providing a rationale for avoidable and unavoidable costs (based on structure, geography, price and volume variances). It also addresses the criticism that SCA is based on local labour markets rather than the economy as a whole, because this model takes a pan-England approach. The argument of perverse incentives at the macro level also appears weak, since trusts are too far away from the level of analysis for this MFF calculation to provide an incentive that would override the other major targets and incentives built into NHS performance management systems. The major criticism which cannot be refuted is the assertion that SCA takes account of what is spent rather than what needs to be spent.

One obstacle to the feasibility of this approach comes from recent developments in NHS organisational structures which promise to limit the amount of data available. We started with a sample of 173 trusts and finished by using 127 trusts due to gaps and inconsistencies in the data. The main gap was caused by the absence of Foundation Trust financial information. Special data collection systems would need to be put in place to circumvent this since in the future the number of Foundation Trusts will increase.

Econometric Model Building – Theory Driven As an exploratory study of feasibility, the analyses reported in Chapter 13 successfully demonstrate the problems with conducting a definitive econometric study in this area:

• There does not appear to be a national data set readily available that would allow us to resolve the data deficiencies.

• Taking account of case mix is very important, but it is unclear how this should be done. From an economic and econometric point of view, the appropriate method is to estimate a multiple output model, but the analyses reported here demonstrate that this approach is totally compromised by the reduction in degrees of freedom and the loss of precision caused by multicollinearity. Amongst the single output methods, the Casemix Adjusted Average Cost/Case Mix Adjusted Output (CMAC/CMAO) model is theoretically the best, but it is arguable that it was out-performed by a more ad hoc model that took account of case mix by inclusion of a case mix index that has no real economic meaning in the context of cost functions; moreover, other models with less justifiable combinations of unadjusted and adjusted variables performed equally well.

• The Cobb-Douglas specification appears to work well (a minimal sketch follows at the end of this subsection), but should really be replaced by a flexible functional form such as the translog. An earlier analysis of NHS data using the translog (Scott and Parkin, 1995) found this to be promising but compromised by problems with NHS data; it is likely that NHS data will have improved sufficiently to make this feasible. However, this was not tested in the MFF study because of the potential overload of variables compared with sample size in the multiple output models. This would remain a problem for the future as the sample size of NHS Trusts is unlikely to increase significantly; panel data methods might be used but these do require additional assumptions about behaviour over time.

In terms of feasibility, therefore, we concluded that an econometric approach would not be appropriate without improvements in data quality, particularly relating to input price and capital measures. A range of models was developed and tested, the strongest of which, in terms of explanatory power, was driven by empiricism rather than theory. There was not a single model that emerged above others that we would recommend for application.
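For readers unfamiliar with the specification, a Cobb-Douglas cost function is linear in logarithms and can be estimated by ordinary least squares. The sketch below uses simulated data and a single output and input price, so it is a toy version of the models discussed in Chapter 13, not a reproduction of them; a translog form would add squared and cross terms of the logged regressors.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 150

    output = rng.lognormal(mean=10, sigma=0.4, size=n)    # casemix adjusted activity (simulated)
    wage   = rng.lognormal(mean=3, sigma=0.1, size=n)     # input price, e.g. cost per wte (simulated)
    cost   = 5.0 * output**0.9 * wage**0.6 * np.exp(rng.normal(0, 0.15, n))

    # ln(cost) = a + b*ln(output) + c*ln(input price) + error
    X = np.column_stack([np.ones(n), np.log(output), np.log(wage)])
    beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    print(f"output elasticity: {beta[1]:.2f}, input price elasticity: {beta[2]:.2f}")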


CONCLUSION We draw together the themes discussed above into a set of conclusions related to the project brief, which was to identify spatial variation in cost drivers, distinguish avoidable and unavoidable components, and determine whether a Specific Cost Approach would present a feasible alternative to the current GLM measure of the staff MFF.

Circularity Circularity is an interpretative problem that makes it difficult to separate cause and effect. High MFF trusts attract higher levels of funding through the GLM-based staff MFF. It is difficult to disentangle the extent to which this is a response to labour market necessity or the cause of higher budgets and therefore higher costs and lower productivity. Earlier on we used public choice theory as a counter-argument to labour market economic theory, suggesting that any extra resources would be spent and therefore by definition the MFF would match expenditure on staff. Weaknesses in the argument, however, should be noted. First, whilst it is true that break-even budget organisations do have incentives to spend all they get, there have been moves to allow trusts to keep surpluses, so there are some incentives not to do this. Second, even if they do spend it all, in principle we should be able to find it in avoidable expenditure. One of the main findings of this review is that empirically distinguishing avoidable and unavoidable costs is very difficult, but nevertheless, the point still remains. Third, the MFF used to be passed to hospitals via the PCTs and health authorities which, it may be argued, had nothing to gain by allowing feather-bedding. Finally, we might not expect trusts that were spending more because they were given more to have higher staff turnover. The greater expenditure should stop this. The fact that we find high turnover strongly correlated with MFF rather goes against the idea that trusts are overpaying because they are given more to spend. Higher costs manifest themselves as unwelcome side effects of not paying the going rate (turnover, agency dependence, poorer quality), so that it would seem that hospitals are not gaining much from the extra cash flow.

The data itself points to how we might break out of the circularity argument. We have found that at a broad level (segmenting the country into quintiles) cost behaviour follows very closely the contours of the staff MFF. However, at the level of individual trust, this is not the case. It is much more difficult to find spatial patterns in trust performance because some spend more than expected and some spend less. At trust level it is reasonable to attribute variation to factors such as management capabilities or historical developments of clinical services and technology (sometimes described as 'path dependence', e.g. David, 2000). The observation that management capabilities furnish trusts with a degree of control over their destiny (or at least their spending patterns) helps to eliminate the argument of circularity, i.e. that cost structures are entirely a product of funding patterns. Once we have weakened the circularity argument, then the coincidence between cost behaviour and the staff MFF index at quintile level is all the more striking. Within a margin of up to +/-2% (judgements derived in the benchmarking exercise, Chapter 12), the MFF index can be regarded as accounting for unavoidable spatial variation in costs.
Connectedness The dominant criticism of the MFF is its lack of connectedness with the NHS, querying why private sector pay rates should bear any relationship to NHS costs. The strength of evidence to emerge in favour of the link between the NHS and private labour market has been one of the surprises of this study. Palatable or not, the economic theory works in its prediction that in low wage areas NHS staff will be paid above the going rate, turnover and vacancies will be lower and quality and productivity will be higher. In high wage areas the converse will be true: NHS staff will be paid below the going rate, turnover and vacancies will be higher, while productivity and quality are lower, accompanied by higher use of substitute (bank and agency) staff. We have gathered enough empirical evidence to sustain the prediction. We went further and tried to recreate the MFF, using only NHS data, as a test of connectedness between private and public sectors with passable results (Appendix 12.3). It was also possible to draw a neat statistical relationship between the MFF and two variables, the average cost of NHS labour and nursing turnover rates by trust, giving a well specified model that predicted 62% of the movement in the staff MFF. As a result, we were able to conclude that employers in NHS organisations in high MFF areas are responding to signals in the non-public labour market by spending more on their staff and being forced to accept higher rates of turnover. The staff MFF is highly connected to NHS labour market factors.

Timing The general labour market, we conclude, is relevant to the NHS but the relationship is not necessarily in equilibrium. Mismatches within and between the two will occur due to lack of synchronicity (timing of economic cycles, NHS funding injections, workforce initiatives, etc), prompting tensions and movement. Agenda for Change is a good example where wage differentials within the NHS will alter, changing the relationship between the NHS and private labour market. The policy will narrow spatial variation in wages and therefore remove some of the labour market flexibilities the NHS can use to respond to external labour markets, though this is counterbalanced by explicit options to introduce recruitment and retention premia. At this stage we have limited evidence of the actual impact of AfC on trusts' cost base. House prices will have an immediate effect on the cost of amenities but a time-lagged relationship with wage rates and the staff MFF. As price differentials shift between areas, we cannot predict how long the labour market (and therefore the MFF) will take to respond. Perceptions of trust staff relayed through interviews related to current dynamics, but the staff MFF draws on historic earnings data so there will always be a time gap between active market forces and the MFF.

Cost Behaviour Broadly, cost behaviour was found to be aligned to the MFF. Spatial wage (price) variation was clear and quantified and, for the most part, regarded as unavoidable. Productivity (volume) differences were more difficult to interpret as being avoidable or unavoidable. We could not discount the possibility that apparent inefficiencies could be unavoidable, resulting from a fragmented labour market, or occasioned by higher turnover and extensive use of substitute staff. Nevertheless, after adjusting for casemix and hospital type, we estimated a proportion of nursing and medical staffing volume differences that could be considered avoidable, on the basis of best practice benchmarking, supported by regression analysis. Quintile 1 (lowest MFF) trusts were consistently the most productive or efficient in the sample of hospital trusts and evidence throughout our separate studies suggested that the MFF range should be marginally narrower and so slightly flatter, dampening the distance between Q1 and the rest.
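The two-variable relationship described under Connectedness above is an ordinary regression of the staff MFF on the average cost of NHS labour and the nursing turnover rate by trust. The sketch below simulates data with a broadly similar structure purely to show the form of the model; it does not use, or reproduce, the study's 62% result.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 170

    avg_labour_cost = rng.normal(33_000, 3_000, n)    # £ per wte (simulated)
    turnover        = rng.normal(0.15, 0.03, n)       # nursing leaver rate (simulated)
    mff = 0.2 + avg_labour_cost / 60_000 + 1.5 * turnover + rng.normal(0, 0.04, n)

    X = np.column_stack([np.ones(n), avg_labour_cost, turnover])
    beta, *_ = np.linalg.lstsq(X, mff, rcond=None)
    fitted = X @ beta
    r2 = 1 - ((mff - fitted) ** 2).sum() / ((mff - mff.mean()) ** 2).sum()
    print(f"R-squared of the two-variable model: {r2:.2f}")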
Medical Staff
Doctors emerged as a distinct staff group both in terms of (a) labour market pricing, since spatial variations in pay and vacancy indicators did not conform to the staff MFF, and (b) volume differences, since the medical workforce is subject to structural constraints that trusts have little control over and that are therefore unavoidable. Medical education and the location of training (which continues to be London-centric) is key to this. One of the clearest findings to emerge from this review was the distinctiveness of the medical workforce and its mismatch with the general labour market (GLM), indicating that there was a case for breaking the link between the GLM and medical costs.

If doctors were to be excluded from the general labour market-based MFF then, it follows, an option would be to fund the staff group through a Specific Cost Approach. This is an option, rather than a necessity, since the distribution of medical staffing is broadly consistent with MFF geographic patterns, even though not directly caused by labour market factors. We have not spelled out what a SCA for doctors would look like. Further work would be needed to devise a SCA solution for medical staff that fully takes into account structural constraints and their interaction with existing funding streams, i.e. SIFT and MADEL. We have identified productivity deficits in high MFF areas but, by analysing the workforce planning constraints in the balance between junior doctors and consultants (where, for example, reductions in junior doctors' hours stimulated service demand for more junior doctors and consequently more consultants to train and supervise them), we have described these deficits as largely unavoidable (at a spatial rather than trust level). The productivity differences, perhaps surprisingly, have not been depicted as an equity problem. Any inequity is further upstream, stemming from the uneven distribution of medical students and junior doctor placements.

Within the NHS the main criticisms of the MFF have been (i) scepticism over the validity of a general labour market index, (ii) inequity caused by the scale of redistribution due to a wide minimum-maximum range of 0.85 – 1.28 (a 51% difference between the two) and (iii) inequity caused by cliff edges between neighbouring trusts. Exclusion of medical staff from the MFF would go some way towards addressing the first two of these problems; it would reduce the scale of redistribution between low and high MFF areas. (The effect could be more apparent than real, though, since the SCA might emulate the MFF pattern.) The third problem, cliff edges, has nothing to do with medical staff (or staff generally) and would need to be addressed by separate adjustments to the MFF in the future, through enhancements to the existing GLM methodology and improved 'smoothing' between geographical areas.

Sensitivity of Conclusions to Agenda for Change
We have found that objections to the GLM-based staff MFF have been answered through a combination of theoretical, fieldwork and other empirical findings. On this basis we conclude that there is no case for introducing a Specific Cost Approach for non-medical staff. The remaining challenge to the MFF's validity, then, comes from future change and, in particular, the potential impact of Agenda for Change, which will narrow spatial wage variations. The Reference Panel, for example, indicated that grade mix differentials between high and low MFF trusts were being eroded through job evaluation. Analytic and empirical responses to the challenge are as follows:

• The 'grade drift' element of pay differentials, we have found, is surprisingly small. The analysis of nursing staff (HCC, chapter 9) showed an 18.3% difference in pay between quintiles 5 and 1, comprising 10.7% geographical allowances, 6.3% grade mix associated with workload complexity, and 1.3% grade drift. The first two factors, accounting for most of the difference in pay, will not be affected by AfC.

• The productivity (volume) differential between high and low MFF trusts, we have found, is larger than the pay (price) differential. The HCC nursing analysis identified a 35% difference between quintiles 1 and 5, largely accounted for by the use of substitute staff, i.e. bank and agency. Agenda for Change's impact will be on price rather than volume, so it will not directly affect this productivity differential.

• Across the all staff data base (chapter 12) we found a similar price variance (to nursing, chapter 9) and a smaller volume variance. Benchmarking judgements produced an estimate of +1.8%/-1.7% 'avoidable' cost (caveated by uncertainty about what is truly avoidable or not). The point to note is that these are small percentages, suggesting that most spatial variation is unavoidable.

• The MFF reflects standardised spatial wage differentials between geographical areas based on private sector pay only. It does not reflect the difference between the NHS and the private sector. If Agenda for Change is found to have had a general inflationary impact on pay awards, then this will affect the relationship between the NHS and the local labour market but will not have a direct bearing on the external SSWD underpinning the MFF.

• AfC will not have an immediate direct impact on the private sector labour market, and so the existing tensions between the NHS and 'outside' will continue to operate, resolved through a combination of geographical allowances, vacancies, turnover and high volumes of substitute staff.

• The theory predicts that formal and informal responses will countervail any change in the differential between NHS and 'outside' labour markets. The formal response will take the form of recruitment and retention premia. Informal responses will be apparent through higher turnover and vacancies in tight labour markets, with a tendency to weight the job to attract a higher grading by, as one trust put it, "making more of a meal of the definition of these jobs than we might if we had a buoyant labour market" (Chapter 6, question 1 (d)). The opposite tendencies would be operating in areas with an over-supply of labour.
In conclusion, Agenda for Change will not undermine the principles underpinning the staff Market Forces Factor. AfC will counter grade drift but, as demonstrated here, grade drift has a relatively small effect on the cost of labour. Geographical allowances, workload complexity and productivity differences have the biggest effect, and these will not be affected by AfC.

Feasibility
The chronology of the SCA review was structured into three broad phases: (i) micro study, (ii) feedback to trusts, (iii) development of macro data sets, drawing on findings from the micro study and feedback. Analysis of price/volume variation in spatial costs developed over the course of the study, informed by the limitations and possibilities of the data. We concluded that it was not feasible to use detailed local data sets to generate a full SCA and that the best specialty-based cost data nationally is the HRG unit cost set used for Reference Costs, which has been developed over a period of some years. We succeeded in drawing together a large trust-based data set that combines workforce, workload and costs into a single model together with geographical and structural variables. This could be replicated on an annual basis but would be increasingly difficult to accomplish as the number of Foundation Trusts (which are not required to make standard Trust Financial Returns) increases. At national level there is no single data source, nor a unified methodology, that would fulfil the requirements of a price and volume SCA.

The SCA study has provided valuable insights into cost behaviour. We found reassuring consistency when we triangulated results, e.g. between interview perceptions and national workforce data; between HCC nursing data and micro study financial systems for price differences; and between HCC, general ledger and medical census data for volume differences. The study has been instructive in showing how the NHS labour market and the staff MFF are connected. We have nevertheless identified theoretical and practical problems that would limit the value of a SCA. Following this detailed review we do not recommend that the SCA should replace the GLM methodology.

SECTION B. THE MICRO STUDY

Section B describes the micro component of the research programme. Unlike the macro analysis of national data sets, the micro study was process-dependent, requiring participation of trusts in supplying a data set and contributing to the study. The purpose of the exercise was to explore (a) spatial variation in staff costs, (b) avoidable/unavoidable costs, and (c) feasibility of applying a Specific Cost Approach to the staff MFF based on local data sets.

CHAPTER 3.

DESIGN OF THE MICRO STUDY

The first stage in the micro study was design of the data specification and selection of a sample of trusts. A process of trust-recruitment followed and the data specification was finalised in discussion with senior trust finance staff.

Data Specification
Three data sets were collected. The first was a 2004/5 year-end summary of staffing and cost data from trust general ledger systems, defining grade and type of staff within cost centre (work area). The second was an output from the payroll system, showing pay components at the level of payroll number 22. The third type of data was qualitative, drawn from interviews with HR and Nursing Directors in the participating trusts. The aim was to minimise the burden on trusts, partly out of pragmatism, since finance departments were stretched in closing their year-end and planning budgets for 2005/6, and partly to test the feasibility of using raw output from trust systems.

22 Individuals could not be identified through this process.

Sampling
The following sampling criteria were adopted in selecting trusts:

• MFF Range and Rankings, selecting providers at the extremes of the range and then, within this, a spread of low – medium – high MFF rankings. The staff MFF is based on pay zones linked to PCT boundaries. The national ranking largely reflects a north-south and urban-rural (east-west) divide, with London at the top of the range. The lowest 2005/6 staff MFF score is 0.85 (South Devon Health Care NHS Trust) while the highest is 1.28 (Moorfields Eye Hospital). The overall MFF, including the impact of land and buildings, ranges from 0.89 (South Devon Health Care NHS Trust) to 1.29 (St Mary's, Paddington). St Mary's has a high land-value MFF which drives up its overall ranking. Our sample has a staff MFF range of 0.86 – 1.28 and displays similar measures of centrality (i.e. mean and median) to the national range.

Table 3.1 Comparison of National and Micro Sample MFF Range
STAFF MFF 2005/6            | NATIONAL | SAMPLE
Minimum                     | 0.8514   | 0.8640
Range (max-min difference)  | 0.4312   | 0.4158
Mean                        | 1.0102   | 1.0386
Median                      | 0.9805   | 0.9802
Maximum                     | 1.2826   | 1.2799

• Clusters, selecting adjacent trusts. The 'cliff edge' effect is one of the major criticisms of the GLM method, caused by providers being located in different pay zones, attracting different MFF weightings, but being geographically close and so drawing from the same labour market. We drew together two clusters of trusts, one in the south west of England and another in central London. The third group of trusts was spread across the north of England, so it did not represent adjacent trusts but did have some common labour market characteristics.

• Motivation. All trusts that had made representations to the Department of Health through correspondence about the MFF were invited to participate (although not all took up the invitation).

• Access. For pragmatic reasons, we appealed to colleagues and associates with senior positions in the NHS for access to their trusts.

• Feasibility. The short time frame available for the study (starting in January 2006 with preliminary results by the end of April) meant that 10-14 trusts was the maximum sample size envisaged as feasible to handle.

In the event we drew together a sample of 14 participating trusts. Each trust within the sample satisfied at least one of these criteria. Table 3.2 summarises the participating organisations together with details of MFF score and criteria-fit. Table 3.1 indicates that the sample matches the distribution of staff MFF scores across England.

Process
The micro study was launched in January 2006 through communication with trusts, recruitment of a sample, development of a data specification, amendment and agreement of the data specification through face to face meetings with trust finance directors (or through telephone interviews in two cases) and collection of the data. The payrolls were analysed in February 2006 and, where necessary, resubmitted by trusts. In March the general ledger was analysed in terms of (a) staff group, (b) specialty, considering maternity (with the aim of continuing the enquiry to other specialties, e.g. ophthalmology and orthopaedics), and (c) medical staff by specialty. Unit labour costs were generated for trusts, based on admissions and FCEs, and compared against their MFF ranking. The qualitative survey was piloted in February and administered in March/April through face to face and telephone interviews, based on a structured questionnaire.

Throughout the period January – April we were interrogating national data sets and comparing the results with the output of this micro study. The Healthcare Commission (HCC, formerly Audit Commission) set of nursing staff in wards across England was a useful database which allowed us to test the generalisability of our micro sample findings. At the same time, issues that were emerging from discussion with micro sample trusts were fed into the HCC enquiry. These included questions of geographical variation in quality and of housing costs as a primary driver of labour market pressures.

The micro sample trusts participated in a Reference Panel on 18th May. This marked the end of the initial phase of analysis, the findings of which were presented for scrutiny. We also wanted to receive views on the distinction between avoidable and unavoidable cost components of geographical difference. Feedback from the Reference Panel trusts determined the direction of the second phase of analysis, which was conducted throughout June and for the remainder of the project.

Table 3.2 Sample of 14 Trusts in the Micro Study

Trust No. (Based on Sample MFF Ranking) | Staff MFF 2005/06 | National Ranking (out of 232 Trusts) | National Ranking % | MFF Range (High, Low, Mid) | Cluster (Adjacencies, Cliff Edge) | Geography | Motivation, Access
1  | 0.8640 | 2   | 1%  | Very Low  |           | South West | Motivation
2  | 0.9191 | 28  | 12% | Low       | Cluster 2 | South      |
3  | 0.9220 | 34  | 15% | Low       |           | North      | Motivation
4  | 0.9408 | 68  | 29% | Low       |           | North      | Access
5  | 0.9511 | 82  | 35% | Low       |           | South West | Access
6  | 0.9561 | 86  | 37% | Low       |           | North      | Access
7  | 0.9791 | 115 | 50% | Mid       | Cluster 2 | South West | Access
8  | 0.9814 | 118 | 51% | Mid       | Cluster 2 | South West | Motivation
9  | 1.0037 | 133 | 57% | Mid       |           | South      | Access
10 | 1.1521 | 205 | 88% | High      | Cluster 1 | London     | Access
11 | 1.1823 | 215 | 93% | High      | Cluster 1 | London     |
12 | 1.1999 | 218 | 94% | High      | Cluster 1 | London     |
13 | 1.2087 | 219 | 94% | High      | Cluster 1 | London     |
14 | 1.2799 | 230 | 99% | Very High | Cluster 1 | London     |

CHAPTER 4.

GENERAL LEDGER

The analysis of the general ledger is presented here, firstly by outlining the data and approach and then by summarising the results at staff group and at specialty level. Maternity and orthopaedics are the two specialties explored in detail.

DATA AND METHODS
All trusts in the sample (n=14) supplied a database containing staff costs at the level of grade within cost centre (work area) as well as non-pay expenditure. The general ledger reflects local management and clinical structures. It shows staff (e.g. nurse Grade G) within cost centre (e.g. a named general surgery ward) within specialty or department (e.g. general surgery) within directorate (e.g. the surgical directorate). The database related to financial year 2004/5, selected to screen out the impact of Agenda for Change, which was implemented at varying rates throughout 2005/6 (with some impact in 2004/5).

Staff Group Coding. A common coding structure was applied by assigning six broad staff group headings: (i) medical, (ii) nursing, (iii) scientific, technical and therapeutic (ST&T), (iv) managers, (v) administrative & clerical (A&C), (vi) ancillary, works and maintenance. ST&T was the most diverse group, including occupational therapy, physiotherapy, dietetics, speech therapy, radiography, pharmacy, scientific, laboratory and technical staff. The ledger contained budgeted and actual staffing and costs for 2004/5. Actual staffing wte was the least consistently defined data field, with different headings provided by each trust, including average for 12 months, actual for the final month (month 12), worked wte (which usually related to staff in post plus bank and agency), contracted wte (which usually related to substantive staff in post) and paid wte. Worked wte was the field selected to represent actual staffing across the sample.

Local Coding of Specialties. Specialties had been assigned by trusts as part of their departmental structure. There is no requirement or necessity for trusts to adopt common coding structures, just as there is no requirement to adopt common service models. The approach in this exercise was to select specialties, e.g. maternity, on the basis of trust codes and to look in detail at the cost centres or activities contained within them.

Workload at trust level was measured by admissions, on the basis that this currency is readily available (HES 2004/5) and understood, offers reasonable consistency and comprehensiveness if the balance within specialty between episodes and outpatients is similar between hospitals, and screens out the impact of in-take specialties where a patient may be admitted by a general physician and then referred to a specialist, increasing the ratio of FCEs to admissions. (Appendix 4.4 shows this ratio.) In the analysis of medical staff we used FCEs to measure workload, as this represents the throughput to each specialty. HES and HRG workload data were used to capture births as a measure of workload for maternity, to supplement locally provided data.

Figure 4.1 Hospital Model Linking Cost Type with Clinical Area and Staff Group, Showing % of Actual Total Wage Cost in Each Staff Group (Based on Micro Sample of General Ledgers 2004/5)

[Figure 4.1 is a schematic model of the hospital. It maps types of service (front of hospital, ambulatory care, in-patient, central clinical support, site services and corporate functions) and types of cost (direct clinical, indirect clinical, site overhead and corporate overhead) to clinical areas (A&E, outpatient clinics, day surgery, wards, critical care, theatres, pathology, imaging, therapies, pharmacy, domestic services, catering, CSSD, works and maintenance, estates, finance, HR and information) and to the main staff group located in each area. The percentages of actual total wage cost shown against the staff groups correspond to those in Table 4.1: medical 30%, nursing 36%, scientific, technical & therapeutic 15%, administrative & clerical 11%, management 4%, and ancillary, works & maintenance 4%.]

The Purpose
The aim of the study was to investigate staffing cost variation at three levels of analysis:
• Cost per wte (wage cost)
• Wte per workload measure (productivity)
• Staff cost per workload measure (unit labour cost)

We intended to analyse trusts across the two dimensions of staff group and specialty. Maternity was selected in the first place because it had clearly defined boundaries (unlike, for example, the distinction between general medicine, cardiology and care of the elderly) and because it was comprehensive (avoiding hospital-community substitution, since all services were resourced and organised from the hospital base). Orthopaedics was also selected, again because of relatively clear and identifiable specialty boundaries. If the approach proved productive we intended to go on to examine ophthalmology and then more complex specialties such as cardiology. Every aspect of the process was used to inform the feasibility of (a) formulating a Specific Cost Approach and (b) rolling it out nationally. Quality, completeness and data consistency were features that would contribute to the enquiry.

The Approach
Specialty analysis based on trust general ledger codes tends to measure direct service costs, omitting indirect clinical costs (laboratories, allied health professionals, radiology, pharmacy), indirect non-clinical costs (domestic, catering, site costs) and corporate overheads (finance, information and other central functions). Direct service costs are predominantly nursing and medical staff. (Since we have found in our sample that these comprise 36% and 30% respectively of the trusts' pay bill, this indicates scope to measure up to two thirds of the full average unit labour cost.) Trusts use locally devised allocation and apportionment algorithms to convert direct and indirect costs into unit costs that link into national systems, e.g. Healthcare Resource Group (HRG) unit costs. Staff costs were standardised against a measure of workload to produce a series of unit labour costs or productivity ratios, as sketched below. There are two potential approaches: (i) by staff group, or (ii) by function or specialty, based on either (a) a direct cost or (b) a full average cost approach.
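The sketch below illustrates the three ratios on a toy general-ledger-style extract. The pandas layout and the column names (trust, staff_group, actual_wte, actual_cost, admissions) are illustrative assumptions rather than the study's actual data structure, and the figures are placeholders.

```python
# Illustrative computation of the three unit ratios used in the general ledger
# analysis: cost per wte, wte per 1,000 admissions and cost per 1,000 admissions.
# The small table below is placeholder data, not figures from the study.
import pandas as pd

ledger = pd.DataFrame({
    "trust":       ["A", "A", "B", "B"],
    "staff_group": ["Nursing", "Medical", "Nursing", "Medical"],
    "actual_wte":  [1200.0, 340.0, 900.0, 260.0],
    "actual_cost": [36_000_000, 28_000_000, 25_000_000, 20_000_000],  # £ per year
})
admissions = pd.Series({"A": 65_000, "B": 48_000}, name="admissions")

summary = ledger.groupby(["trust", "staff_group"]).sum(numeric_only=True)
summary = summary.join(admissions, on="trust")

summary["cost_per_wte"] = summary["actual_cost"] / summary["actual_wte"]
summary["wte_per_1000_adm"] = summary["actual_wte"] / (summary["admissions"] / 1000)
summary["cost_per_1000_adm"] = summary["actual_cost"] / (summary["admissions"] / 1000)

print(summary[["cost_per_wte", "wte_per_1000_adm", "cost_per_1000_adm"]].round(1))
```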

RESULTS
The results are presented for (a) staff groups across the organisation, (b) medical staff by specialty and (c) maternity plus orthopaedics, testing the feasibility of undertaking specialty analyses.

Staff Groups
The average proportion of staffing and salary costs across the trust sample is summarised below. There are underlying differences in the balance of staff groups between trusts, e.g. in ancillary, due to variable patterns of contracting out, and in management (where we know that at least one trust has omitted its central management functions such as finance and IT).

Table 4.1 Distribution of Actual wte and Costs Across the Sample
Staff Group                         | % of Trust Actual wte | % of Trust Actual Wage Cost
Medical                             | 12%  | 30%
Nursing                             | 43%  | 36%
Scientific, Technical & Therapeutic | 17%  | 15%
Administrative & Clerical           | 17%  | 11%
Management                          | 3%   | 4%
Ancillary, Works & Maintenance      | 8%   | 4%
Total                               | 100% | 100%

Three variables were analysed in relation to the MFF: cost per wte, wte per 1,000 admissions and cost per 1,000 admissions. At the budgeted cost level it was possible to use the full sample of 14 trusts. At the actual cost level, because we do not have an actual wte field in Trust No. 14's ledger, it was necessary to consider the sample of 13 trusts excluding Trust No. 14.

Budgeted cost per wte had a more consistent pattern than actual cost per wte among medical and nursing staff. The general finding was that London (i.e. Trusts No. 10-14) is ranked highest in terms of cost per wte, showing a positive association between MFF score and the cost of employing staff. This finding was broadly repeated for the productivity measure, wte per 1,000 admissions. These two inputs, of high cost per member of staff and high usage of staff, contributed to a strong positive relationship between unit labour costs (cost per 1,000 admissions) and the staff MFF. Trust 10 (the only non-teaching hospital in the London cluster) had higher productivity than the other London trusts (i.e. lower wte per 1,000 admissions), but its high cost per wte restored its ranking at the total cost per 1,000 admissions level. There did not appear to be a strong geographical pattern among non-London trusts. London appears to be more expensive and less productive than non-London hospitals. The analysis does not explain why, but simply serves to highlight these patterns.

The main findings to emerge are that:
• Medical staffing cost per wte has a weak spatial pattern overall;
• Management cost per wte has a weak spatial pattern;
• There is strong spatial variation in the cost per wte for nursing, scientific, technical & therapeutic, administrative & clerical, and ancillary, works and maintenance staff;
• Spatial variation in productivity, i.e. wte per 1,000 admissions, is strong for all staff groups except management and ancillary, works and maintenance;
• Spatial variation in unit labour costs is strong for all groups except ancillary, works and maintenance;
• The gap between budget and actual in the same trust is consistently narrower than the gap between budget and budget across different trusts 23.

The relative strength of these patterns is conveniently summarised by the R2 measure of goodness of fit between the staff MFF and the three variables analysed here (cost per wte, wte per 1,000 admissions and cost per 1,000 admissions), shown in Table 4.2.

23 This is consistent with the suggestion in Chapter 2 that managers aim for financial break-even, predicting a match between expenditure and budget within a narrow ±% tolerance.
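To indicate how figures of this kind can be derived, the sketch below computes the R2 between the staff MFF and one unit cost ratio separately for each staff group. It is a minimal illustration on synthetic placeholder data with assumed column names; it is not the calculation that produced Table 4.2.

```python
# Illustrative R-squared of cost per 1,000 admissions against the staff MFF,
# computed separately for each staff group. Placeholder data and column names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
groups = ["Medical", "Nursing", "ST&T", "A&C", "Management", "Ancillary"]
rows = []
for trust in range(14):
    mff = 0.86 + 0.42 * trust / 13                        # spread of MFF scores
    for g in groups:
        cost = 1_500_000 * mff + rng.normal(0, 150_000)   # £ per 1,000 admissions
        rows.append({"trust": trust + 1, "staff_group": g,
                     "staff_mff": mff, "cost_per_1000_adm": cost})
data = pd.DataFrame(rows)

def r_squared(df: pd.DataFrame) -> float:
    # For a simple linear fit, R2 equals the squared correlation coefficient.
    return df["staff_mff"].corr(df["cost_per_1000_adm"]) ** 2

print(data.groupby("staff_group").apply(r_squared).round(2))
```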

Table 4.2 R2 Measure between Variables & MFF at Staff Group Level

                                    | Average Cost £ per wte        | wte per 1,000 Admissions      | Total Staff Cost per 1,000 Admissions
                                    | Budget | Budget | Actual      | Budget | Budget | Actual      | Budget | Budget | Actual
                                    | n=14   | n=13   | n=13        | n=14   | n=13   | n=13        | n=14   | n=13   | n=13
Medical                             | 40%    | 47%    | 0%          | 83%    | 77%    | 76%         | 85%    | 81%    | 80%
Nursing                             | 82%    | 81%    | 40%         | 61%    | 48%    | 61%         | 83%    | 78%    | 78%
Scientific, Technical & Therapeutic | 73%    | 72%    | 86%         | 51%    | 57%    | 59%         | 68%    | 73%    | 71%
Admin & Clerical                    | 81%    | 79%    | 84%         | 59%    | 52%    | 49%         | 77%    | 73%    | 71%
Management                          | 25%    | 14%    | 15%         | 45%    | 42%    | 32%         | 66%    | 60%    | 54%
Ancillary, Works & Maintenance      | 79%    | 71%    | 82%         | 11%    | 42%    | 8%          | 2%     | 0%     | 0%
Grand Total                         | 84%    | 82%    | 77%         | 56%    | 51%    | 56%         | 82%    | 78%    | 77%

Table 4.3 Unit Cost Ratios Ranked by Actual Cost per 1,000 Admissions – All Staff

Trust No. (MFF Rank) | Average Cost per wte Budget | Average Cost per wte Actual | Budget wte per 1,000 Admissions | Worked wte per 1,000 Admissions | Total Budgeted Cost per 1,000 Admissions | Total Actual Cost per 1,000 Admissions
5  | £28,246 | £27,991 | 44 | 44 | £1,241,618 | £1,219,678
1  | £30,843 | £31,663 | 40 | 40 | £1,234,673 | £1,256,152
6  | £30,922 | £31,813 | 50 | 49 | £1,547,875 | £1,558,870
7  | £31,283 | £31,625 | 50 | 49 | £1,559,534 | £1,559,826
3  | £29,082 | £32,428 | 56 | 51 | £1,626,022 | £1,643,393
9  | £30,388 | £33,230 | 55 | 50 | £1,658,676 | £1,653,963
4  | £33,492 | £35,900 | 50 | 48 | £1,684,151 | £1,721,002
2  | £28,841 | £30,123 | 59 | 58 | £1,694,255 | £1,740,644
8  | £30,696 | £31,010 | 57 | 56 | £1,747,704 | £1,742,704
10 | £39,092 | £39,673 | 47 | 47 | £1,845,611 | £1,835,456
12 | £45,210 | £44,415 | 64 | 65 | £2,887,008 | £2,890,867
11 | £38,002 | £38,912 | 73 | 75 | £2,781,039 | £2,903,595
14 | £42,167 | n/a     | 72 | 0  | £3,050,904 | £3,028,638
13 | £40,356 | £40,338 | 86 | 87 | £3,473,238 | £3,524,914

Table 4.4 Summary of Spatial Variation in Unit Cost Ratios

Staff Group | London / non-London | Average Pay per wte (Budget) | Mean of Average Cost per wte (Actual) | Budgeted wte per 1,000 Admissions | In-post + Bank + Agency wte per 1,000 Admissions | Pay Budget per 1,000 Admissions | Pay Actual per 1,000 Admissions
1 Medical | London | £89,663 | £103,208 | 10 | 9 | 876,441 | 895,029
1 Medical | non-London | £82,644 | £86,987 | 5 | 5 | 442,557 | 449,647
1 Medical | % London Uplift | +8% | +19% | +83% | +68% | +98% | +99%
2 Nursing | London | £35,524 | £38,149 | 28 | 26 | 984,797 | 974,123
2 Nursing | non-London | £26,421 | £27,929 | 22 | 21 | 585,147 | 594,557
2 Nursing | % London Uplift | +34% | +37% | +25% | +20% | +68% | +64%
3 Scientific, Technical & Therapeutic | London | £36,133 | £44,583 | 13 | 11 | 462,187 | 476,544
3 Scientific, Technical & Therapeutic | non-London | £27,801 | £28,791 | 8 | 8 | 227,540 | 228,076
3 Scientific, Technical & Therapeutic | % London Uplift | +30% | +55% | +56% | +35% | +103% | +109%
4 Admin & Clerical | London | £24,967 | £32,474 | 13 | 10 | 329,233 | 333,201
4 Admin & Clerical | non-London | £17,949 | £18,490 | 9 | 8 | 155,182 | 155,980
4 Admin & Clerical | % London Uplift | +39% | +76% | +53% | +22% | +112% | +114%
5 Management | London | £52,921 | £62,286 | 3 | 2 | 148,355 | 145,731
5 Management | non-London | £46,862 | £43,488 | 1 | 1 | 60,453 | 59,336
5 Management | % London Uplift | +13% | +43% | +117% | +71% | +145% | +146%
6 Ancillary, Works & Maintenance | London | £20,818 | £27,435 | 4 | 3 | 84,781 | 86,379
6 Ancillary, Works & Maintenance | non-London | £15,300 | £17,068 | 5 | 5 | 76,435 | 76,981
6 Ancillary, Works & Maintenance | % London Uplift | +36% | +61% | -18% | -30% | +11% | +12%
Grand Total | London | £40,916 | £47,940 | 70 | 61 | 2,878,990 | 2,915,180
Grand Total | non-London | £30,773 | £32,327 | 51 | 49 | 1,557,745 | 1,575,547
Grand Total | % London Uplift | +33% | +48% | +39% | +25% | +85% | +85%

Admissions: London 362,895, non-London 795,740 (-54%). Average of MFF: London 1.18, non-London 0.93 (+26%).

Medical Staff
Medical staff function on a specialty rather than a location (e.g. ward, outpatient, theatre) basis so that, where their specialty was apparent among general ledger codes, it was possible to map them to specialty. At broad specialty groupings we found a significant relationship between output (FCEs) per consultant wte input (caseload throughput) and the MFF. The variables were negatively correlated, so that higher MFF trusts in the sample (London) had lower caseload throughput per consultant. Application of a casemix complexity weighting did little to alter this relationship (explained in Appendix 1).

Cost per wte
We found a relatively weak relationship between the MFF index and medical staffing pay (Table 4.2), but Table 4.5 shows that the cost per wte among individual grades is weaker still, bearing almost no relationship with geography. When all grades are combined, the association between cost per wte and geography is stronger than that of any individual component, leading us to suppose that the relationship between pay and geography must be driven by the way in which grades are combined, i.e. the grade mix of doctors. Table 4.6 shows that low MFF trusts use higher proportions of staff grades whereas high MFF trusts use more registrar grades, which are more expensive than staff grades (see Appendix 4.5).

Table 4.5 R2 of MFF Index & Budgeted Cost per WTE by Grade
          | Consultant | Staff Grade | Associate Specialist | Other Career Grades | SpR/Reg | SHO   | HO  | Grand Total
R Squared | 6%         | 0%          | 19%                  | 0%                  | 10%     | 31%*  | 29% | 40%*
Direction | +ve        | -ve         | -ve                  | +ve                 | +ve     | +ve   | +ve | +ve

Table 4.6 R2 of MFF Index and Proportion at Each Grade
          | Consultant | Staff Grade | Associate Specialist | Other Career Grades | SpR/Reg | SHO   | HO  | Grand Total
R Squared | 20%        | 56%**       | 12%                  | 0%                  | 64%**   | 72%** | 0%  | 29%
Direction | -ve        | -ve         | -ve                  | -ve                 | +ve     | +ve   | +ve | -ve

* Significant at 5%, ** Significant at 1%

Table 4.7 Productivity Ratios for Consultant Staff – Based on Unweighted Workload (Specialty-Based Episodes per Consultant)

Trust No. | Staff MFF | Adult Physicians – Non Acute | Adult Physicians – Acute | Paediatric Physicians | Surgeons | Gynae | Surgeons per Anaesthetist | Episodes per Pathologist | Episodes per Radiologist | Surgical Episodes per Anaesthetist | Total Episodes per Anaesthetist | FCE per Consultant (exc Anaes, Rad, Path) | Total FCE per Consultant
1  | 0.8640 | 931   | 1,429 | 256   | 1,059 | 1,810 | 1.36 | 9,382  | 10,051 | 1,421 | 1,983 | 1,038 | 697
2  | 0.9191 | 831   | 866   | 3,067 | 590   | 982   | 1.74 | 6,708  | 8,001  | 1,492 | 1,668 | 837   | 575
3  | 0.9220 | 1,025 | 1,074 | 293   | 1,910 | 404   | 1.67 | 7,558  | 6,027  | 1,445 | 1,831 | 1,152 | 711
4  | 0.9408 | 545   | 1,350 | 295   | 1,309 | 761   | 1.85 | 6,654  | 7,130  | 1,063 | 1,475 | 712   | 493
5  | 0.9511 | 1,072 | 2,549 | 609   |       | 817   | 2.40 | 8,206  | 6,459  | 2,515 | 2,253 | 1,277 | 834
6  | 0.9561 | 759   | 1,707 | 52    | 700   | 1,255 | 1.81 | 7,803  | 8,027  | 1,550 | 1,753 | 816   | 573
7  | 0.9791 | 1,183 | 1,653 | 202   | 1,296 | 2,080 | 1.54 | 5,914  | 5,195  | 2,112 | 1,853 | 1,156 | 719
8  | 0.9814 | 643   | 1,143 | 418   | 656   | 424   | 2.26 | 7,039  | 6,438  | 1,391 | 1,627 | 735   | 524
9  | 1.0037 | 679   | 2,276 | 130   | 423   | 136   | 1.61 | 5,731  | 7,122  | 964   | 1,471 | 803   | 533
10 | 1.1521 | 435   | 1,763 | 4     | 433   | 1,837 | 2.18 | 10,317 | 5,899  | 1,490 | 1,762 | 808   | 566
11 | 1.1823 | 463   | 981   | 156   | 398   | 89    | 1.70 | 3,701  | 4,204  | 522   | 797   | 380   | 263
12 | 1.1999 | 482   | 956   | 134   | 392   | 199   | 1.75 | 4,621  | 7,109  | 732   | 1,341 | 506   | 393
13 | 1.2087 | 580   | 586   | 66    | 236   | 247   | 1.36 | 3,009  | 6,355  | 688   | 900   | 363   | 260
14 | 1.2799 | 699   | 1,119 | 167   | 439   | 455   | 1.17 | 4,746  | 5,050  | 750   | 1,155 | 551   | 376
R Squared (against MFF) and Direction | | 38% * -ve | 13% | 12% | 44% (n=13) * -ve | 16% | 7% | 35% * -ve | 36% * -ve | 40% * -ve | 56% ** -ve | 57% ** -ve | 60% ** -ve

Significant at * 5%, ** 1%

Table 4.8 Productivity Ratios for Consultant Staff – Based on Workload Weighted for Complexity (Specialty-Based Weighted Episodes per Consultant)

Trust No. | Staff MFF | Complexity Index | Adult Physicians – Non Acute | Adult Physicians – Acute | Paediatric Physicians | Surgeons | Gynaecology | Episodes per Pathologist | Episodes per Radiologist | Surgical Episodes per Anaesthetist | Total Episodes per Anaesthetist | FCE per Consultant (exc Anaes, Rad, Path) | Total FCE per Consultant
1  | 0.8640 | 1.12048  | 1,043 | 1,601 | 287   | 1,186 | 2,028 | 10,512 | 11,262 | 1,592 | 2,222 | 1,163 | 781
2  | 0.9191 | 1.24419  | 1,034 | 1,078 | 3,816 | 734   | 1,222 | 8,346  | 9,955  | 1,856 | 2,075 | 1,041 | 716
3  | 0.9220 | 1.293354 | 1,325 | 1,389 | 378   | 2,470 | 522   | 9,775  | 7,795  | 1,869 | 2,368 | 1,490 | 920
4  | 0.9408 | 1.261108 | 687   | 1,702 | 373   | 1,651 | 960   | 8,391  | 8,992  | 1,340 | 1,860 | 898   | 622
5  | 0.9511 | 1.129676 | 1,211 | 2,880 | 688   | 0     | 923   | 9,270  | 7,296  | 2,842 | 2,545 | 1,442 | 942
6  | 0.9561 | 1.188986 | 903   | 2,030 | 62    | 832   | 1,492 | 9,277  | 9,544  | 1,843 | 2,084 | 971   | 681
7  | 0.9791 | 1.16254  | 1,376 | 1,921 | 235   | 1,506 | 2,419 | 6,875  | 6,040  | 2,455 | 2,154 | 1,344 | 836
8  | 0.9814 | 1.265746 | 814   | 1,447 | 529   | 830   | 537   | 8,910  | 8,149  | 1,761 | 2,060 | 930   | 663
9  | 1.0037 | 1.241618 | 843   | 2,826 | 161   | 525   | 169   | 7,116  | 8,843  | 1,198 | 1,826 | 997   | 661
10 | 1.1521 | 1.189082 | 517   | 2,096 | 5     | 515   | 2,185 | 12,268 | 7,015  | 1,771 | 2,095 | 961   | 673
11 | 1.1823 | 1.453077 | 673   | 1,425 | 227   | 579   | 129   | 5,378  | 6,108  | 758   | 1,158 | 552   | 382
12 | 1.1999 | 1.315393 | 633   | 1,258 | 177   | 516   | 262   | 6,078  | 9,351  | 963   | 1,764 | 666   | 516
13 | 1.2087 | 1.531712 | 888   | 898   | 102   | 362   | 378   | 4,609  | 9,735  | 1,054 | 1,379 | 556   | 399
14 | 1.2799 | 1.315393 | 920   | 1,471 | 220   | 578   | 598   | 6,243  | 6,642  | 986   | 1,520 | 725   | 495
R Squared (against MFF) and Direction | | 43% * +ve | 30% * -ve | 9% | 11% | 39% (n=13) * -ve | 15% | 33% * -ve | 16% | 40% * -ve | 58% ** -ve | 58% ** -ve | 62% ** -ve

Significant at * 5%, ** 1%

Productivity Ratios of Medical Staff
We developed productivity measures by mapping medical staff and episode data to specialty (based on their cost centre and specialty designation), to generate a ratio of consultants per 1,000 episodes, summarised to a meaningful level for comparison, i.e. adult acute physicians, surgery, paediatrics, radiology, pathology, anaesthetics and A&E. We tested alternative workload measures for radiology, pathology and anaesthetics. For radiology and pathology we applied HRG-related data but found unexplained inconsistencies, suggesting either that data was missing or that different currencies were being used, e.g. tests in some trusts and requests in another. Because of this we have used hospital episodes. For radiology and pathology and for anaesthetics we therefore used hospital episodes; for anaesthetics we also used the number of surgeons as a basic indicator and the number of surgical episodes. A sketch of the shape of the calculation follows the summary below. Tables 4.7 and 4.8 summarise the results:

• There is a negative relationship between the staff MFF and consultant staff productivity (measured as FCE caseload), both unweighted (R2=60%) and weighted for complexity (R2=62%). This relationship is consistent, whether looking across episode-based clinical specialties or support specialties (pathology, radiology and anaesthetics) standardised against episodes for the hospital. The results imply that the workload per consultant is lower in London (high MFF) than in the South West (low MFF) at these broad specialty levels.

• We found that the position became less clear-cut as the specialty definition became more refined. At a narrow level of focus, reduced here, for example, to adult acute physicians, the productivity difference between trusts has no significant geographical variation. The broader the definition, the more apparent is the productivity gap between high and low MFF trusts. The implication is that it is not just the configuration within specialty but also the way in which specialties are combined (as in the case of grade mix earlier) that contributes to the overall productivity differential.
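The following sketch shows the shape of that calculation: consultant wte and FCEs are grouped to broad specialty level and divided to give FCE per consultant, with an optional trust-level complexity weighting applied to the episode counts. The tables, column names and weighting index are illustrative assumptions, not the study's data.

```python
# Illustrative FCE-per-consultant productivity ratios by broad specialty group,
# with an optional complexity weighting applied to episodes. Placeholder data.
import pandas as pd

staffing = pd.DataFrame({
    "trust":     ["A", "A", "A", "B", "B", "B"],
    "specialty": ["Surgery", "Acute medicine", "Anaesthetics",
                  "Surgery", "Acute medicine", "Anaesthetics"],
    "consultant_wte": [20.0, 15.0, 12.0, 14.0, 11.0, 9.0],
})
episodes = pd.DataFrame({
    "trust":     ["A", "A", "B", "B"],
    "specialty": ["Surgery", "Acute medicine", "Surgery", "Acute medicine"],
    "fce":       [21_000, 18_000, 16_500, 13_000],
})
complexity_index = {"A": 1.12, "B": 1.32}   # illustrative trust-level casemix weights

merged = episodes.merge(staffing, on=["trust", "specialty"], how="left")
merged["weighted_fce"] = merged["fce"] * merged["trust"].map(complexity_index)

merged["fce_per_consultant"] = merged["fce"] / merged["consultant_wte"]
merged["weighted_fce_per_consultant"] = merged["weighted_fce"] / merged["consultant_wte"]

print(merged[["trust", "specialty",
              "fce_per_consultant", "weighted_fce_per_consultant"]].round(0))
```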

Maternity
The general ledger, in most trusts, has a hierarchical structure that groups cost centres (budget-holding work areas, e.g. a ward) within specialty or department within directorate. Maternity services usually sit within the Women's Health Directorate, and cost centres include (a) labour ward/delivery suite nurses and midwives, (b) community nurses and midwives, and (c) ante-natal clinics. Midwifery is specified at the level of (a) + (b). Medical staff often sit outside the maternity cost centres but within women's health, and are defined as obstetrics and gynaecology since the doctors cover both specialties.

Maternity was costed at a hierarchy of four levels:
Cost per wte:
1. Midwifery (nursing) staff
Wte per birth and cost per birth:
2. Midwifery (nursing) staff
3. Maternity staff – all staff designated as 'maternity' in the trust's specialty coding
4. Maternity staff plus medical staff (where they are not already described within 'maternity')

The analysis produced two types of findings:





• In terms of comparability, only the midwifery levels in this hierarchy were thought to produce a reasonable level of comparison between trusts. By focusing on nurses involved in delivery and the number of births we could be confident that we were comparing like with like in terms of staffing and workload. At the broader levels of maternity and maternity + medical staff we were being consistent in our selection criteria (i.e. using trust specialty designations), but within these designations it was clear that different service functions were included, e.g. fertility services and genitourinary medicine.

• The quantitative findings were unambiguous in their lack of association with geography (i.e. the MFF) at levels 2-4. This study indicates that there is a link between nurse pay (cost per nurse wte) and the staff MFF, but this does not translate into any other aspect of unit labour costs in maternity. There is no association between geography (expressed by the MFF) and efficiency in terms of births per nurse wte, nurse cost per birth, consultant pay or consultant cost per birth. There is a weak association between medical pay (cost per wte across all grades) and the MFF.

Maternity (delivery) HRGs were used to weight birth volumes but had no impact on the findings based on general ledger costs. The HRG costs of the 14 trusts were also analysed to test whether they varied with the MFF. We found that they had little relationship with either (a) the general ledger unit costs or (b) the MFF.

Table 4.9 Midwifery Costs
MFF Rank | £ per wte | Birth per Nurse wte | Cost per Birth
1  | £30,862 | 34 | £914
12 | £40,073 | 36 | £1,106
7  | £32,659 | 29 | £1,131
10 | £36,010 | 32 | £1,137
3  | £30,747 | 26 | £1,199
6  | £30,781 | 25 | £1,216
14 | £38,540 | 31 | £1,246
11 | £36,521 | 29 | £1,256
4  | £31,706 | 25 | £1,288
8  | £31,577 | 22 | £1,405
2  | £31,962 | 22 | £1,483
9  | £30,510 | 20 | £1,492
13 | £37,245 | 23 | £1,601
5  | £29,388 | 11 | £2,573

Table 4.10 Relationship Between MFF and Range of Variables
                                           | R2
Relationship between MFF and:
• Nurses' pay per wte                      | 86%**
No relationship between MFF and:
• Birth per nurse wte                      | 15%
• Nurse cost per birth                     | 1%
• Consultants' pay (cost per wte)          | 4%
• Medical pay (cost per wte all grades)    | 25%
• Consultant cost per birth                | 0.5%
• Medical cost per birth                   | 0.08%

** Significant at 1%

HRG Analysis
There are 8 health resource groups (HRGs) associated with maternity. HRG unit costs are defined for episodes grouped in each of these HRGs, and also for type of admission, defined as elective inpatient (1% of episodes), non-elective inpatient (98% of episodes) or day case (1% of episodes). National average unit costs for HRGs have been used as a weighting scale which can be applied to trust activity and costs to determine casemix or complexity. The weight is calculated as the unit cost for the HRG divided by the unit cost of a normal delivery episode, HRG N07, i.e. around a base of 1 for normal deliveries without complication. The national average weight for a delivery episode (i.e. birth) is 1.40. This is consistent with the average weighting for our sample of 14 trusts. The HRG unit cost weightings are summarised below in Table 4.11.

• The total maternity cost per trust has been calculated by adding together all day case and inpatient costs for each of the HRGs N06 - N12. There is no relationship between this set of costs and the direct costs described as 'maternity' in each trust's general ledger.

• The average weighting factor for each set of delivery episodes has been calculated (Appendix 4.2). Weighted delivery episodes are calculated by applying this weighting factor to delivery HRG episodes 2004/5.

• We found that there is no relationship between geography, i.e. MFF, and complexity of casemix. There is, in practice, very little measured variation in casemix between hospitals.

Table 4.11 Maternity HRG Codes and National Average Unit Cost Weightings
HRG Code | Description                                        | Weight
N06      | Normal Delivery with complication                  | 1.69
N07      | Normal Delivery without complication               | 1.00
N08      | Assisted Delivery with complication                | 2.02
N09      | Assisted Delivery without complication             | 1.40
N10      | Caesarean Section with complication                | 3.08
N11      | Caesarean Section without complication             | 2.25
N12      | Antenatal admission not related to delivery event  | 0.57
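A minimal sketch of the weighting calculation described above is given below, using the Table 4.11 weights. The episode counts and the pandas-based layout are illustrative assumptions; only the HRG weights come from the table.

```python
# Illustrative casemix weighting of delivery episodes using the Table 4.11
# HRG unit cost weights (weight = HRG unit cost / unit cost of HRG N07).
# The episode counts below are placeholder figures, not study data.
import pandas as pd

hrg_weights = {
    "N06": 1.69, "N07": 1.00, "N08": 2.02, "N09": 1.40,
    "N10": 3.08, "N11": 2.25, "N12": 0.57,
}

episodes = pd.DataFrame({
    "trust": ["A"] * 6 + ["B"] * 6,
    "hrg":   ["N06", "N07", "N08", "N09", "N10", "N11"] * 2,
    "n":     [300, 2400, 150, 450, 250, 500,
              180, 1500, 90, 300, 160, 320],
})

episodes["weighted_n"] = episodes["n"] * episodes["hrg"].map(hrg_weights)

summary = episodes.groupby("trust")[["n", "weighted_n"]].sum()
# Average weighting factor per delivery episode (the text quotes ~1.40 nationally).
summary["avg_weight"] = summary["weighted_n"] / summary["n"]
print(summary.round(2))
```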

The table below summarises the R2 coefficient of determination between the MFF and measures of total maternity costs, using both HRG costs and the general ledger as the source. Both sets of data provide consistent results in that there is no association between specialty costs at this level and the MFF.

Table 4.12 Fit (R2) between MFF and HRG and General Ledger Unit Costs
                                                                     | R2
No relationship between MFF and:
• HRG total maternity cost per birth (unweighted)                    | 6%
• HRG total maternity cost per birth (weighted)                      | 0.9%
• General ledger total maternity cost per birth (unweighted)         | 0.5%
• General ledger total maternity cost per birth (weighted)           | 2%
• Delivery weighting factor, which indicates complexity of casemix   | 6%

Other Specialties: Orthopaedics
The results from maternity, which is relatively straightforward to select and identify, did not provide a pattern of spatial cost variation, and showed no relationship with the staff MFF. This differs from the comprehensive analysis of staff groups, which has shown that systematic spatial cost variations do exist. The implication is that, if the variations cannot be identified in maternity, then they must reside somewhere else: in other specialties; in indirect costs (e.g. laboratories) which are not captured by this specialty approach; or in the range and mix of specialties, i.e. service configuration, which would lie outside any like-with-like comparison since they reside in the parts of trusts that are different or extra.

We extended the enquiry to another specialty, orthopaedics, based on the general ledger data set. The selection criteria of cost centres vary between trusts, as orthopaedics may (i) be a specialty coded in its own right, (ii) have a different name (e.g. locomotor), or (iii) be mixed with other surgical specialties in larger directorates, e.g. surgery or critical care. This latter category is difficult to penetrate without local knowledge and, out of the sample of 14 trusts, there were 4 trusts which did not identify 'orthopaedics' as a specific separate specialty. In these cases, it would be necessary to know which named wards were orthopaedic, rather than general surgical, in order to measure orthopaedic direct costs.

The main finding of this enquiry is that the internal coding structure of trusts reflects budgetary and management structures and does not lend itself to inter-trust comparison. The obstacles are (a) lack of common specialty labelling, (b) differences in content within specialty labels, in terms of wards, outpatient clinics and medical staffing, and (c) the possibility that the same specialty label of orthopaedics may describe different sub-specialties, e.g. explicitly excluding spinal in Trust No. 4. We extracted cost and volume data for 10 out of our 14 trusts, but even within the 10 we found a lack of comparability. Trust No. 13, for example, is a London teaching hospital and one of the largest trusts in the sample. For orthopaedics it showed 8 cost centres and a mid-range unit labour cost, even though at the organisational level this trust was consistently towards the top of the range in measuring unit labour cost per 1,000 admissions. The implication is that focusing on narrow specialty definitions (within the general ledger) will not capture the essential variation between trusts, because it cannot access indirect costs and because it does not adequately account for variation in service models.

Table 4.13 Output from Investigation into Orthopaedic Costs within General Ledgers

Staff MFF | Trust No. | Definition (Selection Criteria) | No. Cost Centres | wte | £ Actual Pay | Episodes | Cost per wte | wte per 1,000 episodes | Cost per Episode
1.0037 | 9  | Orthopaedics Specialty                       | 7  | 244 | £9,025,410  | 12,982 | £36,989 | 19 | £695
0.9511 | 5  | Orthopaedics Specialty                       | 11 | 159 | £5,072,804  | 6,500  | £31,904 | 24 | £780
0.9220 | 3  | Orthopaedics Specialty                       | 7  | 124 | £4,571,488  | 4,907  | £36,867 | 25 | £932
0.9791 | 7  | Directorate of Trauma                        | 9  | 142 | £4,348,706  | 4,563  | £30,625 | 31 | £953
0.8640 | 1  | Locomotor Directorate                        | 25 | 230 | £7,700,477  | 7,544  | £33,480 | 30 | £1,021
1.2087 | 13 | Specialty of Orthopaedics                    | 8  | 145 | £6,621,890  | 5,002  | £45,668 | 29 | £1,324
0.9561 | 6  | Orthopaedics Specialty                       | 19 | 227 | £7,512,469  | 5,548  | £33,095 | 41 | £1,354
0.9408 | 4  | Orthopaedics Specialty (exc spinal unit)     | 15 | 374 | £12,714,658 | 9,221  | £33,996 | 41 | £1,379
1.1521 | 10 | Department of Orthopaedics                   | 10 | 129 | £5,337,200  | 3,656  | £41,374 | 35 | £1,460
1.1823 | 11 | Specialty of Orthopaedics                    | 7  | 146 | £5,069,206  | 2,593  | £34,721 | 56 | £1,955
0.9191 | 2  | Sub-set of Surgical Directorate              | Difficult to isolate out of 218 cost centres | | | 7,764 | | |
0.9814 | 8  | Departments within Critical Care Directorate | Difficult to isolate out of 36 cost centres  | | | 4,016 | | |
1.1999 | 12 |                                              | Difficult to isolate out of 401 cost centres | | | 3,534 | | |
1.2799 | 14 | Cost Centres within specialty of Surgery     | Difficult to isolate out of 12 cost centres  | | | 1,981 | | |

DISCUSSION OF GENERAL LEDGER ANALYSIS

Spatial variation
The general ledger has produced useful data on spatial variation, indicating that wages (cost per wte) increase, and productivity falls, as the MFF rises. We have found that medical staff behave differently from other professional groups, since there appears to be no spatial wage variation at grade level.

Avoidable/Unavoidable
The question of avoidable and unavoidable costs could not satisfactorily be addressed through the general ledger analysis. The spatial patterns we observed were strongest at the level of the trust, and began to break down at the more detailed level of specialty.

Feasibility: Critique of the Approach
We aimed at the outset to use the data that was submitted and work with it, without requesting second or third cuts of data. This was designed to minimise the burden placed upon trusts and to frame the project as a feasibility study. In the course of the analysis it became apparent that there were questions of data comparability (e.g. at least one trust excluded central senior management staff). The results of the approach reported here, based on general ledger data, were not encouraging; the output did not repay the amount of time required to build a robust line of enquiry. The obstacles were identified as:

• Feasibility. Local coding structures do not lend themselves to specialty analysis.

• Direct versus Indirect Costs. Even where specialty codes exist, they capture partial information. This usually includes nursing staff in wards and outpatient departments and medical staff linked to the specialty. Large elements of resource, e.g. theatres for orthopaedics, sit outside the specialty designation.

• Acceptability. A bottom-up analysis conducted outside the trust lacks local acceptance. There is no confidence that like with like comparisons are being made.

• Service Model Diversity. Even if all these disadvantages did not exist, the difference in specialty composition between trusts means that comprehensive specialty coverage would be difficult to attain 24. The difficulty in achieving comprehensive cost-capture through a bottom-up approach is a major drawback.

The advantage of the general ledger-based approach is that it provided a rich data set which afforded insight into local service configuration among the 14 micro-study trusts. It also allowed us to separate staffing from non-pay costs. The disadvantage is that, even with laborious analysis, a comprehensive or universally acceptable approach to cost comparisons by specialty has proved elusive. The experience of analysing three cost types (medical staff at specialty level, maternity costs, and the exploration of orthopaedics presented in Table 4.13) suggests that the requirements of comprehensiveness and acceptability outweigh the benefit of local detail. This discussion led us to the view that an alternative to the bottom-up methodology would need to be adopted if a specialty-based analysis were to be pursued. Ideally the new approach would satisfy the criteria of being:
• Comprehensive
  o covering all specialties
  o covering all functions (direct and indirect)
• Credible: locally accepted (i.e. locally generated)
• Generically coded – based on a common national coding structure
• Capable of reflecting casemix and complexity weightings
• In the public domain

This critique sets the scene for the analysis of HRG costs, which satisfies these criteria and covers all costs rather than just staffing, and is detailed in Chapter 11.

24 Trust No. 13 showed 8 cost centres against the specialty of orthopaedics, out of a total of 630 cost centres in the trust as a whole, with an average pay value of just over £0.5 million in a total pay bill of £345 million.

CHAPTER 5. PAYROLL ANALYSIS

DATA AND METHODS
We asked each of the trusts in our sample to supply a copy of their year-end payroll for the financial year 2004/05, based on an agreed list of fields (Appendix 5.1). The purpose was to analyse the data to identify spatial wage variation, and also to test whether payroll data provided source data that could adequately support an on-going Specific Cost Approach. The rationale behind the data field specification was that, to generate a SCA, the payroll data would need to be:

• Readily available - part of the everyday operation of trusts, not requiring special procedures to produce
• Readily understood
• Detailed - rather than aggregated
• Easily analysed - the data could be analysed in its raw form without the trusts needing to manipulate it prior to external analysis
• Comparable - the data must facilitate inter-trust comparisons

Payroll entries satisfy the basic function of paying staff and, unlike general ledger entries, are not processed for reporting purposes, as a result of which the data is inevitably less structured. Our aim was to shed light on the make-up of the overall pay for each worked in-post WTE.

Data Quality
We experienced considerable difficulties in trying to analyse the data, leading us to assume that the payroll is seldom, if ever, analysed by the trusts, even though it is the source data for a substantial proportion of their cost base. The first obstacle was availability. Of the fourteen trusts involved, nine could supply a reasonably robust data set on the second or third attempt; of the five remaining trusts, two could not provide any payroll data and three could provide only partial data.

The second major problem was the proliferation of classes of pay in each payroll. In one trust we counted 327 classes, which rose to over 400 when coding errors (spelling mistakes) were taken into account. Appendix 5.2 outlines the classifications used by three trusts. Although the payrolls did provide detail, the lack of a standard classification of pay types made them difficult to understand, time consuming to analyse and awkward to compare.

A third problem, evident to a greater or lesser degree in each of the payrolls, was error, e.g. WTE figures which were missing, double counted or under counted, or leaving dates which were missing or incorrect. Perhaps the biggest challenge presented by the data, however, was the use of the new coding structure, which in some cases appears to have been deployed on a minimalist basis. For example, "Band 5" is used as the label to replace the old categories of "Nurse Grade E", "A&C Scale 6" and "Ancillary Supervisor", among others. Bland grade descriptions give little insight into workforce characteristics, and so it became necessary to use a combination of codes, increasing the scope for error, resorting to a coding book for Job Descriptions plus a coding book for Pay

Scales, Cost Centres and Expense codes. Overall, our experience with the payrolls has led us to believe that, in their present form, they do not possess the key characteristics outlined above. Nevertheless, thanks in large part to the dedication and, indeed, persistence of the individuals tasked by the trusts to provide the payroll data, we received nine payrolls capable of being analysed and which covered the spectrum of the staff MFF.

Table 5.1 Submitted Payrolls
Trust No. (Geography)      | Staff MFF
1 (South West)             | 0.864008
4 (North)                  | 0.940817
5 (South West)             | 0.951053
6 (North)                  | 0.956106
7 (South West)             | 0.979081
9 (South)                  | 1.003713
11 (London)                | 1.182274
13 (London)                | 1.208667
14 (London)                | 1.279853
Average, non-London trusts | 0.94913
Average, London trusts     | 1.22360
Uplift                     | 28.9%

Purpose
The objective of our analysis was to review the individual elements of the total pay cost per average in-post worked WTE and establish the contribution that each element made to any spatial variation in total pay. For simplicity we divided our nine trusts into two groups, (i) London trusts and (ii) non-London trusts. The analysis was performed on a proportional basis by evaluating each wage element in relation to Total Wage Cost, linked to staff in post. Total Wage Cost comprises the categories labelled as Gross Pay (consisting of Basic + Geographical Allowances + Overtime + Other) plus Employers' Costs. The Total Wage Bill equals Total Wage Cost plus Bank and Agency (for which we do not have a WTE volume figure). We have profiled the Total Wage Bill as a stylised unit, using basic pay in non-London trusts as a base of 100.
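As an illustration of this stylised unit, the sketch below assembles the index from its components and expresses each London uplift on the non-London Total Wage Cost base, using the figures that appear in Tables 5.2 and 5.3 below. The Python layout is an assumption made for presentation; the component numbers themselves are taken from those tables.

```python
# Stylised wage unit with non-London basic pay = 100, built from the component
# figures reported in Tables 5.2 and 5.3 (Gross Pay = Basic + Geographical
# Allowances + Overtime + Other; Total Wage Cost = Gross Pay + Employers' Costs;
# Total Wage Bill = Total Wage Cost + Bank & Agency). Small differences from
# the published totals arise from rounding of the components.
non_london = {"basic": 100.0, "geog": 0.0, "overtime": 1.8, "other": 18.9,
              "employers": 24.1, "bank_agency": 7.1}
london =     {"basic": 106.1, "geog": 13.1, "overtime": 3.0, "other": 25.2,
              "employers": 29.5, "bank_agency": 24.0}

def totals(c: dict) -> tuple[float, float, float]:
    gross = c["basic"] + c["geog"] + c["overtime"] + c["other"]
    wage_cost = gross + c["employers"]
    wage_bill = wage_cost + c["bank_agency"]
    return gross, wage_cost, wage_bill

gross_nl, cost_nl, bill_nl = totals(non_london)
gross_l, cost_l, bill_l = totals(london)

print(f"Gross pay:       non-London {gross_nl:.1f}, London {gross_l:.1f}")
print(f"Total wage cost: non-London {cost_nl:.1f}, London {cost_l:.1f} "
      f"(uplift {(cost_l - cost_nl) / cost_nl:.1%})")
print(f"Total wage bill: non-London {bill_nl:.1f}, London {bill_l:.1f} "
      f"(uplift {(bill_l - bill_nl) / bill_nl:.1%})")
```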

RESULTS
We present a summary of all elements of the Total Wage Bill and then go on to analyse each component of Gross Pay separately.

Total Wage Cost and Wage Bill
The analysis was carried out on a total payment by staff group basis 25 and summary tables for each trust may be found in Appendix 5.3. We found that London's average wage cost per in-post worked WTE 26 was 22.1% higher than elsewhere. Of the total uplift of 22.1%, 4.2% was due to higher basic pay (mainly non-clinical staff groups) and 9% was due to London Weighting.

25 Earlier, more detailed work on payments by grade/staff type had to be abandoned due to data problems, e.g. partial implementation of Agenda for Change.
26 In-post Worked WTE = Contracted WTE x the proportion of the year worked (e.g. 1 Contracted WTE who joins six months into the year = 0.5 worked WTE).

Table 5.2 Summary of Total Wage Cost
                        | Non-London | % of Gross Pay | London | % of Gross Pay | London uplift expressed as % of non-London Total Wage Cost
Basic                   | 100.0      | 82.8%          | 106.1  | 72.0%          | 4.2%
Geographical Allowances | -          | 0.0%           | 13.1   | 8.9%           | 9.0%
Overtime                | 1.8        | 1.5%           | 3.0    | 2.0%           | 0.8%
Other                   | 18.9       | 15.7%          | 25.2   | 17.1%          | 4.4%
Gross Pay               | 120.7      | 100.0%         | 147.4  | 100.0%         | 18.4%
Employers Costs         | 24.1       | 20.0%          | 29.5   | 20.0%          | 3.7%
Total Wage Cost         | 144.9      |                | 176.8  |                | 22.1%

The amount spent on bank and agency is over three times greater in London than elsewhere (24.0 vs 7.1), so that the overall uplift to London is 32.2%, expressed in terms of price per average in-post WTE.

Table 5.3 Bank & Agency and Total Wage Bill
                 | Non-London | London
Bank & Agency    | 7.1        | 24.0
Total Wage Bill  | 152.0      | 200.9
Uplift to London |            | 32.2%

Basic Pay
Basic pay, which excludes overtime and allowances, is on average 6.1% higher in London than in the non-London trusts (Table 5.2) 27 28. Medical staffing pay is lower in London than elsewhere, due in part to the higher proportion of junior doctors and also to a negative spatial variation (in terms of the staff MFF) within grade 29. (Appendix 5.3 gives details. The different allocation methods employed by the trusts for New Contract payments may account for a small variation between Basic and Other.)

Basic Pay is 6.1% higher in the London trusts but comprises a lower proportion of Gross Pay (72.0%) than in the non-London sample (82.8%), a gap of 10.8 percentage points in Basic Pay as a proportion of Gross Pay. The implication is that all other types of payment in addition to Basic Pay are also higher in the London trusts. Table 5.6 uses these proportions to interpolate Gross Pay around the base index of 100.

27 Variations in allocating staff members into groups could account for small differences, e.g. Trust 14 (London) does not separately classify Management, hence the distortion to A&C; if Trust 14 were excluded, the difference in average A&C Basic would remain high at 12%. However, the overall figure of 6.1% does not suffer an allocation distortion.
28 Trust 4 (North) has been excluded from Table 5.4 due to the lack of WTE figures.
29 This differs from the general ledger finding. The payroll excludes agency locums.

The next three sections look at how this 10.8 percentage point differential is made up, considering in more detail Geographical Allowances, Overtime and Other Allowances.

Table 5.4 Average Basic Pay per Worked WTE (£)

Trust No.          A&C     Ancillary   Management   Medical   Nursing    ST&T    Grand Total
1 (S West)       15,785      12,619       33,890     51,901    20,283    22,756      23,444
5 (S West)       14,099      12,117       33,684     64,909    18,277    20,654      21,728
7 (S West)       15,307      14,159       34,911     51,743    18,808    22,196      22,491
4 (North)            -           -            -          -         -         -           -
6 (North)        15,315      12,069       32,395     72,676    19,168    21,778      23,061
9 (South)        13,400      12,306       35,806     46,010    19,344    20,577      21,587
Non London       14,648      12,465       34,281     54,598    19,217    21,540      22,385
11 (London)      16,166      12,646       34,631     40,754    19,043    21,519      23,043
13 (London)      17,458      12,419       37,239     40,144    20,676    24,771      23,708
14 (London)      21,275      15,018           -       39,465    20,552    23,885      24,888
London           18,020      12,563       36,164     40,163    20,155    23,570      23,746
London Increase  +23.0%       +0.8%        +5.5%      -26.4%     +4.9%     +9.4%       +6.1%

Table 5.5 Average Basic Pay as a % of Gross Pay

Trust No.          A&C     Ancillary   Management   Medical   Nursing    ST&T    Grand Total
1 (S West)        97.1%      82.6%        98.6%       62.9%     86.0%     93.0%       81.5%
5 (S West)        94.7%      84.4%        95.3%       80.2%     85.6%     90.9%       86.1%
7 (S West)        94.5%      88.8%        90.2%       68.5%     85.5%     90.1%       82.3%
4 (North)         91.8%      79.5%        96.9%       68.9%     86.6%     88.5%       82.0%
6 (North)         93.8%      76.8%        99.0%       88.1%     83.3%     90.9%       87.4%
9 (South)         92.9%      78.1%        95.4%       65.1%     86.4%     91.6%       81.4%
Non London        93.5%      80.4%        96.5%       70.0%     85.9%     90.3%       82.8%
11 (London)       82.6%      60.1%        87.7%       62.8%     74.2%     76.6%       72.2%
13 (London)       79.1%      64.8%        89.5%       59.9%     71.4%     75.3%       70.4%
14 (London)       87.5%      82.8%          -         65.8%     77.4%     82.7%       75.6%
London            82.4%      64.1%        88.8%       62.2%     73.6%     76.8%       72.0%
London Increase  -11.1%     -16.3%        -7.7%       -7.9%    -12.3%    -13.6%      -10.8%

Table 5.6 Interpolating Gross Pay using the difference in Basic and Basic as a percentage of Gross Pay

               Non London   % of Gross    London   % of Gross
Basic             100.0        82.8%       106.1      72.0%
Gross Pay         120.7                    147.4
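The interpolation in Table 5.6 simply divides the Basic index by Basic's share of Gross Pay:

\[
\text{Gross Pay index} = \frac{\text{Basic index}}{\text{Basic as a share of Gross Pay}},\qquad
\frac{100.0}{0.828} \approx 120.7 \ \text{(non-London)},\qquad
\frac{106.1}{0.720} \approx 147.4 \ \text{(London)}.
\]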


London Weighting (Geographical Allowances)
London Weighting is the largest distinguishing factor in London pay and, at an average 8.9% of Gross Pay, it accounts for 82% of the 10.8 percentage point gap shown in Table 5.5. Payment of London Weighting is an exogenous factor beyond the control of the trusts and is a major contributor to spatial variations in wage costs 30.

Table 5.7 Geographical Allowances as a % of Gross Pay

Trust No.          A&C     Ancillary   Management   Medical   Nursing    ST&T    Grand Total
Non London           -          -            -          -         -         -           -
11 (London)       13.8%       9.1%         6.2%        3.0%     12.9%     10.0%        8.6%
13 (London)       12.3%       9.3%         5.1%        3.0%     14.4%     10.7%        9.3%
14 (London)        9.0%       8.4%           -         3.0%     12.4%     10.3%        8.1%
London Increase  +11.8%      +9.2%        +5.5%       +3.0%    +13.5%    +10.4%       +8.9%

Overtime
Table 5.8 reveals that London pays marginally more overtime but that there is no consistency between London trusts in their patterns, e.g. in use of ancillary and nursing; two out of the three London trusts use less nursing overtime than any of the non-London trusts. In the Scientific, Technical and Therapeutic (ST&T) group London's use of overtime is consistently higher.

Table 5.8 Overtime as a % of Gross Pay

Trust No.          A&C     Ancillary   Management   Medical   Nursing    ST&T    Grand Total
1 (S West)         1.3%       7.3%         0.1%        2.2%      3.1%      2.0%        2.5%
5 (S West)         1.4%       6.3%         0.0%        1.0%      1.6%      0.9%        1.5%
7 (S West)         0.7%       3.9%         0.0%        0.0%      2.3%      0.5%        1.1%
4 (North)          1.1%       5.7%         0.3%        0.0%      1.3%      1.4%        1.2%
6 (North)          0.9%       7.8%         0.0%        0.0%      2.0%      1.7%        1.5%
9 (South)          2.5%       8.2%         0.1%        0.0%      2.2%      1.5%        1.6%
Non London         1.3%       6.4%         0.1%        0.4%      1.9%      1.4%        1.5%
11 (London)        1.1%      18.8%         0.0%        0.0%      0.1%      3.9%        1.4%
13 (London)        2.9%       6.5%         0.0%        0.8%      1.0%      2.2%        1.6%
14 (London)        2.5%       6.5%           -         0.7%      8.4%      3.4%        4.0%
London             2.3%       9.7%         0.0%        0.5%      2.4%      2.9%        2.0%
London Increase   +0.9%      +3.3%        -0.1%       +0.1%     +0.5%     +1.5%       +0.5%

30 Trust 9 appears to have paid Cost of Living Supplements in 2004/05 which may have been reclassified as "recruitment & retention premium" with the introduction of Agenda for Change. Within the analysis any adjustment will be between the impact of "London Weighting" and "other".


Other Allowances
The relatively high variance in the Medical group appears to be caused by New Contract payments. Some trusts coded these payments to Basic whilst others put them in Other. The categorisation does not affect our overall analysis but does cause some distortion to individual elements. A brief overview of the components of Other as found in our sample is outlined in Appendix 5.4.

Table 5.9 Other Allowances as a % of Gross Pay

Trust No.          A&C     Ancillary   Management   Medical   Nursing    ST&T    Grand Total
1 (S West)         1.6%      10.1%         1.3%       34.9%     10.9%      5.0%       16.0%
5 (S West)         3.8%       9.3%         4.7%       18.8%     12.9%      8.2%       12.3%
7 (S West)         4.8%       7.4%         9.8%       31.5%     12.2%      9.4%       16.6%
4 (North)          7.2%      14.8%         2.8%       31.1%     12.1%     10.0%       16.8%
6 (North)          5.4%      15.4%         1.0%       11.9%     14.7%      7.4%       11.1%
9 (South)          4.6%      13.7%         4.5%       34.9%     11.4%      6.9%       17.0%
Non London         5.2%      13.1%         3.4%       29.5%     12.2%      8.2%       15.7%
11 (London)        2.5%      12.0%         6.0%       34.2%     12.9%      9.5%       17.8%
13 (London)        5.8%      19.4%         5.4%       36.4%     13.2%     11.8%       18.7%
14 (London)        1.0%       2.3%           -        30.4%      1.8%      3.6%       12.3%
London             3.6%      16.9%         5.7%       34.3%     10.5%      9.9%       17.1%
London Increase   -1.6%      +3.8%        +2.2%       +4.8%     -1.7%     +1.6%       +1.4%

In summary, Basic Pay accounts for 10.8 percentage points less of Gross Pay in the London trusts than in the non-London trusts, and this gap is made up of:

London Weighting    8.9%
Overtime            0.5%
Other payments      1.4%
Total              10.8%

Wage Cost per Average In-Post WTE
The preceding analysis is used to build the stylised wage cost for the average in-post WTE for both London and non-London, shown in Table 5.2. (We used 20% as the oncost rate, which is the average across all the trusts.) The overall difference in average wage costs for the average in-post WTE is 22.1%.

Bank and Agency
All trusts use some element of additional staff through Bank and Agency. We do not have WTE volume figures to set against bank and agency expenditure, but we gain some indication of the financial impact by including it in the total wage bill, related to the average worked in-post WTE. Table 5.10 below expresses, for each trust, the payments for Bank and Agency per staff group as a percentage of the total gross pay for that staff group. It is apparent that London spends substantially more on Bank and Agency and that, on average, London employs 1 B&A staff member for every 6 in-post staff members while non-London trusts employ 1 B&A staff member for every 17 in-post staff members. Table 5.3 earlier shows that, with the inclusion of bank and agency payments, the difference in the Total Wage Bill per average in-post WTE between London and non-London rises to 32%.

Table 5.10 Bank and Agency as a % of Gross Pay

Trust No.          A&C     Ancillary   Management   Medical   Nursing    ST&T    Grand Total
1 (S West)         0.5%       2.3%           -        12.2%      9.3%      2.4%        7.4%
5 (S West)         8.6%      19.1%           -         1.8%      6.0%      7.6%        5.9%
7 (S West)         6.0%       0.3%           -         0.7%      9.1%      0.1%        4.1%
4 (North)          2.8%       1.3%           -         2.7%      6.9%      2.1%        3.8%
6 (North)          0.8%       3.5%           -         0.2%      6.5%      3.8%        3.3%
9 (South)          5.9%      12.3%           -        12.9%     14.4%      7.1%       11.3%
Non London         3.6%       5.2%         0.0%        5.6%      8.7%      3.5%        5.9%
11 (London)       17.6%      21.7%         0.0%       13.1%     23.7%     10.2%       15.7%
13 (London)       21.3%      24.1%         0.9%        7.1%     23.2%     12.1%       15.0%
14 (London)        7.7%      21.5%           -         6.8%     37.9%     31.8%       20.3%
London            16.5%      23.4%         0.5%        8.9%     26.7%     14.4%       16.3%
London Increase  +12.9%     +18.2%        +0.5%       +3.3%    +18.0%    +10.9%      +10.4%

Trust Characteristics
Within this sample we looked at the ratio of clinical:non-clinical staff to examine whether larger trusts required proportionately more or less non-clinical staff, addressing potential economies or diseconomies of scale in the use of non-front-line staff. The broad ratio was 70%:30%, with no discernible pattern between trusts 31.

Table 5.11 Proportion of Non-Clinical Worked WTE

Trust No.      Non Clinical     %      Clinical     %       Total
1 (S West)          1,261      33%        2,602    67%       3,864
5 (S West)            989      31%        2,176    69%       3,165
7 (S West)            736      27%        2,027    73%       2,763
4 (North)               -       -             -     -            -
6 (North)           1,081      34%        2,087    66%       3,168
9 (South)           1,676      30%        3,894    70%       5,570
Non London          5,743      31%       12,786    69%      18,530
11 (London)         1,497      30%        3,493    70%       4,989
13 (London)         2,560      33%        5,274    67%       7,834
14 (London)           843      25%        2,490    75%       3,333
London              4,899      30%       11,257    70%      16,156

31 Non Clinical = Admin and Clerical + Ancillary + Management; Trust 4: WTEs not provided; Trust 14: outsourced their ancillary functions.

We also considered turnover as a possible factor that might explain spatial cost variation. In this sample we found some evidence to support this, but insufficient to generalise. The figures in Table 5.12 were taken from the payroll files and represent employee headcount. Joiners represent employees coming onto the payroll for the first time during 2004/05, and leavers are those who left the payroll during the year. The average turnover was 21%, with Trusts 4 and 9 (c.17%) and Trust 14 (27.6%) marking the boundaries and the other trusts close to the average. Trusts 1 and 5 (South West) counter the notion that it is only London trusts that experience high turnover. The average Staff Churn (representing overall HR activity) was 23%, with the same trusts marking the boundaries.

Table 5.12 Staff (employee headcount) Turnover by Trust

Trust No.      Start of Year (a)   Joiners (b)   Leavers (c)   Year End (d)   Turnover *   Staff Churn **   Growth
1 (S West)            6,347           1,325         1,557          6,115         25.0%         23.1%         -3.7%
5 (S West)            4,125           1,062           981          4,206         23.6%         24.5%          2.0%
7 (S West)            3,917           1,170           780          4,307         19.0%         23.7%         10.0%
4 (North)            12,362           2,649         2,146         12,865         17.0%         19.0%          4.1%
6 (North)             4,084             911           809          4,186         19.6%         20.8%          2.5%
9 (South)             6,670           1,350         1,170          6,850         17.3%         18.6%          2.7%
Non London           37,505           8,467         7,443         38,529         19.6%         20.9%          2.7%
11 (London)           5,341           1,553         1,283          5,611         23.4%         25.9%          5.1%
13 (London)           8,381           2,157         2,040          8,498         24.2%         24.9%          1.4%
14 (London)           3,605           1,324         1,035          3,894         27.6%         31.5%          8.0%
London               17,327           5,034         4,358         18,003         24.7%         26.6%          3.9%
Total                54,832          13,501        11,801         56,532         21.2%         22.7%          3.1%

* Turnover = c / ((a + d) / 2)
** Staff Churn = ((b + c) / 2) / ((a + d) / 2)
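The turnover and churn definitions above can be checked with a short sketch (illustrative only; the function and variable names are ours), using Trust 1's headcount figures from Table 5.12:

def turnover_and_churn(start, joiners, leavers, year_end):
    """Turnover and staff churn as defined under Table 5.12,
    both expressed relative to the average headcount over the year."""
    average_headcount = (start + year_end) / 2
    turnover = leavers / average_headcount
    churn = (joiners + leavers) / 2 / average_headcount
    growth = (year_end - start) / start
    return turnover, churn, growth

# Trust 1 (S West), employee headcount from Table 5.12
t, c, g = turnover_and_churn(start=6347, joiners=1325, leavers=1557, year_end=6115)
print(f"turnover {t:.1%}, churn {c:.1%}, growth {g:.1%}")  # ~25.0%, ~23.1%, ~-3.7%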

DISCUSSION OF PAYROLL ANALYSIS

Spatial Variation
We have observed strong evidence of spatial variation in wage costs between London and non-London trusts. All staff groups except for medical staff show higher average basic wage costs in London. The total uplift for London expressed as a percentage of the non-London Total Wage Cost (summarised in Table 5.2) is:
• Basic pay +4.2%
• Geographical allowance +9.0%
• Overtime +0.8%
• Other +4.4%
• Employers' Costs +3.7%
• Total Wage Cost +22.1%

Avoidable/Unavoidable
The spatial uplift in wage cost can be rationalised as being unavoidable because:
• Basic pay differences are most pronounced in administrative and clerical staff, who are directly competing with the commercial sector. The Scientific, Therapeutic and Technical staff group is second highest in its basic pay differential. Basic pay will account for elements of grade drift which are consistent with a more competitive labour market in London.
• Geographical allowance is exogenous and beyond the control of trusts.
• Overtime, Other (including maternity and unsocial hours payments) and Employers' Costs are contingent on the higher basic pay and so may be described as a 'flow-through effect'.
• It is reasonable to describe the full 22.1% sample uplift in the cost per wte for London as unavoidable, based on London's response to its labour market environment.

The question of avoidable/unavoidable cost distinctions is much less clear cut in relation to bank and agency staff. The resource amounts to an extra volume of staff WTE (unquantified here due to limitations of the payroll data), turning it into a productivity or efficiency issue as opposed to a wage cost issue. This is explored in more detail in Chapter 9 (using a national nurse staffing data set).

Feasibility
The account of data quality earlier leads us to conclude that it would not be feasible to use payroll data on a wider basis to develop a Specific Cost Approach to determine spatial variation in staff costs.


CHAPTER 6. QUALITATIVE SURVEY

A qualitative survey tool was designed and piloted in January/February 2006 within the Reference Panel. It took the form of a questionnaire which was administered in March-April 2006 through face-to-face or telephone interviews with HR and Nursing Directors or their nominees 32. The questionnaire contained mainly open questions and was structured across five sections 33 covering external labour market, organisational workforce flows, recruitment, general human resource issues and general trust issues. Twenty-two interviews (plus some supplementary interviews) were conducted across twelve of the fourteen trusts in the micro study. Results are summarised below under section and question headings. Perceptions have been tested separately against external data in Chapter 7.

32 In some cases additional interviews were undertaken with the HR or nursing/midwifery leads for maternity and orthopaedic services.
33 A further two sections were included to inform us about maternity and orthopaedic services. The responses provided some background to our general ledger specialty analysis but were not sufficiently systematic to warrant reporting here. For ease of presentation, questions from the survey tool have been renumbered into consecutive order.

EXTERNAL LABOUR MARKET

Question 1: Please give an overview of your labour market

(a) Labour Market for Clinical Professionals
The overwhelming response from the trusts was that the labour market for professional groups had eased considerably. Recruitment problems were confined to a small number of staff groups and to a number of specialist posts. There was a very good supply of newly-qualified nurses and physiotherapists and it was likely that there would be a surplus of supply over demand. There were also indications that, at least in some areas, the numbers of newly-qualified radiographers exceeded demand.

• Medical Staff: the labour market for both senior and junior medical staff was good in all of the trusts (Trust 1 in the South West excepted), apart from fairly minor problems in a number of specialties, e.g. radiology, anaesthetics, histopathology and microbiology. It should be noted that these specialties were not causing recruitment difficulties across the board but only in selected trusts. Trust 1 had serious difficulties in the recruitment of junior doctors and usually had to recruit from overseas. Also in Trust 1, consultant posts took a long time to fill since they often involved a move into the area.



• Nursing Staff: there was a strong labour market for qualified nursing staff. The majority of the trusts had expanded their labour market in the past through overseas recruitment, but there were no plans for further overseas drives. One trust commented on the significant costs associated with recruiting from abroad, e.g. airfares, cost of induction, work on registration and work permits. The two areas of recruitment difficulty highlighted were:

  o a measure of difficulty in recruiting to grades F and G. This was a consistent response. Some trusts suggested that these grades were difficult to fill because the level of responsibility was not adequately rewarded. Another view put forward was that there was intense competition from community services, which were developing roles such as nurse practitioner and community matron.

  o difficulties involved in recruiting to specialist areas such as NICU, PICU, ICU and theatres. This was also a fairly consistent response.



• Newly-Qualified Nurses: all of the trusts had a sufficient supply of newly qualified nurses and there was some evidence that this had been the case for 2-3 years. A significant number of the trusts reported that there would be a surplus during 2006/7.



• Midwives: the labour market for midwives was reasonably strong. Trust 11 (London) reported that the labour market had been tight ("we are always recruiting") but that the appointment of a Recruitment and Retention Coordinator had improved the situation. Trust 7 (South West) indicated how much the labour market had improved: "3 years ago the trust would have had 80-90 vacancies for midwives. Currently we have none". Trust 5 (South West) struck a similar note: "Midwives used to be a problem, but not at the moment. If anything, we have a slight surplus and cannot accommodate all the hours that staff want to work".



• Allied Health Professionals: Trusts reported that the labour market had eased and a number of them indicated that there was likely to be a surplus of newly-qualified physiotherapists this year. Trusts were still having some difficulty in recruiting more senior staff (especially physiotherapists, because of competition from the private sector). The main recruitment problem was for radiographers and, more seriously, for therapy radiographers. However, one trust (No. 5) indicated that there was likely to be a surplus of newly-qualified radiographers this year.



• Scientific and Technical: A number of trusts highlighted the difficulties of recruiting and retaining pharmacists because of the competition from the private sector. Other recruitment difficulties mentioned by a minority of trusts were: biomedical scientists and physiological measurement technicians.

(b) Labour Market for Healthcare Assistants and AHP Helpers
The labour market for healthcare assistants and AHP helpers was buoyant. Trusts had worked hard on NVQ schemes and initiatives such as facilitating HCAs' entry into professional training. The 80-20 scheme had proved to be effective and very popular. Trust 6 (North), for example, said that they had a waiting list for HCA posts, and other responses indicated that AHP helper posts were very popular. The only three exceptions are outlined below:

• Trust 8 (South West) had a very low (c.0.7%) unemployment rate, so that there were few potential recruits. The competition in the market resulted in high wastage rates among those who had been recruited and trained. However, the situation seemed to have started improving since the introduction of Agenda for Change.



• Trust 7 (South West) was operating in a buoyant economy fuelled by the service sector and tourism. The main competitors were the hotel trade, supermarkets and organisations such as Barclays Bank. The trust could recruit, with some difficulty, but turnover was high as staff were fairly easily attracted, if only temporarily, to other employers. The perception was that the private sector "can afford to pay what is required". Recently a supermarket chain had opened a superstore and paid £8.50 per hour for routine work.

• Trust 4 (North) had difficulty recruiting because of competition in the market.

(c) Labour Market for Administrative and Clerical
The majority of the trusts reported recruitment difficulties for medical secretaries and a minority mentioned some problems for small groups such as clinical coders. Otherwise, there seemed to be a more favourable market for this staff group in London than elsewhere.

• Trust 11 (London) reported that the labour market was reasonable except for clinical coders and, to a certain extent, medical secretaries.



• Trust 12 (London) indicated that they could recruit but that they were concerned about the quality because of competition from the private sector. The main shortage area was medical secretaries. The trust went on to comment on the uncertainty of how Agenda for Change would affect the market in the future: "Grade drift in London has been clawed out through AfC".

Five trusts reported difficulties for the lower grades of A&C staff as well as the consistent problems of recruiting medical secretaries:

• Trust 1 (South West) and Trust 6 (North): recruitment to the lower grades of A&C was difficult because of the growth of competitors.



• Trust 8 (South West): this was a difficult labour market because of competition from local employers and very low unemployment rates. Previously, there had been an element of grade drift in the trust and this had been highlighted by Agenda for Change.



• Trust 9 (South): reported a shortage in the A&C labour market.



• Trust 4 (North): the recruitment difficulties were due to the growth in the economy, stimulated by a strong regeneration programme which has been successful in bringing other work to the city, e.g. call centres.

(d) Labour Market for Ancillary Staff Groups
London trusts reported that there was a reasonable labour market for ancillary staff groups. Trust 14, for example, said that recruitment of unqualified staff was easier; the area draws in a lot of asylum seekers and the trust works hard at recruiting from this group. Recruitment problems for these staff groups (e.g. porters, cleaning, catering) were concentrated, on the whole, in areas where the tourist industry produced intense competition from hotels and the leisure industry and where competition in the market was growing. The other serious competition came from large supermarkets. There were indications that the market had improved since the introduction of Agenda for Change, but the downside was that the private sector could raise its pay levels in response.


All trusts in the South West reported that this was a very difficult market with strong competition from the hotel industry, tourism and supermarkets. Trust 5 had an ongoing recruitment process in place and was able to employ a number of students on short-term contracts to boost the numbers. This was a difficult, high-turnover (20%), transient market. The situation had improved in Trust 1 with the introduction of Agenda for Change. Trust 8 paid Band 2 for laundry staff by making the individuals multi-skilled and noted that "we are making more of a meal of the definition of these jobs than we might if we had a buoyant labour market". Trusts 4 and 6 in the North similarly experienced recruitment difficulties due to very low unemployment rates and to competition from the tourist industry and other large manufacturing industries.

Question 2: Who are your competitors as an employer, e.g. which other trusts?

The responses to this question highlighted the differences in "local labour markets" for professionals and other senior staff as against support staff. Trusts 7 and 8 (South West) identified a wide range of NHS trusts (within a 50 mile radius) as their competitors for professional/senior staff. Trust 11 (London) identified neighbouring Trust 13 (distance 5 miles) as a competitor, in addition to trusts in Surrey. For support staff, the "local labour market" tended to be much tighter and to be subject to competition primarily from the private sector.

For professionally qualified staff and other senior staff, the main competitors were listed as neighbouring trusts, PCTs and the independent sector. Judging from the responses, there is intense competition between trusts (and other healthcare organisations) in London, the North and the South West, with the exception of Trust 1. For example, Trust 4 has 4 Foundation Trusts, a Care Trust, 4 PCTs and 2 private hospitals all within commuter range. Trust 8 (South West) commented that although they have no healthcare competitors "on our doorstep … we are surrounded by hospitals" and provided a summary of the travel times to/from the town for 6 hospitals ranging 30 – 60 minutes (50 miles) away, pointing out that people who live midway would have a shorter journey time. Trust 6 (North) and Trust 1 (South West), on the other hand, have less competition from other NHS trusts because of their rural setting and the distances involved. At the same time, this means that there is a smaller pool of staff to draw from.

For support staff, the main competition was from the private sector, especially hotels, supermarkets, the leisure industry and other big employers, notably in tourist areas such as parts of the North and the South West.

Question 3: Do you offer the same or different salaries as your competing employers?

There was a high degree of consistency in the responses to this question. Trusts were paying, on the whole, in accordance with Agenda for Change and there was an assumption that this would iron out most of the grade drift associated with the previous pay system. The majority of the trusts said that they would not pay more than Agenda for Change. Trust 12 (London) commented that previously trusts could exercise discretion in deciding the payscale point at which a member of staff started. "This discretion meant that we could employ people on higher scale points. Under Agenda for Change and the Consultants' Contract, salaries are prescribed to a greater extent, so we have lost that discretion."


At the same time, some of the responses highlighted a degree of suspicion that trusts competing in the same market (especially Foundation Trusts with large trust funds) would pay more favourably. However, there was no hard evidence to support this. Trust 14 (London) made the important point that RRPs and higher bandings would simply be inflationary in the health economy without achieving any expansion in the labour market: "It is a popular myth that London hospitals need to pay more to recruit staff." Conversely, Trust 12 (London) made the point that "the London employment premium cannot be calculated by a London cost of living formula, but rather what the market dictates we have to pay to recruit and retain staff."

Two trusts were paying higher than the base rate. Trust 4 (North) had its own local pay system which paid 3% above the national rate. Agenda for Change was being offered to staff but it was anticipated that many would not accept it. Trust 8 (South West) paid qualified nurses and AHPs a 2% premium. Pharmacists received a higher premium of £5,000 per annum. A small number of trusts indicated that they had local arrangements for removal expenses or shift allowances, although these were now being included in Agenda for Change.

In relation to support staff (e.g. lower grades of A&C, ancillary, porters, catering), Agenda for Change had pushed the NHS rates above those of its competitors in social services and the independent sector. It had also made them more competitive in relation to the leisure industry and supermarkets.

Question 4: What about other terms and conditions – do they differ from those of competing employers?

The responses to this question were included in Question 3. Trusts tended to offer the same terms and conditions. The clear message to emerge was that NHS Terms and Conditions are popular with staff and give the service a competitive edge against the private sector.

Question 5: What are the constraints on your labour market in terms of: (a) Housing (b) Transport (c) Child Care?

(a) Housing
Housing was the single biggest constraint highlighted by the majority of trusts. The responses from the trusts have been ranked in line with the degree of difficulty they indicated they were experiencing (low, medium, high and very high). It is clear from the responses that it is not only the cost of housing which is an issue but also the availability of suitable housing stock. Transport links are also a key element of this complex problem.

• Trust 1 (South West)    very high
• Trust 14 (London)       very high
• Trust 12 (London)       very high
• Trust 4 (North)         high
• Trust 11 (London)       high
• Trust 8 (South West)    high
• Trust 5 (South West)    high
• Trust 7 (South West)    high
• Trust 2 (South)         high
• Trust 6 (North)         medium
• Trust 3 (North)         low
• Trust 9 (South)         low

A summary of the main responses is outlined below.

Trust 1 (South West): The cost of housing was a major constraint. The tourism industry in the South West has had significant investment and was now a roundthe-year rather than just a seasonal employer. Housing costs had increased dramatically in the last 5 years in line with the growth in the economy. The rental market was very difficult because the owners could make much more money from short lets to tourists. The local authority was focused on finding affordable accommodation for the current population rather than providing housing for incoming healthcare staff. Migration and housing compounded labour market trends. Generally there was an out-of-county movement of economically ambitious young people and an inward movement of equity rich but “work poor” older people. Where young people remained in the county they were drawn to urbanised areas where housing was more affordable. The older population was pulled to non-urban areas. This has been termed the opposite of the South East escalator where there is a net influx of prospective economically valuable employees.



Trust 2 (South): Housing was a major constraint and had proved to be a deterrent when staff were considering which part of the UK to work in.



Trust 3 (North): Housing was cheap but located in areas of high deprivation which was regarded as undesirable by professional staff (similar to Trust 4’s analysis). Senior staff tended to commute from more than 25 miles away.



Trust 4 (North): There was a stark contrast between two geographical areas of the city, one of which was affluent while the other was very deprived with 14% of the population with an income below £5000 per annum. Housing was, therefore, very expensive or very cheap depending on the geographical area. The housing market for consultants was good with a range of attractive housing. However, this housing stock was too expensive for middle grade professionals. Unfortunately, the housing stock which was cheaper and within the range of the majority of staff was located in unattractive areas where staff did not want to live.



Trust 5 (South West): The trust regarded housing costs and availability as a major constraint with rapid recent increases in the price of property. There was not much rental accommodation, except for holiday lets. The majority of the housing stock was more suitable for an older, retired population with a preponderance of bungalows and flats.



Trust 6 (North): Housing was regarded as an obstacle to recruitment in an area which is expensive with attractive, residential housing stock. There have been occasions when applicants have refused to take up a post because, although they liked the city, the housing market was too difficult.



Trust 7 (South West): This was a major constraint – just recently three people had been offered posts at the trust but turned them down because of housing costs. Trusts 5 and 7 were similar in terms of housing costs and the lack of appropriate housing stock which exacerbated the problems associated with house prices. The trust was in partnership with local authorities and housing associations to ensure that staff could get a start on the property ladder.




Trust 8 (South West): Housing was a major constraint in this former market town which lacks a broad housing mix. It is a very middle class area and housing costs tended to be towards the upper end, with a shortage of flats or terraced houses.



Trust 9 (South): Housing was not a problem, even though it was expensive. The trust was managing to overcome some of the problems through collaboration with key worker schemes.



Trust 11 (London): Housing was a constraint because of high costs and this has been confirmed from an analysis of leavers’ forms. Even the housing in the wider, surrounding areas was expensive. When staff decided to move to more attractive areas, e.g. Surrey, there was intense competition for staff from other hospitals in this area and the trust has worked hard on this problem. It has developed links with the SHA’s key worker scheme, opened an accommodation office for housing advice and provided some accommodation on site for junior doctors, nurses and pharmacists.



Trust 12 (London): “Housing is the major constraint. Prices in London are higher than elsewhere and people are paying a higher percentage of their income on housing. The costs of transport are probably no higher than elsewhere but people are spending more time commuting. When people marry they move outside London – this is a lifestyle issue. Consultants will not move into London so we have to design innovative packages, e.g. generous relocation packages. The quality of education is a further issue. State schools in London are perceived as not being up to the mark, so people tend to pay for a private education. What is not clear is the extent to which we need to pay people for the extra costs of housing, for the extra cost of reduced quality of life, or whether people are content to be worse off than counterparts in other parts of the country for the sake of working in a London Teaching Hospital.”



Trust 14 (London): “Housing is the big constraint. The housing and travel link is complex and we are always working to address it. We know, for example, that staff would happily live 30 minutes down the train line to find affordable accommodation for their family, but this immediately knocks £3,000 off their annual earnings. So we are working with the train companies to persuade them to offer cheap off-peak season tickets for staff who can prove they are shift workers. It is a scandal that professional couples aged 25-28 are forced to share accommodation with other couples in order to live in London. We know from empirical work that the main reason why people tend to leave their jobs is to seek professional development. Here this only applies to 31% of leavers, whereas 59% left because of housing.”

(b) Transport
London had the best transport links with good tube and rail access. A significant number of the trusts were faced with inadequate public transport systems and/or poor road infrastructures. Parking was a problem for almost all of the trusts.

Trust 1 (South West): Travel links were poor with very little public transport. Essentially, the trust had to rely on staff driving to work. This was complicated by the fact that the service was provided on 3 main sites plus 15 community sites.




Trust 2 (South): Transport was described as difficult with a poor road infrastructure. The transport was poor in rural areas and, unfortunately, the hospital was split across sites that were 22 miles apart, one of which was outside the town that it was serving.



Trust 3 (North): Transport links were very good. A bus route ran through the hospital site and road and rail links were good.



Trust 4 (North): Public transport was good but also expensive. Parking problems were very bad with little prospect of resolving them because of landlocked sites.



Trust 5 (South West): Rural transport was poor and buses did not cover all areas. Parking on site was a problem.



Trust 6 (North): Public transport was poor and there were no buses going past the hospital, compounded by severe parking problems. The lack of parking put people off coming to work at the hospital: applicants had refused to take up a post when they heard that they would not get a parking space. Similarly, agency staff would not return when they realised how difficult the parking situation was.



Trust 7 (South West): Public transport was reasonable during working hours but did not cover shift hours satisfactorily. Rural areas had a poor transport system. The trust had responded to the problems by contributing to the Park and Ride cost.



Trust 8 (South West): Transport was reasonable with a good bus service from the centre of town, although staff travelling from elsewhere would need to take 2 buses. Because of free parking (which was a recruitment and retention cost) over half of the staff brought their own cars.



Trust 9 (South): Transport was reasonable with bus routes to the hospital. Parking was, however, a major problem.



Trusts 11, 12, 14 (London): transport links were excellent with ready access to the tube and rail systems. This meant that staff could be recruited from all over London and the Home Counties, stretching as far as the South Coast. The converse was also true, of course, and staff could be attracted to other NHS organisations competing in the labour market. Parking was a major problem.

(c) Child Care
There was a high degree of consistency in the responses to this question. Trusts have put considerable effort into providing child care facilities and also advice to staff through Child Care Coordinators, which were found to be effective. Whereas the general response was that "there can never be enough child care", this was not regarded as a problem by the majority of the trusts. A number of the trusts described their facilities as "good" and child care as "not being an issue". The only problem was raised by Trust 14 (London), where there is no space on site for a crèche. The trust does, however, have a child care coordinator.


ORGANISATIONAL WORKFORCE FLOWS

Questions 6a – 6c: Is your turnover rate high, medium or low? What is the turnover rate of qualified nurses? What is the turnover rate among midwives?

Turnover was reported to be reducing, with the majority of trusts stating that turnover for registered nurses and midwives was low. None of the trusts indicated that they regarded turnover as high. The other clear message was that turnover has been reducing over the past 2-3 years. For registered nurses, eight of the trusts described the turnover rate as low, with a range between 5% and 12%. For midwives, seven of the trusts described the turnover rate as low, with a range from 3.7% to 11%. In nursing the highest turnover was in the lower grades. Trust 11 (London), for example, had low turnover (8%) for senior nursing staff and high turnover (20%) for grades D and E, consistent with junior grades moving to obtain experience or with staff moving from rented accommodation. Turnover for HCAs and AHP helpers was reported by a number of the trusts to be higher than the rates for qualified staff.

Questions 7a and 7b: Are your vacancy rates high, medium or low? What is the vacancy rate among qualified nurses?

All of the trusts, with one exception (Trust 1), indicated that vacancies were reducing. The majority of responses described the vacancy rate as low. The favourable labour market position at present was demonstrated by two trusts who said that they had only six vacancies for registered nurses (Trust 8) or no vacancies for registered nurses (Trust 7). Trust 1 was the only trust which said that the vacancy rate for registered nurses was increasing rather than reducing; its current vacancy rate of 9% was the highest for the last 10 years. London trusts appeared to have higher vacancy rates than non-London trusts (see Table 6.2) but had a different view of high versus low; 8% - 11% was regarded as low in London whereas 4% - 6% was the range quoted as low outside London.

Questions 8a and 8b: Is sickness absence high, medium or low? What is the typical sickness absence rate among nurses?

The overall finding was that the trusts are working very hard to control sickness absence rates. The fairly consistent response was that sickness absence for nurses was 5 – 6%. Most trusts had a target to achieve a reduction to 4 or 4.5%.

Questions 9a – 11b: These questions refer to the use of bank, agency and overtime. Responses relating to registered Nursing & Midwifery staff are summarised here:

• Overtime: Two of the London hospitals (Trusts 12 and 14) stated that they do not use overtime and Trust 11 (London) defined its usage as low. The only other hospital which does not use overtime is Trust 3 (North). The other trusts defined their use of overtime as low and said that it was used, primarily, in areas (such as ICU, NICU and theatres) where the specialist knowledge of the trust's staff was the best way to provide cover. The only exception to this pattern was Trust 1, which defined its use of overtime as high.

• Bank: The three London trusts and Trust 1 (South West) defined their use of bank staff as high. The others defined usage as either medium or low. Trust 1 has developed its own bank, which was also used as a recruitment pool. Trust 7 (South West) used its bank staff to cover a "flexible ward" which was opened only when the service required it.

• Agency: The fairly consistent response was that agency staff were used only as a last resort and to cover difficult vacancies (such as ICU or SCBUs) which could not be covered, at that point, by the bank. Most of the trusts had tight controls on agency costs.

RECRUITMENT

Question 12: Would you describe the quality of recruits as high, medium or low?

In general, the trusts described the quality of recruits as medium or high. There were, however, a number of recurring themes:

• Newly-qualified nurses (NQNs): the need to ensure that NQNs were more fit for practice was raised by a number of trusts. This was especially important in a period when numbers were being reduced.

• Direct-entry midwives: some concern was expressed at the length of time it takes for direct-entry midwives to become fully competent in the workplace.

• Overseas recruits: two trusts highlighted the problems associated with the time required for cultural adaptation and fluency in English for all recruits and the importance of "interpretation and application of language", with particular reference to the quality of junior doctors.

• Junior doctors: some concern was expressed regarding the quality of junior doctors and the fact that they now have less practical experience. Supervision is changing: senior doctors are becoming increasingly specialist while junior doctors need generalist training.

• Admin & clerical and ancillary: six trusts stated that A&C and/or ancillary recruits were often of low quality and required training.

Trust 8 (South West) indicated that the quality of recruits was improving as the labour market loosened and that competition in recruitment gave trusts more choice in selection. Trust 1 (South West) had a lack of choice in pharmacy, AHPs and pathology which, in their view, was a major disadvantage. Age is an indicator of length of service, and Trust 12 (London) noted that the trust's workforce was younger, "which equates to less experience".

Question 13: What are the characteristics of your recruits in terms of (a) Age (b) Ethnicity (c) Experience (d) Part-time/Full-time?

• Age: Trusts 12 and 14 (London) both stated that their workforces were relatively young and, therefore, likely to have higher turnover. Trust 14 had higher than average numbers in the 35-45 age group and less than average aged 35-45. In contrast, Trust 8 (South West) stated that the trust was "drifting towards an older workforce". There has been a gradual ageing in the Trust 2 (South) nursing workforce due to the secondment of HCAs to undertake professional training. In Trust 1 (South West), a high proportion of the workforce was over 55.

• Ethnicity: Trusts tended, on the whole, to have higher levels of ethnic minority staff than the proportion in the population. This was especially the case where the trust had a significant number of nurses from the Philippines. Trust 8 (South West) stated that the NHS was the main source of employment for the local ethnic minority population.

• Part-time/full-time: London trusts reported less part-time working than the other trusts in the sample. Trust 11's staff were 87% full-time; Trust 14 stated that they had a predominantly full-time workforce; Trust 12's staff were 90% full-time – the trust explained this by saying that staff need full-time earnings. In contrast, Trusts 7 & 8 (South West), 9 (South) and 6 (North) stated that part-timers account for about 50% of the total headcount. Trust 1 (South West) had a high number of part-timers, but a number of these did two part-time jobs, either in the trust or with the trust and the PCT.

GENERAL HUMAN RESOURCE ISSUES

Question 14: Do you offer family-friendly terms and conditions? If so, what is their impact?

The majority of the trusts had achieved Improving Working Lives Practice Plus Standard. They all had a wide range of flexible working arrangements in place, e.g.:
• Term-time working
• Annualised hours
• Job share
• Condensing hours into fewer days
• Part-time working
• Child-care facilities

All of the trusts were confident that these arrangements were important in terms of both recruitment and retention. Trust 11 (London) stated that family-friendly policies were important in recruitment; candidates regularly ask for details and there was an expectation that policies would be in place. Trust 6 (North) had undertaken an assessment of the policies which showed that the options were well-known to staff and had a high take-up rate. Trust 8 (South West) noted that the Healthcare Commission’s quality of work-life balance put the trust in the top 20% of acute hospitals. Likewise, 76% of staff used flexible working opportunities. Trusts 1, 7 & 9 had a wide range of policies in place. However, there was a note of caution in their responses with regard to the need to achieve more of a balance for the future. Trust 9 (South) was conscious of the need to ensure that a balance was struck between staff aspirations and the need for efficiency and cost-effectiveness. Trust 1 (South West) commented on the difficulties of arranging rotas in a familyfriendly working environment. Similarly, Trust 7 (South West) said that flexible working and a high number of part-timers had consequences for full-time staff and covering rotas.


Question 15: What is your policy in setting salary budgets, e.g. mid-point of the scale, near the top of the scale?

The majority of the trusts reported that they set budgets at the mid-point of the scale for a new development. Budgets that included existing staff were set on actual costs.

Question 16: How do you attract staff back?

There were three particular themes from the trusts' responses:

• Return-to-practice initiatives were not really necessary at present because of the favourable labour market (Trusts 2, 3, 5, 8, 9, 11, 12).

• Some trusts maintained contact with ex-employees by keeping them on the bank and using them for occasional sessions (Trusts 1 and 6).

• The return-to-practice initiatives have not shown a good return on the high level of investment involved. Trust 3 gave details of a return-to-practice initiative across the SHA where the response was very low compared to the degree of investment.

It was apparent that return-to-practice initiatives had been toned down or discontinued because of the favourable labour market and also the level of investment required. However, trusts were pursuing other links, e.g. using the bank and "keep in touch" initiatives (Trusts 14 and 7).

Question 17: Any other particular initiatives, e.g. widening access points?

The trusts used a range of innovative schemes to extend their labour markets. There was a strong focus on growing and developing their own workforces. Other key aims were to play a role in raising the skills level of the population and in the economic regeneration of the area. Examples are outlined below.

• Overseas recruitment: A significant number of the trusts extended their labour market by recruiting from abroad. The most popular destination was the Philippines, but recruits were also drawn from Spain, India and Italy. London trusts provided short-term employment for Australians and New Zealanders. All of the trusts emphasised that they had no plans for further recruitment drives to overseas destinations.



• Sponsoring HCAs and AHP helpers: A number of trusts were sponsoring HCAs and/or AHP helpers to undertake professional training. Trust 5, for example, participated in the 80-20 scheme in which the former Workforce Development Confederation paid 80% of the training costs and the trust 20%. The training programmes were aimed at enabling HCAs and helpers to train as registered nurses and physiotherapists. The trust regarded this scheme as a major success and the right approach to the labour market since it enabled the trust to grow its own staff. This was particularly important since the staff concerned had already demonstrated a commitment to the organisation. Trust 8 also participated in the 80-20 scheme and, in addition, was sponsoring healthcare support workers on a 4-year degree course with the Open University. The first cohort, of about 12 students, was in its 3rd year and a second cohort was now entering the scheme. The students' ages ranged from early 20s to early 40s. The scheme was extremely successful and the students were doing very well, with outstanding academic marks. Trusts 2, 3 and 11 had established similar schemes.

• Development of the support workforce: A number of the trusts highlighted their commitment to NVQs and the general development of the support workforce. Trust 1, for example, had invested heavily in NVQ training and, at any time, would have a large number of staff training for NVQs, including HCAs, cleaners, porters and catering staff, resulting in a different skill mix with a high proportion of experienced HCAs in the workforce. The trust had some long-serving HCAs (10-15 years in post).



• Raising the skills level in the population: Trust 12 (London) has had a big drive to bring local people into employment through skills escalator programmes, literacy classes, training in NVQs and providing gateways into the professions. The trust was confident that these training initiatives had helped to reduce vacancies and also their level of dependence on overseas recruitment. In their view these schemes were consistent with the government's strategy to use hospitals as vehicles for economic regeneration. Trusts 1 (South West) and 2 (South) were both providing training in skills for life, numeracy and literacy for non-traditional learners.



• Social responsibility: Trust 2 had made a commitment to fill 50% of vacancies for HCA posts from Job Centre Plus candidates. This was a response to the unemployment levels in the area and the trust was given a financial incentive for this initiative. Trust 6 was involved in an employability scheme which had given them important links to organisations such as job centres, voluntary schemes, ethnic groups and single-parent support schemes.

GENERAL TRUST ISSUES

Question 18: What is the site configuration?

Five of the trusts (all outside London) provided services on split sites. The only responses included here refer to split sites where there is more than one main site:

• Trust 1 has 3 sites and covers 13 widespread community sites (for diagnostics and outpatients);
• Trust 2 works across 2 sites which are over 20 miles apart, and there is a poor road infrastructure;
• Trust 4 works across 3 main sites;
• Trust 5 works across a split site with 2 hospitals which are just 2 miles apart;
• Trust 9 works across sites in 3 towns.


Table 6.1 Turnover
(Trust-by-trust grid for the 14 Reference Panel trusts: high/medium/low ratings, with reported turnover rates where quoted, for total staff, registered nurses and registered midwives.)

Table 6.2 Vacancies
(Trust-by-trust grid for the 14 Reference Panel trusts: high/medium/low ratings, with reported vacancy rates where quoted, for total staff, registered nurses and registered midwives.)

Table 6.3 Overtime, Bank and Agency
(Trust-by-trust grid for the 14 Reference Panel trusts: high/medium/low ratings of overtime, bank and agency usage, with rates where quoted.)

CHAPTER 7. PERCEPTIONS & NATIONAL DATA

From the qualitative survey among Reference Panel trusts, three geographical labour market profiles have emerged, consistent with the sample clustering and distribution in the South West, London and the North. This chapter summarises the perceptions and considers the extent to which we may generalise from them, based on comparison and reference to national data.

In the South West, trusts described a stable workforce that remained with the organisation for a long time, resulting in low turnover, with a high average age and a high proportion of staff employed on a part-time basis. There was historically some use of overtime, and bank staff were frequently drawn from a pool of part-time staff who worked exclusively on the bank. A recurrent theme in the interviews was the buoyancy of the local economy, which increased competition among employers within the local labour market for support workers, and which raised the cost of housing to a level which was unaffordable to the local population (housing being increasingly purchased by equity-rich but work-poor older people moving to the area in retirement).

The trusts in the North of England presented a similar profile in terms of workforce age and stability, but described a less overheated local economy and housing market. Recruitment of support workers was not so difficult and the cost of housing had remained lower than in other parts of England (although the issue of affordable housing for nurses and other NHS workers was a theme which ran throughout the interviews). The quality of the workforce was regarded as high due to the stability of the workforce.

London trusts described a picture of a younger workforce, living more often in rented accommodation, with higher turnover leading to higher vacancy factors at any given time and requiring greater use of bank nurses to cover these vacancies. It was also noted that the proportion of part-time staff was relatively low (estimated by one trust as 10%), since staff could not afford to earn less than a full-time salary. One trust observed that bank staff were drawn from their own full-time employees, who routinely worked bank shifts to enhance their wages.

Demand and Supply of Labour
A general observation that applied to all labour markets was that the demand-supply balance had shifted, particularly among newly qualified nurses and physiotherapy staff. The reasons for the change were threefold: (i) increases in pay associated with recent awards; (ii) increases in the supply of newly qualified recruits through growth in the number of training commissions; and (iii) a degree of insecurity which had entered the job market due to financial instability in the NHS, associated with announcements of redundancies, that had increased retention, reduced turnover and so reduced the number of vacancies.

There was a feeling that the NHS labour market had entered a new era of improved recruitment, which offered some advantages in raising the quality of recruits since employers could be more selective. Overseas recruitment drives, it appeared, were a thing of the past.


A review of national statistics and literature indicates that the improvement in the labour market is likely to be sustained in the future. The total HCHS and General Practice workforce grew rapidly in the 10 years 1995-2005 (see Table 7.1), with an increase of 2.7% per annum in the total FTE between 1995 and 2005 and 4% per annum between 1999 and 2005.

In March 2006 the Workforce Review Team published a draft set of recommendations for 2007/08. The Team revealed an acceptance that there will be an emphasis on productivity gains together with reductions in staffing and training commissions in some areas in line with financial recovery strategies, envisaging "a real prospect of significant unemployment amongst trained staff in all professions". The Workforce Review Team also supports the attitudinal survey data from Reference Panel trusts regarding the surplus of newly trained staff in the market this year: "It remains the case that newly trained staff are having difficulty in finding jobs in a number of specialties and staff groups. In some cases, such as physiotherapy, there are vacancies at senior levels, while newly trained staff remain unemployed, although a national action plan is in place to deal with this. In a few medical specialties, a shortage of posts for those completing training has been observed."

The RCN Labour Market Review shows that training places for England increased by 5,577 or 30% between 1999-2000 and 2003-04, stimulating growth in the supply of newly qualified nurses. In terms of qualified nurses, the RCN comments: "International recruitment grew rapidly in the late 1990s, and in recent years it has accounted for 40-50% of new entrants, or about 12,000 or 14,000 new registrants per year. The upward trend has now stabilised and there may be a slight decline in future years" (RCN, 2005).

Table 7.1 Average Annual Change for Selected Periods

STAFF GROUP                                   1995-2005   1997-2005   1999-2005   2004-2005
Total HCHS and General Practice Workforce        2.7%        3.4%        4%          3%
Medical Consultants                              5.2%        5.3%        5.6%        5.2%
Qualified N&M (HCHS)                             2.2%        2.8%        3.5%        1.9%
Qualified AHPs                                   4%          3.9%        4.1%        3.6%
Other Qualified ST&T                             4%          4.4%        4.9%        4.8%
Manager & Senior Mgr                             6.5%        7.3%        8.2%        4.3%

Source: Workforce Statistics from The Information Centre (Government Statistics).

Vacancies

Qualitative survey reports of reduced vacancies are consistent with the NHS Vacancy Survey of March 2005, which shows a decrease in the 3-month vacancy rate for all groups between 2004 and 2005:

• Consultants: reduced from 4.4% in 2004 to 3.3% in 2005;
• Qualified nurses: reduced from 2.6% in March 2004 to 1.9% in March 2005;
• Allied health professionals: reduced from 4.3% in 2004 to 3.4% in 2005;
• Qualified scientific, therapeutic and technical staff: reduced from 2.6% in 2004 to 2.2% in 2005.

The three-month vacancy factor is defined as those vacancies which had taken 3 months or more to fill, expressed as a percentage of staff in post plus 3-month vacancies. They are not, therefore, directly comparable (except in trends) with the Qualitative Survey responses which refer to all vacancies that the trust is endeavouring to fill. There was some evidence from the Qualitative Survey that London has a higher level of vacancies, although this could be due to the higher turnover in London which means that there will always be a higher level of vacancies to be filled. The NHS Vacancy Survey for March 2005 shows that London had the highest 3-month vacancy factor for non-medical professional groups. Within this there are wide variations in London vacancy rates (from 1.9% in South West London to 5.3% in South East London). Also Bedfordshire and Hertfordshire have the second highest vacancy factor (5.0%) in England – this is higher than 3 of the London areas. Similarly, Hampshire and the Isle of Wight (at 3.2%) and Essex (at 2.6%) have vacancy factors which are higher than either South West or North West London.

Turnover

National data confirms the survey findings that turnover has reduced sharply. Between 2000 and 2003 the turnover rate for registered nurses in England was fairly stable at around 13-14%. The turnover rate for London during this period was much higher, at 22% in 2000, dropping to around 17% in 2003 (Hutt and Buchan, 2005). Data from the Office of Manpower Economics for 2004-05 shows an average turnover rate for registered nursing staff of 10.8%, with a range from 8.8% in the North East to 14.3% in London. The data for occupational groupings shows that RSCNs had the highest turnover at 12% and midwives had the lowest at 7.9% (Office of Manpower Economics, 2005).

Turnover for HCAs and AHP helpers was reported by a number of the trusts to be higher than the rates for qualified staff. Data from the Office of Manpower Economics is consistent with this response. In 2005 the average turnover rate for nursing auxiliaries for England and Wales was reported to be 12.5%, compared with 10.8% for registered nursing staff (Office of Manpower Economics, 2005).

Turnover data for the period 2003/4 (based on census data for 2003 and 2004) supports the perception that low MFF trusts have a more stable workforce. Table 7.2 summarises workforce flows on a quintile basis, in which quintile 1 represents the 20% of trusts at the lower end of the staff MFF range and quintile 5 represents the 20% at the highest end of the MFF range, mainly located in London. It shows that 19% of nursing staff in quintile 5 left their organisations compared to 13% in quintile 1 during the period. It is noteworthy that the workforce grew by 3% during the period, since joiners equalled 19% of the workforce while leavers equalled 16% (consistent with the growth figures described earlier).


Table 7.2 Nursing Staff: Leavers

  QUINTILE               TEACHING   NON-TEACHING   TOTAL
  1                         -           13%         13%
  2                        14%          13%         14%
  3                        15%          16%         16%
  4                        15%          17%         17%
  5                        22%          18%         19%
  All Hospital Trusts      17%          15%         16%

Table 7.3 Nursing Staff: Joiners

  QUINTILE               TEACHING   NON-TEACHING   TOTAL
  1                         -           16%         16%
  2                        16%          16%         16%
  3                        18%          20%         20%
  4                        18%          20%         20%
  5                        25%          23%         24%
  All Hospital Trusts      21%          19%         19%

Age Profile

Trusts outside London described an increasingly mature workforce. The RCN 2004/5 labour market review confirms this upward drift in age: “the number of nurses on the register who were 55 years or over had risen from 9% to 16%” between 1991 and 2004/05. With the exception of community and primary care, the NHS in London has a younger workforce than England as a whole. In autumn 2004 the DH launched a campaign to attract people over 50 to work in the NHS. London now has the highest figure of nurse entrants over 26 years old at 57%, compared with an average of 46% across England (Hutt and Buchan, 2005).

An analysis of 2004 census data presented in Table 7.4, based on the average age of all qualified nursing staff in hospital trusts, is consistent with the survey perceptions (i.e. younger in high MFF trusts), but the margin of difference is low, ranging from 42.1 – 41.3 across all nurses. The difference is more pronounced among men (who comprise 10% of FTEs).

Table 7.4 Average Age of Qualified Nurses

  QUINTILE               MEN     WOMEN   TOTAL
  1                      40.1    42.5    42.1
  2                      40.2    42.4    42.0
  3                      38.9    42.4    41.8
  4                      38.3    42.4    41.7
  5                      37.8    42.0    41.3
  All Hospital Trusts    39.1    42.4    41.8

The more detailed cumulative age distribution below gives greater credence to perceptions, showing that only 35% of qualified nurses in quintile 1 are less than 35 years old compared to 50% in quintile 5.


Table 7.5 Cumulative Age Distribution of Qualified Nurses

  % Cumulative Age Distribution    Quintile 1     2      3      4      5     All
  Up to 25 years                        7%        8%     8%     9%     9%     8%
  Up to 30 years                       20%       23%    24%    27%    28%    24%
  Up to 35 years                       35%       39%    41%    44%    50%    42%
  Up to 40 years                       53%       56%    57%    60%    64%    58%
  Up to 45 years                       71%       73%    74%    75%    77%    74%
  Up to 50 years                       85%       87%    86%    86%    86%    86%
  Up to 55 years                       94%       95%    94%    94%    94%    94%
  Up to 60 years                       99%       99%    99%    99%    99%    99%
  Up to 65 years                      100%      100%   100%   100%   100%   100%

Table 7.6 Summary of Average Ages Across the Non-Medical Workforce

  Quintile:                                                  1      2      3      4      5
  Qualified nursing, midwifery & health visiting staff     42.1   42.0   41.8   41.7   41.3
  Qualified allied health professionals                    39.8   38.7   39.2   38.9   37.7
  Qualified healthcare scientists                          41.5   41.2   41.6   41.4   41.1
  Other qualified scientific, therapeutic & technical      40.0   39.5   39.3   39.3   38.7
  Support to ST&T                                          40.1   39.6   40.1   40.9   39.2
  Support to doctors & nurses                              41.8   41.7   41.9   41.8   41.1
  Central functions                                        40.0   39.9   41.0   41.4   41.4
  Hotel, property & estates staff                          43.7   43.3   44.0   44.6   44.1
  Managers & senior managers                               44.8   44.8   44.2   43.9   42.7
  Qualified ambulance service staff                        58.0   40.2   45.6   47.4   44.3
  Support to ambulance staff                               43.8   50.6   45.0   39.3   41.3
  Grand Total                                              41.5   41.2   41.4   41.5   40.8

Agency The NHS appears to have gained control over nurse agency costs and uses agency staff as a last resort. Reference Panel trust experiences are consistent with the Reducing Agency Costs Project, a scheme that brings together 34 NHS Trusts from across England with the aim of reducing spending on agency staff. Targeted actions to reduce the NHS spend on agency nursing, particularly in London, resulted in savings of £92 million (Department of Health, 2006).

Housing

Housing costs were identified as a major labour market constraint by Reference Panel trusts, and we took the opportunity of comparing perceptions with the results of The Halifax County House Price Survey of the UK (14th April 2006), which showed movement over the ten year period 1996 – 2006. It found that the average house price in the most expensive county was currently 3.2 times that in the least expensive county, the same as in 1996. Historically house prices in the UK have been characterised by marked differences between the North and the South. This continued to exist but the market, it was observed, has taken on a West-East tilt. About 2 million people moved away from London in the past 10 years and the West Country is proving to be one of the most popular destinations, both for people seeking to relocate and for those looking for a holiday home. The effect has been to reduce the differential between the South West and the South East, which remains the most expensive region. This is illustrated by Cornwall, which had the biggest jump during the period, together with Dorset, which also had one of the largest rises in house prices.

The issue of house prices was raised at an early stage by Trust 4 in the South West, arguing that its property market was uncharacteristic of the general area, partly due to supply problems associated with planning restrictions which made housing overly expensive. In general, trusts laid heavy emphasis upon the role of house prices in driving up the local cost of living.

Table 7.7 Movement in House Prices (Extract from Halifax Survey 2006)

  County                                    1996 Average   % Diff from Most    % Change     2006 Average   % Diff from Most
                                                           Expensive 1996      1996-2006                   Expensive 2006
  South Humberside (least expensive 2006)     £43,935           -55%              163         £115,385          -61%
  County Durham (least expensive 1996)        £41,861           -58%              198         £124,888          -58%
  South Yorkshire                             £45,729           -54%              173         £124,031          -58%
  Lancashire                                  £48,187           -51%              179         £134,271          -55%
  West Yorkshire                              £51,960           -47%              165         £137,589          -54%
  Cornwall                                    £53,081           -46%              268         £195,388          -35%
  North Yorkshire                             £68,943           -30%              187         £185,048          -38%
  East Sussex                                 £64,686           -34%              214         £203,434          -32%
  Dorset                                      £65,679           -33%              217         £208,355          -30%
  Wiltshire                                   £65,873           -33%              169         £177,338          -41%
  Hampshire                                   £68,943           -30%              187         £197,668          -34%
  London                                      £75,385           -24%              226         £245,755          -18%
  Surrey (most expensive 1996 and 2006)       £98,566            +0%              203         £298,835           +0%

Timing and Links with the Staff MFF

In responding to the questionnaire survey, participants gave a sense of their current experience, setting it in the context of how conditions have changed in recent years. Earnings statistics, underpinning the staff MFF and based on the general labour market approach, on the other hand, use a rolling three year average of retrospective data. The survey highlights a time lag between activity changes in the labour market and adjustments to data that are intended to reflect these market changes. This is particularly evident in the case of house prices, which generated a vigorous response from Reference Panel trusts. The general labour market theory, summarised in Chapter 2, indicates that wages are a reflection of the net cost of living and amenities, so that any shift in relative house prices would, it is predicted, work its way into the general labour market through spatial wage differentials. There is no apparent conflict between the GLM theory and observations from our qualitative survey in the long run. However, responses may not synchronise in the short run. Responsiveness of the labour market to shifts in house prices is likely to be lagged, in addition to which the GLM data used to inform the staff MFF is based on historic data. The gap between perceptions and the GLM, according to this analysis, becomes an issue of timing rather than principle.

Quality

Trusts in the micro-study survey were given the opportunity to comment on the quality of recruits and they drew a connection between continuity (age, length of service) and quality. A recent study (Hall, Propper & Van Reenen, preliminary draft, 2006) has gone further by examining the connection between labour markets, wage differentials and quality expressed as death rates. In keeping with labour market theory, the study predicted that “areas with higher outside wages should suffer from problems of recruiting, retaining and motivating workers and this should harm hospital performance.” The study found that stronger local labour markets (i.e. higher MFF areas) significantly worsened hospital outcomes in terms of both quality and productivity. A 10% increase in the outside (local labour market) wage was associated with a 3%-8% increase in death rates. It drew the unambiguous conclusion that “an important part of this effect operates through hospitals in high outside wage areas having to rely on temporary agency staff as they are unable to increase (regulated) wages in order to attract permanent employees” (p1). Empirical evidence, therefore, supports labour market theory: paying below the market rate (in high MFF areas) results in higher use of temporary staff, lower productivity and poorer quality.

Rurality

The question of rurality and its impact on costs emerged as a theme in the study (through the Reference Panel, reported in Chapter 8), and the evidence is considered here. A review conducted by the Department of Health (2005) identified five reasons why rurality might increase costs: diseconomies of scale/scope, travel costs, unproductive time, the basis of precedent and other factors. It found that, while there were clear perceptions reported in the literature that rurality carried increased cost burdens, there was little empirical evidence to support this. (The econometric analysis reported in Chapters 12 and 14 likewise found that rurality was not associated with higher costs.) The key findings of the 2005 study were:

• If economies of scale can be exploited, they can only be exploited by small hospitals becoming mid-sized. Constant returns or diseconomies of scale are present in large hospitals. Rural hospitals are attached to small population centres and so are obliged to remain small or medium sized. The feeling is that they are exposed to diseconomies of scale, but the evidence counters this by suggesting that large hospitals are in fact more expensive;

• Travel costs and unproductive time due to larger travel distances, it is argued, increase costs. This has intuitive appeal but is not borne out by empirical evidence;

• The ‘basis of precedent’ argues that England is the only country in the UK that does not have a rurality adjustment in its funding formula. This is an argument of principle rather than data, although the study does note that ‘Scotland, Wales and Northern Ireland have several areas of extreme rurality, sparsity and remoteness not found in England’ (p7);

• Other factors include: higher telecommunication costs; difficulty in networking; costs of access to training, consultancy and other support areas; the pace of development work; staff skill mix in which more staff need to work independently and therefore need to be more highly graded; and finally, the cost of providing the correct level of multidisciplinary input is thought to be high for small client groups with complex needs and where patients are highly dispersed. None of these arguments are supported by empirical evidence.

The perceptions that rurality induces cost pressures may be well-founded. The evidence of this MFF Specific Cost Approach review, however, suggests that labour cost pressures in urban areas tend to outweigh those of rural areas: hospital workforces in rural areas, characterised by low turnover and relatively low private sector wages, have higher productivity and better quality outcomes than those of densely populated urban areas. This may be of little comfort to rural areas – being victims of their own success – but the evidence appears unequivocal.

Comparison between Qualitative and Quantitative Data There is no conflict between trusts’ perceptions, gathered through the qualitative survey, and quantitative data gathered through national sources. This is reassuring as it suggests that the attitudinal survey of Reference Panel trusts is a robust source of data that can be used to inform and interpret labour market spatial variations.


CHAPTER 8. REFERENCE PANEL

Members of the Reference Panel met on 18th May 2006, comprising senior finance, HR and nursing personnel from the micro sample trusts. They provided feedback on the workstreams up to this point in the process and determined the course of the next stage of enquiry (presented in Sections C and D). Views of the Reference Panel were organised under five question headings, summarised here.

Question 1. What are your perceptions of the MFF?

Reference Panel members were asked to record up to three headline perceptions about the MFF, which may be positive, negative or neutral. The majority of opinion about the MFF was negative, criticising the lack of transparency in the GLM method, its inapplicability to the NHS labour market and the impact of cliff edges in pay zones. The neutral comments were also critical in tone, suggesting that the basis of the MFF should be seen to be more logical. Agenda for Change featured as an issue across the negative-neutral spectrum. The positive comments came entirely from London trusts which regarded the MFF as an important and necessary income source.

Table 8.1 Balance of Perceptions

  Perception   Count     %
  Negative       34     63%
  Neutral        16     30%
  Positive        4      7%
  Total          54    100%

Table 8.2 Positive Perceptions of the MFF

  CRITIQUE   QUOTES
  Impact     Necessary adjustment for ‘level playing field’
  Impact     Necessary adjustment, but must carry credibility. Must not be seen with same level of scepticism as PbR, nor interfered with to suit national balance
  Impact     International excellence will not flourish in a market place – London subsidy will have to continue but can we afford the alternative of everything local?

Table 8.3 Negative Perceptions of the MFF

  CRITIQUE            QUOTES
  GLM Rationale       Comparisons with the general labour market are unhelpful
  GLM Rationale       The general labour market comparison is not reasonable given that a significant majority of NHS staff are ‘tied’ to the NHS market
  GLM Rationale       I find it impossible to equate private sector salaries to the NHS labour market, e.g. what have costs of lawyers got to do with the nursing labour market; how transferable are such staff?
  GLM Rationale       Local wage cost comparison is not an appropriate comparison. NHS is a ‘high pay’ employer in the local market
  GLM Rationale       Use of private sector pay as a surrogate for NHS pay is 70% barking mad!
  Transparency        Too complex, not understood
  Cliff Edges         There has to be more sense to the geographical differences between very close providers
  Cliff Edges         Difficult to understand accuracy when nearest neighbour would receive over £3.5m more for same activity
  Agenda for Change   With AfC pay structure, which is consistent (or should be) across the country (i.e. rate/salary range for the job), what is the relevance of looking at rankings based on GLM data (London Weighting aside)? … surely it is largely irrelevant!

Table 8.4 Neutral Perceptions of the MFF

  CRITIQUE            QUOTES
  GLM Rationale       MFF needs to be a fair reflection of the additional costs incurred in each geographical area
  GLM Rationale       Focus on wage costs, but problem often is ability to attract staff to high cost area, therefore should include, e.g. accommodation costs
  GLM Rationale       MFF should reflect the job market in an area and the difficulty (or otherwise) to recruit
  Transparency        Currently appears overly complex and cliff edges impossible to explain. System must be simplified
  Transparency        MFF should be a transparent system – easily understood and easy to adjust year on year without significant additional reviews
  Impact              Immediate change could threaten financial stability in a fragile economy
  Agenda for Change   Agenda for Change had a significant impact that must be taken into account
  Agenda for Change   How will the large move from local pay rates to AfC be factored in (e.g. laundry)?

Question 2. What drives the use of overtime, bank and agency staff (with particular reference to bank)? How would you explain the geographical variation?

This question was stimulated by findings of the payroll analysis and the HCC analysis that a high proportion of nursing staff in London were temporary staff employed mainly as bank and sometimes as agency. Overtime accounts for a smaller volume of staff but, according to the HCC study, is used in low MFF trusts to a greater extent than high MFF trusts. Use of bank staff could be identified as the key resource that distinguished staffing levels among low and high MFF trusts. The Reference Panel suggested that geographical variation in use of temporary staff was affected by two separate factors: (a) the need for a competitive wage, since London trusts tended to employ their own full time staff as bank nurses, and (b) cover requirements, while acknowledging potential problems of control and accountability leading to inefficiency:

(a) Price. It is important to identify which staff are working as bank staff. In London this is more likely to be the hospital’s workforce, most of whom are full time. In Trust 1 (South West) a high proportion of staff work part-time and bank staff frequently work only on the bank. This ‘living wage argument’ is consistent with the general labour market prediction that in a high cost area wages to individuals will rise through whatever mechanisms are available to approach a competitive market rate. It follows that, at any given establishment level, London trusts would always employ a higher bank:in post ratio as a recruitment and retention measure.

(b) Volume. London tends to have higher turnover leading to higher vacancies at any one time, driving the need for temporary staff. Non-London trusts characterised this as a lack of control and accountability which allowed bank nurse levels to drift upwards. Higher volumes of staff translate into low efficiency and productivity when measured against workload.


Figure 8.1 Reference Panel Feedback: Suggested Reasons for Geographical Variation in Use of Bank, Agency and Overtime

• Availability of staff to do more hours: age factor and flexibility
• Full time, part time: who does extra hours?
• Varying policies on payment
• Agenda for Change
• Covering sickness, which may be a feature of poor management
• Sickness, vacancies, annual leave – should be covered in rostered template
• Lack of control – far too easy to engage bank and agency staff; there is a lack of accountability
• Lack of willingness to drive out costs produces higher staffing levels
• Private sector, ITC – new issue
• Mixed bag: geography and age profile are the most important factors
• Travel time deters overtime in London compared to south west where the average travel time is 15 minutes
• Labour markets: travel time is 30+ minutes in London; proximity of competition
• Peak activity – tourist season
• Policy shifts in target productivity, e.g. waiting lists, which require extra staff
• Specialist staff, e.g. in ICU, are uncontrollable
• Downsizing – temporary staff are used in moves towards smaller workforce
• Agencies target scarce staff
• Accommodation – can attract staff
• Isolation of hospital – so that A&E is isolated
• Ethnicity in an area
• Junior doctors’ hours – driving the costs of nurses as extra nursing hours are needed to substitute for reduced junior doctor hours
• In Trust 14 (London), the offer of bank hours is important to recruitment and retention as staff want to earn more money. 90% of bank staff are drawn from the hospital’s own staff in post. In Trust 1 (South West), a high proportion of bank staff have bank contracts (and are not part of the substantive work-force) to enable them to work flexibly in term times.
• Agency – used as a last resort
• In London people need to work 48 hours

Question 3. How might the productivity gap between low–high MFF hospitals be explained, with particular reference to medical staff, ward nurses, A&C, ST&T?

The general ledger analysis showed a consistent pattern of lower productivity in high MFF trusts relating to most staff groups. This finding was repeated in the HCC analysis of ward nurses. The Reference Panel adopted two main perspectives, consonant with geography.

London trusts were sceptical of output comparisons restricted to FCEs or admissions that did not take into account other activity such as outpatients. They called for a more comprehensive measure of workload to underpin productivity comparisons. They also identified medical staffing trends that were beyond the control of trusts, e.g. sub-specialisation which was driven by Royal Colleges. Specific responses to benchmarking of maternity and medical staffing included questions about:

• The impact of other income streams, e.g. SIFT and research revenue, i.e. medical staff as a ‘free good’;
• Availability of neonatal intensive care units as a further workload weighting factor;
• Size of maternity unit which would affect efficiency.

Non-London trusts tended to believe that higher resource allocation to London was driving higher numbers of staff, without correspondingly higher workload throughput, leading to lower productivity. They identified junior doctors as a key element driving low medical staffing productivity, since junior doctors were disproportionately located in London.

Figure 8.2 Reference Panel Feedback: Explaining Productivity Differences

London
• Several of the studies reported finding low productivity in London and the South East. The NHS needs convincing that this is an issue. “Productivity” and efficiency are very emotive at the moment. Low productivity may be an inevitable consequence of not paying the “going wage rate” but we want more reassurance; we need to know that the productivity gap is real and measurable; then we will seriously consider the reasons behind it. To achieve this it would be necessary to use a workload measure that captured more than simply patient spells.
• Cause and effect: the cost of living influences the cost of employing staff of the same quality.
• Teaching hospitals are now starting to focus on medical productivity.
• There is more variation in medical versus non-medical productivity and therefore greater potential to make efficiency gains.
• Working against improvements in medical productivity is a trend to sub-specialisation. This creates problems for on-call cover. It is not something that is driven by trusts.
  o Search for market share
  o Driven by Royal Colleges

Non London
• The productivity gap is due to more money being given to London (and less to the South West).
• High levels of medical staff in London are due to:
  o the amount of non-trust related activities, e.g. taking part in Department of Health work, Royal Colleges etc
  o Too many junior doctors for the activity
  o ‘The greater good’ paid for in London
• Nurses
  o ? – need for extra hours, more by staff than trusts
  o ? – weaker community or primary care delivery in London
  o Experience or stability of staff
  o Pathways – traditional work

Question 4. What are the key staff cost drivers that you would consider to be out of the control of the trust management, i.e. unavoidable?

One of the purposes of the Reference Panel meeting was to move from the descriptive to the interpretative phase of work, allowing us to distinguish between avoidable and unavoidable costs. Trusts were tentative in identifying cost variations as unavoidable. One group of participants summed up their response by saying that ‘location’ is really the only unavoidable feature of provision, leading to the question, ‘what are the costs that are inextricably linked with location?’ Spatial factors were summarised as a cost of living effect, e.g. transport and housing costs which would exceed the London Weighting allowance. This line of reasoning led some representatives to conclude that it might be more profitable to measure the cost of living differences directly rather than measure the bottom-up differences in hospital staff costs. This amounted to a move towards articulating the GLM rationale.


Figure 8.3 Reference Panel Feedback: Key Unavoidable Cost Drivers

• Rental and housing costs in London are the key component of higher costs in London
• London – transport is costly
• Economic factors
• Cost of relocation to the area
• The contribution to training (MADEL etc)
• Legislation
• Geographical allowances (like London Weighting) are unavoidable
• The only thing that is beyond the control of management is location
• The high costs need to be balanced against the positives of London as a place to be trained and work (at least in the initial stages of your career).
• Rather than identify external labour market ‘going rate’, we should look at equalising the real wage by accounting for cost of living differences
• Rather than build up MFF from bottom-up studies of cost differences or top-down analyses of labour market pressures, why not measure cost of living differences directly?

Question 5. What lines of enquiry should be pursued most vigorously in the remainder of the project and why: e.g. (i) Link between MFF and cost of housing, (ii) Complexity of workload, (iii) Impact of teaching status, (iv) Throughput variation, (v) Size – economies of scale, (vi) Other … specify?

(i) Agenda for Change and its impact emerged as a consistent theme raised by the Reference Panel;
(ii) Teaching status: our analysis of medical staffing would need to be able to control for the impact of teaching status. It would also be necessary to separate costs and income, e.g. SIFT;
(iii) Productivity measures – need to be based on more sophisticated measures of workload (capturing outpatient and other activity apart from spells);
(iv) Rurality: it was suggested that we needed to be able to comment on the impact of rurality;
(v) Location-based cost of living arguments came through strongly as the rationale for spatial differentials in pay costs. This is effectively what the GLM attempts to capture. It was apparent that the principles behind the GLM need to be explained much more clearly – with specific reference to the NHS labour market.


SECTION C. NATIONAL DATA SETS Section C explores three national data sets, dealing with ward nurses, medical staff and trust specialty costs. The results for hospital trusts in England are displayed in quintiles, according to MFF rankings. For nursing and medical staff we consider price and volume variance through arithmetic benchmarking approaches and then apply multivariate regression to test or develop the results.

CHAPTER 9. REVIEW OF WARD NURSE STAFFING (HCC DATA) This chapter presents an analysis of ward nursing wage costs and workload based on information extracted from the Healthcare Commission’s “Ward Nursing Staff” data set, described throughout the report as the HCC database. The purpose is to consider whether there is spatial variation in nurse staffing costs, the extent to which any variation is avoidable and unavoidable and, finally, the feasibility of adopting the database to support a Specific Cost Approach to calculating the MFF.

DATA AND METHODS

This analysis is based on the 4,435 wards where WTE, cost and workload figures 34 were available.

Table 9.1 HCC Dataset Overview

  Total Dataset
    Number of Organisations               225
    Number of Staffed Wards               5,743
    Number of Nurse WTE                   145,693
    Annualised Wage Bill                  £3.7bn

  English Acute Hospital Trusts Only
    Number of Trusts 35                   165
    Number of staffed workload wards 36   4,435
    Number of Nurse WTE                   112,940
    Annualised Wage Bill 37               £2.8bn

34 Workload in the HCC database includes beds, admissions, transfers in and transfers out of wards. We used beds and admissions since transfers represent movement between wards, netting to zero (in principle) when wards are aggregated. Some wards consistently reported zero admissions (e.g. ICU), showing activity as transfers. However, as we sum to quintile level we assume that the admission was recorded on a different ward (i.e. surgical admissions) and the patient transferred into the zero admission ward.
35 The data contained 170 Hospital Trusts but 5 Trusts have been excluded due to poor data quality.
36 The data contained 4,710 staffed wards but wards where workload data was not reported have been excluded.
37 The annualised wage bill reported above excludes bank and agency costs.

The data has been divided into quintiles defined by ranges of the staff MFF, from quintile 1 (the lowest MFF range) to quintile 5 (the highest MFF range). Each quintile contains approximately one fifth of trusts in the sample. The distance between the MFF midpoints of Quintiles 1 and 5 is 34%. The overall approach was to describe the individual components of the total wage bill, and review each in relation to the staff MFF. Wages have been analysed to describe the “price variance”, i.e. the differing cost of labour inputs. As a second step, we reviewed the number of nursing wte deployed in relation to the hospital workload and analysed the “volume variance”, i.e. the differing amounts of labour inputs deployed. The results of the price and volume variance analysis were then used to interpret the question of which spatial differentials might be regarded as avoidable or unavoidable.

Table 9.2 Description of the Data Set by Quintile

  Quintile   MFF Range         Mid MFF Range   No. Trusts   No. Wards   No. Beds   Nurse WTE   Std E WTE
  1          0.8500 - 0.9290      0.8895           33          969       22,806      25,129      21,605
  2          0.9291 - 0.9586      0.9439           31          945       22,200      25,396      22,129
  3          0.9587 - 1.0113      0.9850           36          989       22,328      25,206      22,083
  4          1.0114 - 1.1000      1.0557           32          791       17,625      19,947      17,737
  5          1.1000 - 1.2826      1.1913           33          741       16,648      17,263      15,859
  All        34% midpoint spread      -            165        4,435     101,608     112,940      99,413
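For orientation, the quintile construction in Table 9.2 can be expressed as a short sketch. This is a minimal illustration only: the boundaries and midpoints are taken from the table, and the function and variable names are our own rather than anything defined in the data set.

```python
# Illustrative sketch only: quintile boundaries are taken from Table 9.2;
# variable and function names are ours, not part of the source data set.
QUINTILE_RANGES = {
    1: (0.8500, 0.9290),
    2: (0.9291, 0.9586),
    3: (0.9587, 1.0113),
    4: (1.0114, 1.1000),
    5: (1.1000, 1.2826),
}

def quintile_for_mff(mff: float) -> int:
    """Return the staff MFF quintile whose range contains the given index value."""
    for q, (low, high) in QUINTILE_RANGES.items():
        if low <= mff <= high:
            return q
    raise ValueError(f"MFF {mff} lies outside the range covered by the sample")

def mid_range(q: int) -> float:
    low, high = QUINTILE_RANGES[q]
    return (low + high) / 2

# Midpoints reproduce the 'Mid MFF Range' column (quintile 1 -> 0.8895, quintile 5 -> 1.1913)
# and the 34% spread quoted in the text: (1.1913 - 0.8895) / 0.8895 ~= 0.34.
spread = (mid_range(5) - mid_range(1)) / mid_range(1)
print(round(mid_range(1), 4), round(mid_range(5), 4), round(spread, 2))
```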

The price variance describes the spatial difference in the direct costs of employing a nursing WTE, i.e. the Normal Wage Cost (Basic pay and standard allowances) plus Geographic Allowances (London Weighting and Cost of Living Supplements), analysed at the level of:

1. Nurse whole time equivalent (WTE), expressed as (a) unstandardised WTEs and (b) WTEs standardised to Grade E
2. Trust type (Specialist, Teaching or Acute) and complexity
3. London / Non London location

The total wage bill 38 has been sub-divided into (A) In-Post Wage Costs, and (B) Substitution payments. The former represents what is paid to the trusts’ staff. It comprises their Normal Wage Cost (NWC). Substitution payments represent what the trusts pay to acquire additional labour inputs over and above their In-post staff. This category consists of Overtime (O/T), Bank and Agency payments. Skill mix effects on cost were isolated by standardising nurse staffing to grade E equivalents (explained in Appendix 9.1), referred to throughout the chapter as Std E WTE. Complexity or casemix was adjusted for by a complexity index, calculated at trust level (described in Appendix 9.3 and Appendix 1).

Volume variance was analysed using a benchmarking or best practice approach, a standard accounting technique with wide application in the NHS environment (see for example Street, 2002; Schleifer, 1985; Dopuch and Gupta, 1997). Three scenarios were considered:

• Volume Variance A (Status Quo - Loose)
• Volume Variance B (Moderate)
• Volume Variance C (Tight)

Volume Variance A quantified the current volume variance (2004/5) in terms of the spatial difference in the total number of Std E WTEs per 100 complexity adjusted admissions (“the cover ratio”), analysed by trust type and location (London/Non London). Volume Variance B (Moderate) considered the impact of bringing staff:workload ratios into line with the average for the trust peer group, where peer groups are defined by trust type. Volume Variance C (Tight) applied the staff:workload ratio of the most efficient or productive quintile to the rest of the sample, allowing for differences between trust types.

By combining the price and volume variances we were able to describe the total movement in average wage costs across the Staff MFF quintiles at trust level. We translated this into an index, which was mapped against the movement in the MFF, and compared the results. Multivariate regression analysis was then applied as a further means of exploring the distinction between avoidable and unavoidable costs, and as a way of introducing additional variables such as quality and urban/rural markers. The results of the earlier arithmetic analysis were set alongside the statistical regression analysis and compared for consistency.

38 The figures used are inclusive of employer’s costs, e.g. NI and pension contributions.

PRICE VARIANCE

Table 9.3 describes the geographic variation in the total wage bill and its components. It demonstrates, firstly, that Geographical allowances rise with the MFF and have a substantial impact on the In-post pay cost; secondly, how the proportion of the total wage bill spent on substitution staff doubles from 10% in quintile 1 to 21.4% in quintile 5. Finally, within the substitution payments, there is a marked decrease in the use of overtime as the MFF increases.

Table 9.3 Geographic Variation in Total Wage Bill and its Components

  Quintile   NWC    Geo Allow'   In Post Wage   % of Total   Overtime   % of Total   Bank    % of Total   Agency   % of Total   Total Wage
             (£m)      (£m)       Cost (£m)      Wage Bill     (£m)      Wage Bill    (£m)    Wage Bill    (£m)     Wage Bill    Bill (£m)
  1           605        2           607           90.0%        15          2.3%       35        5.1%        18        2.6%         675
  2           613        2           615           90.8%        14          2.1%       33        4.9%        15        2.2%         677
  3           597       12           608           86.8%        12          1.7%       52        7.4%        29        4.1%         701
  4           486       23           508           83.8%         9          1.5%       65       10.7%        24        4.0%         606
  5           435       59           494           78.6%         2          0.4%      100       15.9%        32        5.1%         628
  All       2,735       97         2,832           86.2%        53          1.6%      285        8.7%       117        3.6%       3,287
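As an arithmetic check on the shares quoted above, the sketch below reproduces the quintile 1 and quintile 5 substitution proportions from the rounded £m components in Table 9.3. The dictionary layout and names are ours; the small difference from the published 21.4% reflects rounding of the inputs.

```python
# Sketch using the rounded Table 9.3 components (£m); layout and names are ours.
wage_bill = {
    # quintile: (in_post, overtime, bank, agency)
    1: (607, 15, 35, 18),
    5: (494, 2, 100, 32),
}

for q, (in_post, overtime, bank, agency) in wage_bill.items():
    total = in_post + overtime + bank + agency
    substitution_share = (overtime + bank + agency) / total
    print(q, f"{substitution_share:.1%}")  # ~10.1% for quintile 1, ~21.3% for quintile 5
```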

Wage Cost per WTE

Table 9.4 shows that the normal wage cost per non-standardised WTE rises by 4.5% between quintile 1 and quintile 5 and that, when geographic allowances are included, the gap between the two quintiles widens to 18.3%. The table also shows the number of substitute WTEs employed which, when summed with the In-post, gives a total WTE figure allowing the total wage bill to be expressed per WTE.

The number of substitute WTEs was calculated by dividing the total cost of each class of substitution (i.e. O/T, Bank, Agency) by the wage cost per In-post WTE (NWC + Geographical), as adjusted to account for the differing types of employment. Overtime WTEs were calculated by dividing the total cost of overtime by the wage cost per In-post WTE times 1.5 (i.e. time and a half). Bank WTEs were calculated by dividing the total cost of Bank by the wage cost per In-post WTE, and Agency WTEs by dividing the total cost of Agency by the In-post wage cost times 1.25 to account for the uplift seen in agency payments.

Table 9.4 Wage Cost per Non Standardised WTE

  Quintile   In-post WTE   NWC (£m)   NWC per       NWC + Geo   Wage Cost per   Substitute   Total WTE   Total Wage   Total Wage
                                       In-post WTE     (£m)      In-post WTE       WTEs                   Bill (£m)    per WTE
  1             25,129        605        24,091         607         24,169          2,427       27,557        675        24,480
  2             25,396        613        24,126         615         24,199          2,255       27,651        677        24,478
  3             25,206        597        23,677         608         24,135          3,439       28,645        701        24,473
  4             19,947        486        24,341         508         25,480          3,542       23,489        606        25,815
  5             17,263        435        25,181         494         28,592          4,444       21,707        628        28,925
  All          112,940      2,735         4.5%        2,832         18.3%          16,108      129,048      3,287        18.2%

  Note: in the 'All' row the per-WTE columns show the percentage increase from quintile 1 to quintile 5.
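A minimal sketch of the substitute WTE conversion just described is set out below. The inputs are the rounded quintile 1 figures from Tables 9.3 and 9.4, the 1.5 and 1.25 adjustment factors are as stated in the text, and the function and variable names are ours. Because the report works from unrounded costs, the result differs slightly from the published 2,427.

```python
# Sketch of the substitute-WTE conversion described in the text; inputs are the
# rounded quintile 1 figures, so the output is approximate.
def substitute_wtes(overtime_cost, bank_cost, agency_cost, wage_cost_per_inpost_wte):
    """Convert substitution spend into WTE equivalents.

    Overtime is divided by 1.5x the in-post rate (time and a half), agency by
    1.25x (agency uplift) and bank is costed at the in-post rate.
    """
    overtime_wte = overtime_cost / (wage_cost_per_inpost_wte * 1.5)
    bank_wte = bank_cost / wage_cost_per_inpost_wte
    agency_wte = agency_cost / (wage_cost_per_inpost_wte * 1.25)
    return overtime_wte + bank_wte + agency_wte

q1 = substitute_wtes(15e6, 35e6, 18e6, 24_169)
print(round(q1))  # ~2,458 versus the published 2,427 (rounding in the £m inputs)
```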

Components of the Price Variance – Grade Mix

Spatial differences in the cost per WTE will be affected by differences in grade mix, stimulated by a range of factors including patient throughput, workload complexity and labour market conditions (in which high MFF trusts may use a richer skill mix to recruit and retain staff and low MFF trusts may be able to recruit to a lower grade 39). Having identified the price per non-standardised WTE in Table 9.4, we isolated the grade mix effect by calculating the price per Standardised Grade E WTE (Table 9.5) and measured the difference between the two. Table 9.5 shows that Normal Wage Cost per Std E WTE now shows a small decline of 2.2% between quintiles 1 and 5, whilst the combined total of NWC and Geographical Allowances shows a rise of 10.7%.

Table 9.5 Wage Cost per Standardised WTE

  Quintile   In-post     NWC (£m)   NWC per In-post   NWC + Geo   Wage Cost per       Substitute    Total Std   Total Wage   Total Wage
             Std E WTE               Std E WTE           (£m)      In-post Std E WTE   Std E WTEs    E WTE       Bill (£m)    per Std E WTE
  1            21,605       605         28,021            607          28,112             2,087        23,692        675         28,473
  2            22,129       613         27,687            615          27,771             1,965        24,094        677         28,091
  3            22,083       597         27,024            608          27,547             3,013        25,097        701         27,933
  4            17,737       486         27,373            508          28,655             3,150        20,887        606         29,031
  5            15,859       435         27,411            494          31,124             4,082        19,941        628         31,487
  All          99,413     2,735         -2.2%           2,832          10.7%             14,298       113,710      3,287         10.6%

  Note: in the 'All' row the per-WTE columns show the change from quintile 1 to quintile 5.

Table 9.6 measures the difference between Tables 9.4 and 9.5 to isolate the skill mix effect. The Grade Index shown in the right hand column reflects the conversion factor applied to non standardised WTEs to reach the Std E equivalent figure. (The lower the percentage the greater the proportion of nurses employed at grades lower than Grade E). It shows that grade mix rises through the quintiles, indicating that this skill mix effect accounts for a 7.6% rise in pay costs, leaving the remaining 10.7% rise to be accounted for by a combination of Geographic Allowances and Normal Wage Costs.

39 This is pre-Agenda for Change.

Table 9.6 The Grade Mix Element

                      NWC & Geo per Non-Standardised WTE     NWC & Geo per Std E WTE          Grade Index
  Quintile             NWC       Geo       Total              NWC       Geo       Total
  1                   24,091      78      24,169             28,021      91      28,112          86%
  5                   25,181    3,411     28,592             27,411    3,713     31,124          91%
  Increase              4.5%    13.8%      18.3%              -2.2%    12.9%      10.7%
  Grade Mix effect      6.7%     0.9%       7.6%

Exploring Grade Mix – Skill Mix versus Residual Grade Mix or Grade Drift

Table 9.7 reveals that grade mix rises with casemix complexity at the level of all trusts. Within this we see a spatial trend among teaching hospitals, where both complexity and grade mix advance throughout the quintiles, and an absence of spatial trend among the (small) sample of specialist hospitals. Among acute wards we observe an increase in grade mix between quintiles 3 and 5, allied to a reduction in the complexity index between these quintiles.

Table 9.7 Grade Mix Index by Trust Type and Complexity

          Total                             Specialist                      Teaching                        Acute
  Q    Comp     WTE      Grade Mix      Comp     WTE     Grade Mix      Comp     WTE     Grade Mix      Comp     WTE      Grade Mix
       Index             Index          Index            Index          Index            Index          Index             Index
  1    1.20    25,129       86%         2.38      179       84%          N/a       -        N/a         1.19    24,950       86%
  2    1.20    25,396       87%         1.46      436       91%         1.25     9,176      87%         1.16    15,783       86%
  3    1.28    25,206       87%         2.00    1,479       93%         1.34     5,988      89%         1.23    17,739       86%
  4    1.29    19,947       88%         1.58    1,244       93%         1.38     6,172      90%         1.23    12,530       87%
  5    1.32    17,263       91%         1.76    1,664      100%         1.43     6,815      94%         1.17     8,784       88%
  All         112,940       88%                 5,003       96%                 28,151      91%                 79,787       87%

If we make an assumption that elevated grade mix associated with higher complexity is warranted, which has intuitive appeal, then this may be labelled a ‘skill mix’ effect. The residual increase in grade mix, not associated with casemix complexity, may be termed ‘grade drift’ if it is simply a feature of spatial variation. The skill mix effect has been estimated at 6.3% (Appendix 9.4) with a residual grade mix of 1.3%. The estimation process adopts a benchmarking or best practice approach, in which grade mix in acute hospitals in quintiles 3-5 is restrained at 86%; (we found that the residual grade mix is located in London). In summary, we can describe the spatial variation in the price of labour inputs as 18.3%, comprising 10.7% geographic allowances, 6.3% payments for higher grade staff for more complex care and a residual 1.3% associated with London location and, by implication, the London labour market.

VOLUME VARIANCE

In this section we review spatial variation in the volume of inputs, controlled for skill mix by focusing on Std E WTEs. Table 9.8 shows variation in both the number of WTEs and the number of beds across the quintiles. The ratio of In-post Std E WTEs to beds is very similar across the quintiles. However, the upper quintiles show a much higher (c.16%) establishment WTE per bed ratio which, when this gap is filled with substitute WTEs, results in a higher total WTE to bed ratio. This rise in the WTE to bed ratio is investigated in relation to throughput productivity and complexity.

Table 9.8 Standardised Grade E WTE per Bed

  Quintile   Est'ment      In-post     Substitute    Total Std   Number    Est'ment Std E   In-post Std E   Total Std E
             Std E WTEs    Std E WTE   Std E WTEs    E WTE       of Beds   WTE per Bed      WTE per Bed     WTE per Bed
  1            22,917        21,605       2,087        23,692     22,806        1.00             0.95           1.04
  2            23,764        22,129       1,965        24,094     22,200        1.07             1.00           1.09
  3            24,063        22,083       3,013        25,097     22,328        1.08             0.99           1.12
  4            20,179        17,737       3,150        20,887     17,625        1.14             1.01           1.19
  5            19,360        15,859       4,082        19,941     16,648        1.16             0.95           1.20
  All         110,283        99,413      14,298       113,710    101,608       15.7%             0.5%          15.3%

  Note: in the 'All' row the per-bed columns show the increase from quintile 1 to quintile 5.

We compared throughput productivity in terms of the number of Std E WTEs per 100 admissions and found a substantial rise of 47% in admissions cover throughout the MFF range from quintile 1 to quintile 5 (see Table 9.9). We weighted admissions by the average complexity index for that quintile, with the effect of reducing the apparent productivity gap to 35% between quintiles 1 and 5.

Table 9.9 Total Standard E WTE per 100 Admissions (Volume Variance A)

  Quintile   Total Std E WTE   Annualised Admissions   Std E WTE per 100 Admissions   Complexity Index   Std E WTE per 100 Complexity Adjusted Admissions 40
  1               23,692             1,277,213                    1.85                     1.20                          1.55
  2               24,094             1,133,156                    2.13                     1.20                          1.79
  3               25,097             1,081,482                    2.32                     1.28                          1.85
  4               20,887               844,076                    2.47                     1.29                          1.91
  5               19,941               730,743                    2.73                     1.32                          2.09
  All            113,710             5,066,669                    47%*                                                   35%*

  * Increase from quintile 1 to quintile 5.

40 The calculation of the WTE per 100 complexity adjusted admissions is shown in Appendix 5.
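A minimal sketch of the cover ratio calculation behind Table 9.9 follows. The full workings are in Appendix 5; the inputs below are the published quintile totals, the function name is ours, and small differences from the published figures reflect rounding of the complexity index.

```python
# Sketch of the complexity-adjusted admissions cover ratio used in Table 9.9.
def cover_ratio(total_std_e_wte, admissions, complexity_index):
    """Std E WTEs per 100 complexity-adjusted admissions."""
    complexity_adjusted_admissions = admissions * complexity_index
    return total_std_e_wte / complexity_adjusted_admissions * 100

q1 = cover_ratio(23_692, 1_277_213, 1.20)  # ~1.55, as published
q5 = cover_ratio(19_941, 730_743, 1.32)    # ~2.07 versus the published 2.09 (rounded inputs)
print(round(q1, 2), round(q5, 2), round((q5 - q1) / q1, 2))  # spread of roughly 34-35%
```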

We then applied the complexity index to standardise for potential casemix differences between trusts which would have an impact upon nursing workloads, differentiating between trust type (i.e. Specialist, Teaching and Acute) and location (i.e. London and outside London). (This differentiation, in effect, allows for a non-linear relationship between complexity and workload.) Table 9.10 summarises the typology, showing that Quintile 1 consists almost exclusively of acute trust wards, over 47% of quintile 5 is made up of specialist or teaching trust wards, and 77% of wards in quintile 5 are located in London.

Tables 9.11 – 9.13 summarise the admissions cover ratios for Std E WTE, analysed by quintile, location and type of trust. Trust peer groups are defined according to trust type and location. The average (complexity adjusted) wte per 100 admissions in teaching hospitals is 1.95 (Table 9.12) whereas in general acute hospitals it is 1.76 (Table 9.13). Scrutiny of the data shows the main areas in which trusts diverge from their comparator average:

a) Quintile 1 generally has a low cover ratio, implying a greater degree of efficiency or productivity;
b) The 1.84 cover ratio for the acute trusts in quintile 2 is above the average of 1.76 for the group;
c) The teaching trusts in quintile 3 show a cover ratio of 2.36 which is higher than the average of 1.95 for the group;
d) The London acute trusts in quintile 4 and all the acute trusts in quintile 5 report very high cover ratios when compared to the average of 1.76 for the group.

Table 9.10 Number of Wards by Trust Type and Location

  Quintile   Type       London   Ex London   Total   % of quintile total
  1          Acute         -        962        962         99.3%
             Special       -          7          7          0.7%
             Teaching      -          -          -          0.0%
             Q Total       -        969        969          100%
  2          Acute         -        599        599         63.4%
             Special       -         14         14          1.5%
             Teaching      -        332        332         35.1%
             Q Total       -        945        945          100%
  3          Acute         -        724        724         73.2%
             Special       -         42         42          4.2%
             Teaching      -        223        223         22.5%
             Q Total       -        989        989          100%
  4          Acute        143       381        524         66.2%
             Special       -         45         45          5.7%
             Teaching      -        222        222         28.1%
             Q Total      143       648        791          100%
  5          Acute        246       145        391         52.8%
             Special       59         -         59          8.0%
             Teaching     268        23        291         39.3%
             Q Total      573       168        741          100%
  Total                   716      3,719      4,435

Table 9.11 Total Std E WTE per 100 Complexity Adjusted Admissions (CAA) – Specialist

             LONDON                          NON LONDON                      TOTAL SPECIALIST
  Quintile   Complexity   Std E WTE         Complexity   Std E WTE          Complexity   Std E WTE
             Index        per 100 CAA       Index        per 100 CAA        Index        per 100 CAA
  1             -             -                2.38          0.77              2.38          0.77
  2             -             -                1.46          2.71              1.46          2.71
  3             -             -                2.00          1.48              2.00          1.48
  4             -             -                1.58          1.90              1.58          1.90
  5            1.76          1.85               -             -                1.76          1.85

Table 9.12 Total Std E WTE per 100 Complexity Adjusted Admissions (CAA) – Teaching

             LONDON                          NON LONDON                      TOTAL TEACHING
  Quintile   Complexity   Std E WTE         Complexity   Std E WTE          Complexity   Std E WTE
             Index        per 100 CAA       Index        per 100 CAA        Index        per 100 CAA
  1             -             -                 -             -                 -             -
  2             -             -                1.25          1.65              1.25          1.65
  3             -             -                1.34          2.36              1.34          2.36
  4             -             -                1.38          1.95              1.38          1.95
  5            1.45          1.99              1.25          2.06              1.43          1.99
  Average                                                                                    1.95

Table 9.13 Total Std E WTE per 100 Complexity Adjusted Admissions (CAA) – Acute

             LONDON                          NON LONDON                      TOTAL ACUTE
  Quintile   Complexity   Std E WTE         Complexity   Std E WTE          Complexity   Std E WTE
             Index        per 100 CAA       Index        per 100 CAA        Index        per 100 CAA
  1             -             -                1.19          1.56              1.19          1.56
  2             -             -                1.16          1.84              1.16          1.84
  3             -             -                1.23          1.71              1.23          1.71
  4            1.15          2.11              1.26          1.83              1.23          1.90
  5            1.16          2.20              1.18          2.15              1.17          2.19
  Average                                                                                    1.76

BENCHMARKING SCENARIOS

The differences in staff admission cover ratios analysed above contribute to an overall volume variance of 35%, described as Volume Variance A and representing the status quo, having standardised for grade and complexity. Benchmarking or best practice assumptions have been applied to estimate the potential scope for narrowing this spatial volume variance.

Volume Variance B, a moderate set of peer-group assumptions, has been derived by taking the ratios identified in points (b) – (d) above and adjusting them to the average ratios for their type of trust, i.e. acute or teaching. Table 9.15 shows that the impact is to reduce the volume variance to 20%.

Volume Variance C, a tighter set of assumptions, brings the staff:workload ratio into line with that of the most productive quintile, i.e. quintile 1. This was achieved by applying (i) a cover ratio of 1.56 in non-London acute trusts to match quintile 1, and (ii) a ratio of 1.66 for London acute trusts 41. The impact of this adjustment is to bring the volume variance down to 15% (Table 9.16).

It is important to emphasise that these are ‘what if...?’ scenarios, and not a statement of what could or should be. They allow us to explore the scale and location of productivity differentials but cannot be interpreted as adjudicating on what is truly avoidable in cost terms.

Table 9.14 Adjustments Made to Ratios

  Trust Type   London / Non-London   Original Ratio   Revised Ratio
  Acute              N/L                  1.84             1.76
  Teaching           N/L                  2.36             1.95
  Acute              L                    2.11             1.76
  Acute              L                    2.20             1.76
  Acute              N/L                  2.15             1.76

41 The rationale of this is that some inflation may be warranted to reflect labour market conditions; the marginal 0.1 equates to 6% of the base ratio of 1.56, which is also consistent with the apparent labour market element of price variation (1.3/18.3).
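The recalculations in Tables 9.15 and 9.16 are ward-weighted averages of the (revised) cover ratios for each trust type and location. As an illustration only, the sketch below reproduces the quintile 5 figure under the Volume Variance B assumptions; the cover ratios and ward weightings are taken from Tables 9.10, 9.14 and 9.15, and the list layout and names are ours.

```python
# Sketch of the weighted cover-ratio recalculation (quintile 5, Volume Variance B).
# Each entry: (revised cover ratio, share of the quintile's wards).
quintile5_b = [
    (1.76, 0.33),  # London acute, capped at the acute group average
    (1.85, 0.08),  # London specialist
    (1.99, 0.36),  # London teaching
    (1.76, 0.20),  # Non-London acute, capped at the acute group average
    (2.06, 0.03),  # Non-London teaching
]

revised = sum(ratio * weight for ratio, weight in quintile5_b)
print(round(revised, 2))  # ~1.86 versus the reported 2.09, a reduction of ~0.23
```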


Table 9.15 Recalculation of Quintile Cover Ratios – Volume Variance B

  Quintile   Trust Type   Std E WTE per 100 CAA    Ward weighting       Weighted cover ratio           Reported    Difference
                          London   Non-London      London  Non-London   London   Non-London   Total    original
  1          Acute          -         1.56           0%       99%         -         1.55       1.55
             Specialist     -         0.77           0%        1%         -         0.01       0.01
             Teaching       -          -             0%        0%         -          -          -
             Total                                   0%      100%         -         1.55       1.55       1.55        0.00
  2          Acute          -         1.76           0%       63%         -         1.12       1.12
             Specialist     -         2.71           0%        1%         -         0.04       0.04
             Teaching       -         1.65           0%       35%         -         0.58       0.58
             Total                                   0%      100%         -         1.74       1.74       1.79        0.05
  3          Acute          -         1.71           0%       73%         -         1.25       1.25
             Specialist     -         1.48           0%        4%         -         0.06       0.06
             Teaching       -         1.95           0%       23%         -         0.44       0.44
             Total                                   0%      100%         -         1.75       1.75       1.85        0.10
  4          Acute         1.76       1.83          18%       48%        0.32       0.88       1.20
             Specialist     -         1.90           0%        6%         -         0.11       0.11
             Teaching       -         1.95           0%       28%         -         0.55       0.55
             Total                                  18%       82%        0.32       1.54       1.85       1.91        0.06
  5          Acute         1.76       1.76          33%       20%        0.58       0.34       0.93
             Specialist    1.85        -             8%        0%        0.15        -         0.15
             Teaching      1.99       2.06          36%        3%        0.72       0.06       0.78
             Total                                  77%       23%        1.45       0.41       1.86       2.09        0.23

  Total volume variance: 20% (reported original 35%; a reduction of 15 percentage points).

Table 9.16 Recalculation of Quintile Cover Ratios – Volume Variance C

  Quintile   Trust Type   Std E WTE per 100 CAA    Ward weighting       Weighted cover ratio           Reported    Difference
                          London   Non-London      London  Non-London   London   Non-London   Total    original
  1          Acute          -         1.56           0%       99%         -         1.55       1.55
             Specialist     -         0.77           0%        1%         -         0.01       0.01
             Teaching       -          -             0%        0%         -          -          -
             Total                                   0%      100%         -         1.55       1.55       1.55        0.00
  2          Acute          -         1.56           0%       63%         -         0.99       0.99
             Specialist     -         2.71           0%        1%         -         0.04       0.04
             Teaching       -         1.65           0%       35%         -         0.58       0.58
             Total                                   0%      100%         -         1.61       1.61       1.79        0.18
  3          Acute          -         1.71           0%       73%         -         1.25       1.25
             Specialist     -         1.48           0%        4%         -         0.06       0.06
             Teaching       -         1.95           0%       23%         -         0.44       0.44
             Total                                   0%      100%         -         1.75       1.75       1.85        0.10
  4          Acute         1.66       1.83          18%       48%        0.30       0.88       1.18
             Specialist     -         1.90           0%        6%         -         0.11       0.11
             Teaching       -         1.95           0%       28%         -         0.55       0.55
             Total                                  18%       82%        0.30       1.54       1.84       1.91        0.07
  5          Acute         1.66       1.56          33%       20%        0.55       0.31       0.86
             Specialist    1.85        -             8%        0%        0.15        -         0.15
             Teaching      1.99       2.06          36%        3%        0.72       0.06       0.78
             Total                                  77%       23%        1.42       0.37       1.79       2.09        0.30

  Total volume variance: 15% (reported original 35%; a reduction of 20 percentage points).

CREATING A SPECIFIC COST INDEX

We have estimated an 18.3% variance in the price of labour between the first and fifth quintile, made up of 10.7% geographic allowances, 6.3% skill mix and 1.3% residual or grade drift. We went on to observe a 35% variance (Volume Variance A) in labour inputs as measured by the number of Std E WTEs per 100 complexity adjusted admissions and then, on the basis of assumptions, a reduced variance of 20% (Volume Variance B) and then 15% (Volume Variance C). The volume variance has been calculated using Standard E WTEs, to facilitate comparison between trusts, and has been adjusted for grade mix. In combining the price and volume variances we need to exclude the grade mix adjustment from the price variance (to avoid double-counting), reducing the price mix effect from 18.3% to 10.7%.

By bringing the price and volume variance together we can construct an index that describes the spatial variation in total wage costs and can be compared to the staff MFF index 42, shown in Figure 9.1. The index is constructed by adopting Quintile 1 as the common starting position. The point of interest is therefore the extent to which indexes A, B and C move throughout the MFF range rather than their relationship with a specific quintile.

Table 9.17 SCA A Index (Std E Price Variance plus 35% Volume Variance)

  Q       Wage Cost per   Price Variance   Std E WTE per   Volume Variance   Total Variance   SCA INDEX A
          Std E WTE                        100 CAA
  1          28,112             -              1.55               -                -             0.890
  2          27,771           -1.2%            1.79            15.5%            14.1%            1.015
  3          27,547           -0.8%            1.85             3.9%             3.0%            1.042
  4          28,655           +3.9%            1.91             4.1%             8.2%            1.115
  5          31,124           +8.8%            2.09            11.4%            21.2%            1.303
  Q1-Q5       10.7%          +10.7%           34.8%            34.8%            46.5%            46.5%

Table 9.18 SCA B Index (Std E Price Variance plus 20% Volume Variance)

  Q       Wage Cost per   Price Variance   Std E WTE per   Volume Variance   Total Variance   SCA INDEX B
          Std E WTE                        100 CAA
  1          28,112             -              1.55               -                -             0.890
  2          27,771           -1.2%            1.74            12.3%            10.9%            0.986
  3          27,547           -0.8%            1.75             0.6%            -0.2%            0.985
  4          28,655           +3.9%            1.85             6.5%            10.6%            1.080
  5          31,124           +8.8%            1.86             0.6%             9.5%            1.164
  Q1-Q5       10.7%          +10.7%           20.0%            20.0%            30.9%            30.9%

42 Within all the tables in this section: 1. The Price variance (excluding grade mix) is calculated as the difference in the Wage Cost per Std E WTE between each quintile, expressed as a percentage of the wage cost found in quintile 1 (e.g. Table 9.17, Q1 to Q2: [27,771 – 28,112] / 28,112 = -1.2%). 2. The Volume variance is calculated as the difference in the admissions ratio between each quintile, expressed as a percentage of the admissions ratio found in quintile 1 (e.g. Table 9.17, Q4 to Q5: [2.09 – 1.91] / 1.55 = 11.4%). 3. The total variance equals the Price variance plus the Volume variance uplifted by the Price variance, reflecting that the additional labour inputs cost more due to price in the upper quintiles.
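On this reading of footnote 42 (point 3 taken as: each quintile-to-quintile step adds the price step plus the volume step uplifted by that price step, chained from the quintile 1 staff MFF value of 0.890), the index construction can be expressed as a short sketch. The inputs are the Table 9.17 figures, the names are ours, and the small differences from the published column reflect rounding in the inputs.

```python
# Sketch of the SCA index construction set out in footnote 42, using the
# Table 9.17 (Volume Variance A) figures; output is approximate.
wage_per_std_e = [28_112, 27_771, 27_547, 28_655, 31_124]  # quintiles 1-5
cover_ratio = [1.55, 1.79, 1.85, 1.91, 2.09]               # Std E WTE per 100 CAA
STAFF_MFF_Q1 = 0.890                                       # common starting point

index, cumulative = [STAFF_MFF_Q1], 0.0
for q in range(1, 5):
    price_step = (wage_per_std_e[q] - wage_per_std_e[q - 1]) / wage_per_std_e[0]
    volume_step = (cover_ratio[q] - cover_ratio[q - 1]) / cover_ratio[0]
    # extra labour inputs are costed at the stepped price (footnote 42, point 3)
    cumulative += price_step + volume_step * (1 + price_step)
    index.append(round(STAFF_MFF_Q1 * (1 + cumulative), 3))

print(index)  # ~[0.89, 1.015, 1.042, 1.113, 1.304] vs the published 0.890, 1.015, 1.042, 1.115, 1.303
```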


Table 9.19 SCA C Index (Std E Price Variance plus 15% Volume Variance)

  Q       Wage Cost per   Price Variance   Std E WTE per   Volume Variance   Total Variance   SCA INDEX C
          Std E WTE                        100 CAA
  1          28,112             -              1.55               -                -             0.890
  2          27,771           -1.2%            1.61             3.9%             2.6%            0.913
  3          27,547           -0.8%            1.75             9.0%             8.2%            0.985
  4          28,655           +3.9%            1.84             5.8%            10.0%            1.074
  5          31,124           +8.8%            1.79            -3.2%             5.3%            1.121
  Q1-Q5       10.7%          +10.7%           15.5%            15.5%            26.0%            26.0%

There is a large step-wise progression between quintiles 1 and 2 for both SCA A and SCA B, which is eliminated in SCA C by adopting quintile 1 as the benchmark of best practice for acute trusts, thereby narrowing the gap between quintiles 1 and 2. The staff MFF follows a smoother gradient between these two lower quintiles. Thereafter, SCA B (the moderate scenario) falls quite closely into line with the staff MFF, with a 2%-3% shortfall in its progression (from 0.89 to 1.164 for SCA B compared with 0.89 to 1.19 for the staff MFF quintile mid-points).

Figure 9.1 A Comparison of the Three Specific Cost Indexes with the Staff MFF

  [Line chart: "Staff MFF and SCA - A, B & C", indexes plotted by quintile. Underlying values:]

  Quintile      1        2        3        4        5
  Staff MFF   0.890    0.944    0.985    1.056    1.191
  SCA A       0.890    1.015    1.042    1.115    1.303
  SCA B       0.890    0.986    0.985    1.080    1.164
  SCA C       0.890    0.913    0.985    1.074    1.121

IMPLICATIONS OF THE PRICE AND VOLUME VARIANCE ANALYSIS
The price variance analysis identified 18.3% spatial variation in the cost per wte, in which 10.7% comprised geographical allowance, 6.3% was a skill mix effect associated with complexity and/or trust type, and 1.3% was a residual grade mix effect located in London acute hospitals. The distinction between avoidable and unavoidable elements is not entirely clear-cut. The geographical allowance, explained by London Weighting, is unavoidable to the trust. The skill mix effect, explained by workload complexity or teaching/specialist status, may be regarded as an unavoidable response to workload demands. It is distinct from the labour market, however, and should be paid for through higher tariff income. The residual 1.3% appears to be a grade drift effect associated with London acute trusts which may, on the one hand, be interpreted as unavoidable if it is perceived to be a necessary response to aid recruitment and retention in a competitive labour market that is partly determined by the proximity of several teaching hospitals. On the other hand, it is open to interpretation as an avoidable premium which is paid by London trusts simply because their budget allows it. Labour market theory would quantify the 10.7% geographical allowance plus the 1.3% grade drift as unavoidable labour market responses.

The volume variance quantifies three scenarios in which (A) the status quo shows a 35% gap in staff:workload ratios, which is whittled down to (B) 20% and then (C) 15% by applying benchmarking assumptions. According to these assumptions, up to 20% of the staff:workload ratio may be defined as avoidable if all acute trusts were to adopt best practice and work at the efficiency level of quintile 1 (scenario C). A differential of 15% is considered to be avoidable under the more moderate assumption of applying peer group averages (scenario B).

Benchmarking Assumptions – How Good Are They?
There has been no discussion so far of the validity of these benchmarking assumptions. The use of peer group comparators is well established as a means of measuring performance and setting targets in the health sector (e.g. reviewed in Wait, 2004). The approach taken here of using hospital type as a peer group and standardising inputs and outputs for workload volume, complexity and grade mix is transparent, even-handed and fairly traditional. Scenario B suggests that the peer group average is a reasonable productivity target for hospitals to achieve. Scenario C indicates that 'best practice' demonstrated in Quintile 1 is a reasonable target within the hospital type.

There is an underlying assumption throughout the analysis that location, in terms of geographical Quintiles 1–5, does not justify productivity differences. The debate within the Reference Panel (Chapter 8) indicated that non-London trusts took this view. The labour market perspective, however, suggests an alternative approach. Here it is necessary to segment peer groups according to geography and, on that basis, it would not be reasonable to compare Quintile 1 with Quintile 5. The differences between them would not be construed as avoidable. Rather, Quintile 5's poorer productivity would be interpreted as an unavoidable labour market response based on poorer quality of inputs (referring to staff mix rather than individuals). Evidence for this is found in higher vacancy rates associated with higher turnover and higher use of bank and agency staff, all of which produce fragmentation and an unavoidable inefficiency premium.

This labour market justification for differences in acute hospital productivity (Table 9.13) between Quintile 5 and the rest of England is weakened, however, by the evidence that London teaching hospitals (Table 9.12) perform as well as or better than their Quintile 2–4 counterparts in terms of nurse productivity. There is no apparent inefficiency premium here. It has been suggested that, in a competitive London labour market, teaching hospitals have first pick of nursing staff, so that labour market problems are concentrated in the acute hospital sector. We know that London teaching hospitals employ higher proportions of junior doctors (Table 9.22), giving a medical weighting to the workforce.


Drawing Data Sources Together
Turnover rates (% nurse leavers) for 2003/4, drawn from national census data (see Table 7.2), rise throughout the MFF range, with the highest rates occurring in Quintile 5 teaching hospitals. Vacancy rates, derived from establishment and in-post figures in the HCC data set, also rise throughout the MFF quintile range (Table 9.20), from 6% in Quintile 1 to 21% in Quintile 5.

Table 9.20 Ward Nurse Vacancy (Establishment – In-post) Rates

Quintile      Acute   Teaching   Specialist   Total
1               6%         -          9%        6%
2               5%        10%         8%        7%
3              10%         7%         5%        9%
4              16%        10%        12%       14%
5              22%        22%        22%       22%
Total          10%        12%        13%       11%

Data has been obtained from the medical and non-medical census showing the number of doctors per nurse. There is a high level of consistency in the number of career grades per nurse, both across quintiles and across hospital types, at an average of 0.16:1 (Table 9.21). Total doctors, which include junior grades, show more variability, with the highest level of 0.38 doctors per nurse occurring in Quintile 5, compared to 0.32 in Quintile 1, due to a markedly higher ratio of 0.41:1 in Quintile 5 teaching hospitals. A possible explanation is that junior doctors provide London teaching hospitals with a supplementary labour force that allows the sector to function with fewer nurses (i.e. substituting for nurses). Greater supply of junior doctors allows London teaching hospitals to reduce demand for nurses, explaining their position vis-à-vis London acute hospitals and teaching hospitals in quintiles 2–4. Another potential explanation is that higher budgets in quintile 5 are spent by teaching hospitals on doctors and by acute hospitals on nursing staff. It is apparent (see Table 12.2) that productivity of junior doctors is very low for house officers (5%), low for senior house officers (12%) and middling for registrars (47%) in relation to consultants, who are weighted at 100%.

Table 9.21 Career Grade per Nurse wte

Quintile      Acute   Specialist   Teaching   Total
1              0.16         0.23          -    0.16
2              0.16         0.13       0.15    0.15
3              0.17         0.20       0.14    0.16
4              0.16         0.14       0.15    0.16
5              0.16         0.13       0.15    0.16
Total          0.16         0.14       0.15    0.16

Table 9.22 Doctor per Nurse wte

Quintile      Acute   Specialist   Teaching   Total
1              0.32         0.34          -    0.32
2              0.31         0.28       0.37    0.33
3              0.33         0.38       0.36    0.34
4              0.34         0.31       0.34    0.34
5              0.36         0.29       0.41    0.38
Total          0.33         0.30       0.37    0.34


A further dimension for consideration draws on our micro study findings by asking 'who are the bank nurses?' There is anecdotal evidence (drawn from the qualitative survey and Reference Panel) that nurses in London are working extra shifts to enhance their earnings. This would be reflected here as part of the volume variance (showing as additional bank wte) and would mask the price variance between London and other parts of the country. The tables below show relative wage rates per wte. Wages to individuals will be higher in quintile 5 if individuals are working in excess of 1 wte.

Table 9.23 Total Wage Bill per wte (£)

Quintile      Acute   Specialist   Teaching    Total
1            24,482       24,161          -   24,480
2            24,041       22,650     25,307   24,478
3            24,332       26,568     24,364   24,473
4            25,679       26,819     25,885   25,815
5            27,162       33,101     30,177   28,925
Total        24,892       28,495     26,534   25,468

Table 9.24 Total Wage Bill per Std E wte (£)

Quintile      Acute   Specialist   Teaching    Total
1            28,469       29,085          -   28,473
2            27,785       24,468     28,781   28,091
3            28,208       28,170     27,082   27,933
4            29,420       28,507     28,382   29,031
5            30,728       33,086     31,988   31,487
Total        28,724       29,757     29,144   28,904

Conclusion
The discussion of benchmarking assumptions highlights the difficulty in making an assessment of avoidable versus unavoidable cost differences. If Scenario A (status quo) were to be adopted, this would imply that all productivity differences could be explained away by geography. Scenario C gives no latitude to labour market effects on productivity, suggesting that all trusts should be capable of emulating Quintile 1 hospitals. Scenario B takes a middle position, combining an element of avoidable cost difference with an acceptance of geographical variability. Scenarios B and C, when combined with the price variance, show a similar range and gradient to the staff MFF. Scenario B offers a pragmatic response, caveated by all the preceding analysis. On this basis we conclude that 15% of the volume variance among ward nurse staffing is avoidable, and is associated with usage of bank and agency staff.

MULTIVARIATE ANALYSES
The previous sections used arithmetic techniques to analyse the effect of a range of variables, such as hospital type and location, on nursing costs. They did this by segmenting the data set and producing a series of detailed tables for each category. This section introduces multivariate (multiple regression) analysis as a way of examining a range of variables simultaneously in a single model. It also has the capacity to examine qualitative factors, such as rurality and quality markers, all at once.

Note on Use of Multivariate Regression
Multiple regression analysis allows us to discriminate between the effects of the explanatory variables, making allowance for the fact that they may be correlated. A dependent variable (e.g. cost per nurse) may be linked to a range of factors (independent variables) which influence that cost, e.g. type of hospital, number of patients treated. The coefficient of determination, R2, measures the proportion of the variation in the dependent variable explained by the regression. The model specification is developed by adding in the explanatory variables and observing the change in R2 to see what impact the new variables have on the model. (The R2 will never decrease, and generally will increase, when another variable is added to the equation, as long as all the original explanatory variables are retained. The adjusted R2 corrects for the increasing number of explanatory variables and so may decrease.) When variables are inter-correlated, any one of them may mop up the lion's share of the R2 if it is entered into the model first, i.e. it will demonstrate high levels of explanatory power. The order of entry of explanatory variables into the model is therefore a stringent test of robust model specification.
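The order-of-entry point can be illustrated with a small synthetic example. This is a sketch only: the variable names and data below are invented and are not the study data set.

```python
# With correlated explanatory variables, whichever one enters the model first soaks up
# most of the shared explanatory power, so incremental R2 gains depend on order of entry.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 165                                    # roughly the size of the HCC nursing data set
house_price = rng.normal(200_000, 60_000, n)
geo_allowance = 0.00002 * house_price + rng.normal(0, 1.5, n)        # strongly correlated pair
mff = 0.85 + 0.000001 * house_price + 0.01 * geo_allowance + rng.normal(0, 0.02, n)

def adj_r2(y, cols):
    """Adjusted R2 of an OLS regression of y on the given columns."""
    X = sm.add_constant(np.column_stack(cols))
    return sm.OLS(y, X).fit().rsquared_adj

print(adj_r2(mff, [geo_allowance]))                # geographical allowances entered first
print(adj_r2(mff, [geo_allowance, house_price]))   # marginal gain from adding house prices
print(adj_r2(mff, [house_price]))                  # house prices alone also score highly
```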

Specifying a Model – Factors Associated with the Staff Market Forces Factor
The analysis sets out to try to disentangle the web of associations between staff costs, workload and the staff Market Forces Factor, and specifically:
• the extent to which the Market Forces Factor reflects avoidable or unavoidable staff costs
• to confirm the trends demonstrated in the previous sections about the impact of the MFF on costs and workload

Because of the high level of inter-correlation, it is not clear what the correct way of specifying the model is. There are three sets of alternatives/issues:

1) Whether (a) to treat the MFF as the dependent variable and explore the extent to which staff costs and workload predict variations in the MFF, or (b) to ask whether the MFF as an independent variable itself directly predicts unavoidable staffing costs and workload.

   i. MFF as dependent variable – this exercise takes the staff MFF, which is an index of general (private) labour market relative prices, and investigates the extent to which hospital responses (in terms of wages and volumes of nursing staff employed) are consistent with the index. It addresses the perception that the staff MFF bears little direct relationship to the NHS workforce. We ask the question 'is the nursing workforce geographically patterned in the same way as the staff MFF?'

   ii. MFF as independent variable – this exercise builds a model that tries to explain movement in nursing costs through a range of different measures. The aim is to check whether, after taking as many variables as possible into account, the MFF exerts an impact of its own. It addresses two questions. Firstly, "What are the main drivers of nursing costs?" Then, "After accounting for these drivers, is there a residual geographical cost variation that can be explained by the MFF alone?"

2) Whether to separate out skill mix from the costs or to combine them in costs per Standard E variables.

3) Finally, there is the issue of which independent variables to include. In principle this should be a theory-driven exercise, but we have to recognise that the high levels of inter-correlation mean that many of the variables may be proxying for others. For this reason, we have tested all possibly relevant variables in the regressions but entered them in what we see as the order of relevance. In particular, in principle:

• When considering the MFF as the dependent variable, the most important variables are the costs per wte nurse, and there is a query as to whether we should weight the observations (the trust values) by size or leave them unweighted.

• When considering costs or workload as the dependent variables, the important issue is whether or not the MFF variable has an effect, either on its own or when controlling for type of hospital and location and then for all other possible variables that might affect price or workload. The volume regressions are weighted by admissions, the price regressions by all wte.

Mean house prices have been incorporated into the model as an independent variable, after considering the case for and against inclusion. The argument against inclusion of a house price index is that it does not belong in an economic model of hospital costs since it is nothing to do with the supply of hospital services. It is an environmental factor that affects everybody in the national and local economy. The main argument in favour of inclusion of a measure of house prices is that it addresses the perceptions of the Reference Panel (micro study) that house prices were a strong driver behind NHS labour markets and geographical costs. Some members argued that house prices, rather than local private sector wage rates, should be reflected in the staff MFF. This was proposed for two reasons: (a) a perception that private sector wage rates had little connection with the NHS labour market but that house prices had a strong connection, as a cost of living factor, and (b) a view by some trusts that their house prices were high while local wage rates were low, so that there was a mismatch between the two measures. The reason for including this variable in the regression modelling is therefore to investigate the degree of connectedness between hospital cost behaviour, general labour market (staff MFF) and cost of living (house price) measures. Regression models provide a means of unpicking the relationship between variables. In terms of methodology, there is a rationale for addressing perceptions through the growing influence of action research, in which participants' subjective views are allowed to broaden the direction of enquiry, even if they do not fit a priori theories (e.g. Crilly & Plant, 2007).

MFF as Dependent Variable
Model 1
The staff MFF moves in line with the price of labour. 69% of the variation in the MFF can be explained by variation in the pay per wte of nursing staff (standardised grade E wte including geographical allowances). This is a strong and significant finding, which demonstrates that there is a good fit between movement in the cost per wte of nursing and the general labour market (represented by the MFF). A further 8% of explanatory power (taking the adjusted R2 from 69% to 77%) is provided by adding a range of other variables, of which the following were significant (at p=0.01):

• Mean house price – the implication is that mean house price explains an element of the staff MFF (accounting for 5%–6% of variation) that is not directly reflected in spatial pay differences. We know that house prices are strongly correlated with the staff MFF index. The extra explanatory power provided by the variable could be interpreted as an additional cost of living measure that is built into the staff MFF and which would potentially be reflected in other areas of hospital expenditure (since it is funded through the MFF);

• Teaching status – there is a spatial pattern to teaching hospital status (with a preponderance in London) that is consistent with a further 2% of variation in the MFF.

The model incorporated a range of other variables which did not have any significant impact, including rurality and workload complexity. Appendix 9.6 (Tables App9.6.9 and App9.6.10) gives details of the model specification.

Model 2
We gained further insight by changing the order in which variables were entered into the model. Model 2 introduces geographical pay allowances last of all into the equation. We find that basic pay (for standardised grade E excluding geographical allowances) does not vary in line with the MFF. This is as we would expect, consistent with a nationally negotiated underlying wage structure. Mean house prices by themselves explain 55% of the variation in the staff MFF, implying that mean house prices are a proxy for geographical allowances since they work in step (even though we observed in Model 1 that mean house prices do have a marginal independent effect of their own). Again, teaching status has a significant independent relationship with the MFF. The results of this model highlight the close interaction between general labour market wage rates, house prices and geographical allowances.

Price Regressions
Price regressions describe models where we looked at the effect of a range of variables on the cost per wte (standardised grade E). We looked at price including and excluding geographical allowances. The specification was that price per all wte (including bank and agency substitutes) would be affected, in decreasing order of likelihood, by:
a) Mean house price
b) Type of hospital
c) Complexity and size (represented by FCE)
d) Quality markers
e) Grade mix and MFF


Only 26% of price variation excluding geographical allowances could be explained by the model, compared to 59% of price variation when geographical allowances are included. The inter-correlation between geographical allowances and mean house prices is again evident, since mean house price alone explains 33% of the variation. A further 24% of explanatory power (taking the adjusted R squared to 57%) is provided by type of hospital (i.e. teaching, acute and specialist) in relation to London location, all of which might be regarded as unavoidable. The remaining variables (including the MFF) sharpen up the model by a modest 2%. There is a small residual positive effect of grade mix and MFF on price, suggesting that the MFF might be providing cash that can be used for buying more expensive nurses (0.6% of variation). This corresponds with the arithmetic price variance analysis earlier, which identified 1.3% of price variance as grade drift in London which, depending upon vantage point, is an unavoidable market cost or an avoidable element of budgeted expenditure.

Volume Regressions
Volume regressions look at differences in staffing levels and try to explain them by a range of other factors. The dependent variable here is the ratio of the number of standardised grade E nurses to the number of complexity adjusted admissions.

• The most important finding is that only a small fraction of the variance is accounted for, which is surprising given the range of variables that have been included in the equation.

• The other important point is that, even when the MFF is included after other variables (compare Appendix 9.6, Tables 9.24 and 9.25), it does appear to be playing a significant role: when included first, it accounts for 10% of the variance; when included after all other variables, it still raises the variance accounted for from 16% to 22%. The implication here is that there is an element of variation in staffing levels which is best accounted for by the MFF. This is open to interpretation as an avoidable cost which is spatially distributed. The finding is consistent with the arithmetic analysis in the preceding section.

More importantly, a breakdown (Table App9.6.7A) and plot (Figure App9.6.7B) of the ratio of standardised grade E nurses to the number of complexity adjusted admissions shows that there are three clear outliers with an MFF of less than 1 but a ratio greater than 3.5. When these are excluded, we obtain the results in Table App9.6.8. Here the MFF plays an even more important role, raising the variance accounted for from 22% to 32%. Clearly, the type and location of hospitals are unavoidable in the short to medium term; the extent to which the other variables are avoidable is debatable. It is also worth noting that the coefficients on both specialist and teaching hospitals in London are negative and significant at the 10% level, which implies that they have fewer staff per admission, whilst acute hospitals have more. Again, this is consistent with the earlier arithmetic analysis.
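For readers wishing to reproduce this general form of specification, the sketch below shows a weighted volume regression with the outlier exclusion applied. The data, variable names and any coefficients produced are invented placeholders, not the study's results.

```python
# Hedged sketch of a volume regression weighted by admissions, with outlier exclusion.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 165
mff = rng.uniform(0.85, 1.28, n)
teaching = rng.integers(0, 2, n)
df = pd.DataFrame({
    "mff": mff,
    "teaching": teaching,
    "london": (mff > 1.10).astype(int),
    "admissions": rng.integers(20_000, 120_000, n),        # used as regression weights
    # Std E wte per 100 complexity adjusted admissions, loosely increasing with MFF
    "std_e_per_adm": 1.2 + 0.5 * mff - 0.1 * teaching + rng.normal(0, 0.25, n),
})

df = df[df["std_e_per_adm"] <= 3.5]          # cf. exclusion of the three extreme outliers

model = smf.wls("std_e_per_adm ~ mff + teaching + london",
                data=df, weights=df["admissions"]).fit()
print(round(model.rsquared, 2))
print(model.params)
```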


DISCUSSION OF HCC ANALYSIS
Spatial Variation
We have identified spatial variation in nursing costs and examined their drivers. Across the MFF spectrum, from the midpoint of quintile 1 to the midpoint of quintile 5, where the uplift in the MFF is 34%, the increase in cost per wte equals 18.3%, described as price variance. Over half of this is associated with basic pay and geographical allowances (10.7%) and the rest is due to differences in grade mix (7.6%). We distinguished between skill mix associated with greater complexity of trust workload (6.3%) and a residual grade drift effect in London, equal to 1.3%. There is also spatial variation in ratios of nurses to workload, defined here as admissions cover ratios of standardised grade E wte per 100 complexity adjusted admissions. Volume variance between quintiles 1 and 5 is 35%, much of which is located in London acute trusts.

Avoidable/Unavoidable
Two approaches have been adopted to examine the avoidable–unavoidable element of raised costs. The first was the volume variance approach, which applied a benchmarking or best practice model, while the second was a multivariate regression modelling exercise. They were consistent in the direction of their findings, but the arithmetic volume variance approach was specific in quantifying potential avoidable costs (15–20 percentage points of variation out of 35 points) whereas the regression analysis distinguished between explained (unavoidable) and unexplained (potentially avoidable) cost variation. Variation in price was more easily explained (R squared 59%) by spatial patterns and structural features, such as hospital type, than variation in volume (R squared 32%). The lower R squared implies that there is a larger element of unexplained volume (productivity) variance (68%) and so a higher probability that much of the volume variance is avoidable. This is consistent with the arithmetic benchmarking approach, which found that price variance was largely unavoidable whereas approximately half of the volume variance could be interpreted as avoidable.

Feasibility
This analysis has led us to conclude that a specific cost approach can help to describe the impact of the general labour market on nursing staff costs using everyday trust data. However, it also has limitations which would prevent it from being employed as a general approach. Firstly, the data set on which the analysis is based is a bespoke data set which required a great deal of work on the part of the trusts and the Healthcare Commission to compile. Attempting to recreate this data set on an annual basis, not to mention one that covers all staff groups, would be prohibitively expensive in terms of cost and time. But perhaps the most important limitation is the degree of subjectivity required in the definition of avoidable and unavoidable variances and the impact this definition has on the overall result. In the analysis we have equated "avoidable" with inefficiency, yet there are potential arguments that would identify some element of this inefficiency as unavoidable, for example resulting from a fragmented labour market, or occasioned by higher turnover. Overall we believe the approach can be used to help describe, in a general fashion, how costs of care vary with location, but it is not suited to defining precisely why such costs vary.


CHAPTER 10. MEDICAL STAFFING
This chapter considers medical staffing in 173 hospital trusts in England based on the September 2004 census. The data is combined with activity volumes for the period 2004/5 to explore productivity variations between trusts. Evidence relating to costs has been drawn from our micro sample of 14 trusts. We have analysed resource differences on a geographical basis and then considered scope for change in the workforce. The aim was to (a) identify spatial patterns of resource usage, (b) address the question of avoidable versus unavoidable cost differences and (c) consider the implications for developing a Specific Cost Approach to medical staffing in connection with the MFF. Environmental and structural factors, such as professional standards and medical education, have been taken into account.

A quantitative model was designed which allows us to combine a range of variables, e.g. type of hospital (e.g. teaching), location, workload volume including outpatients and A&E attendances, complexity, and grade mix. A benchmarking or best practice approach has been used to identify 'scope for change' or potential avoidable costs in relation to the volume and mix of medical staff. The approach draws on the practices used in the analysis of ward nursing staff, developed in the previous chapter. National data is presented on a quintile basis, in which trusts are ranked according to their MFF score and where quintile 1 is low.

We also examine the extent to which we can apply a price and volume variance approach in examining medical staff. Since part of the study is to test the feasibility of methods, any difference in the behaviour between medical and nursing staff, which limits the usefulness of a price and volume approach, is instructive.

DATA: TRUST PROFILE
This section describes the composition of trusts in the sample in terms of geography, trust type and MFF ranking and gives an overview of staffing, workload and grade mix.

Geography and Trust Type
We have ranked trusts according to their MFF index and divided the country into quintiles containing broadly similar numbers of trusts 43. Trusts are categorised as acute (128), teaching (25) and specialist (20). Among the 25 teaching hospitals, 9 are in London. Quintile 5, which is dominated by London but contains some non-London trusts, has the largest number of teaching hospitals (10), while Quintile 2 has the second largest (7). There are no teaching hospitals recorded in Quintile 1 44. Table 10.1 gives details.

London has 18% of the hospital trusts but only 15% of finished consultant episodes and admissions in the country. On the other hand, it accounts for 22% of A&E attendances. Table 10.2 gives details.

43 Quintiles here are defined by their MFF index within the ranges described in Table 10.3a.
44 Royal Cornwall Hospitals Trust has recently become a teaching hospital.


Table 10.1 Distribution of Trusts

                                     Quintile
Type        Location            1     2     3     4     5   Grand Total   % Trusts
Acute       London              -     -     -     6    12        18          10%
Acute       Non-London         35    22    29    17     7       110          64%
Acute Total                    35    22    29    23    19       128          74%
Specialist  London              -     -     -     -     5         5           3%
Specialist  Non-London          1     5     3     6     -        15           9%
Specialist Total                1     5     3     6     5        20          12%
Teaching    London              -     -     -     -     9         9           5%
Teaching    Non-London          -     7     3     5     1        16           9%
Teaching Total                  0     7     3     5    10        25          14%
All Trusts  London              -     -     -     6    26        32          18%
All Trusts  Non-London         36    34    35    28     8       141          82%
Grand Total                    36    34    35    34    34       173         100%

Table 10.2 Balance Between London and Non-London

                                               London   Non-London   England Total   London as %
                                                                                      of England
Number of Trusts                                   32          141             173         18%
Consultant                                      4,695       19,063          23,758         20%
Associate Specialist & Staff Grade                798        4,568           5,365         15%
Hospital Practitioner & Clinical Assistant        135          701             836         16%
Sub-Total Career Grade                          5,628       24,331          29,959         19%
Registrar                                       3,908       10,863          14,771         26%
SHO                                             3,634       14,159          17,794         20%
HO                                                751        3,501           4,252         18%
Sub-Total Junior Doctor                         8,293       28,524          36,817         23%
Total Doctor wte                               13,921       52,855          66,776         21%
FCE                                         1,970,675   11,303,774      13,274,449         15%
Admissions                                  1,765,420    9,931,715      11,697,135         15%
Bed days                                    7,977,197   37,486,611      45,463,808         18%
A&E Attendances                             3,299,865   11,833,692      15,133,557         22%
Outpatient Attendances                      7,760,696   34,144,261      41,904,957         19%

All Trusts – Staffing and Workload
• Table 10.3a contains an overview of the base data. Table 10.3b standardises this by showing the number of doctors per trust and the activity per doctor. There is a clear division between the lowest quintile (1) and the highest (5), with higher productivity in quintile 1.
• An aggregated workload measure is introduced that adds together FCE with weighted measures of outpatient attendances and A&E attendances. This is called Volume Adjusted Patients.
• A further workload measure is developed by weighting activity (i.e. Volume Adjusted Patients) according to the casemix complexity of the trust. This is described as Complexity and Volume Adjusted Patient (CVAP) activity.


Table 10.3a Summary Data for All Trusts 45

                             Quintile
Data                            1        2        3        4        5   Grand Total
All Doctors wte            12,719   14,151   11,890   12,757   15,259      66,776
Career Grade wte            6,184    6,391    5,484    5,695    6,206      29,959
Junior Doctor wte           6,535    7,760    6,406    7,062    9,054      36,817
No. Trusts                     36       34       35       34       34         173
Min of Staff MFF             0.85     0.93     0.96     1.01     1.10        0.85
Average of Staff MFF         0.91     0.94     0.99     1.04     1.18        1.01
Max of Staff MFF             0.93     0.96     1.01     1.10     1.28        1.28

45 Minimum and maximum points of the MFF range are unique to each quintile at 4 decimal places.

Table 10.3b Productivity Ratios for All Trusts

                                                Quintile
Data                                       1         2         3         4         5   Grand Total
Doctors per Trust                        353       416       340       375       449        386
Career Grade per Trust                   172       188       157       168       183        173
Junior Doctor per Trust                  182       228       183       208       266        213
Activity per Trust
  FCE                                 86,819    88,919    73,604    70,015    63,795     76,731
  Admissions                          75,865    78,328    63,818    62,149    57,535     67,613
  Bed Days                           281,723   280,638   254,062   278,182   218,522    262,797
  A&E Attendances                     87,066    88,568    79,958    89,764    92,275     87,477
  Outpatient Attendances             240,097   273,177   226,199   222,697   249,552    242,225
  Volume Adjusted Patients           108,899   113,729    94,276    91,429    87,374     99,226
  Complexity & Volume Adj Patients   129,652   137,692   116,728   116,251   112,484    122,610
Activity per Doctor
  FCE                                    246       214       217       187       142        199
  Admissions                             215       188       188       166       128        175
  Bed Days                               797       674       748       741       487        681
  A&E Attendances                        246       213       235       239       206        227
  Outpatient Attendances                 680       656       666       594       556        628
  Volume Adjusted Patients               308       273       278       244       195        257
  Complexity & Volume Adj Patients       367       331       344       310       251        318
Activity per Career Grade
  Admissions                             442       417       407       371       315        390
  Complexity & Volume Adj Patients       755       733       745       694       616        708

Figure 10.1 Productivity Ratios for All Trusts (scaled to Base 1 at Quintile 1)
[Line chart across quintiles 1–5, scale 0.50–1.10, showing three series: Admissions per Doctor; Admissions per Career Grade; Complexity & Volume Adjusted Patients per Career Grade]


Acute Hospitals
Acute hospitals are the major group comprising 128 out of 173 hospitals. Quintile 5, which has the largest number of specialist and teaching hospitals, has the smallest number of acute trusts. A distinguishing feature of this group is that A&E attendances per doctor are highest in Q4 and Q5, due to the high number of A&E attendances in London. Throughput per doctor is lowest in Q5 for all other measures.

Table 10.4a Staffing and Workload Summary for Acute Hospitals

                             Quintile
Data                            1        2        3        4        5   Grand Total
All Doctors wte            12,642    7,833    8,933    8,425    6,447      44,279
Career Grade wte            6,132    3,804    4,361    3,850    2,727      20,873
Junior Doctor wte           6,510    4,030    4,571    4,575    3,720      23,406
No. Trusts                     35       22       29       23       19         128

Table 10.4b Productivity Ratios for Acute Hospitals

                                                Quintile
Data                                       1         2         3         4         5   Grand Total
Doctors per Trust                        361       356       308       366       339        346
Career Grade per Trust                   175       173       150       167       144        163
Junior Doctor per Trust                  186       183       158       199       196        183
Activity per Trust
  FCE                                 88,992    87,078    73,539    73,463    64,224     78,695
  Admissions                          77,732    76,208    63,505    65,442    58,011     69,111
  Bed Days                           288,196   273,634   254,460   311,355   205,883    269,993
  A&E Attendances                     89,473    89,414    80,547   109,804   105,373     93,454
  Outpatient Attendances             245,044   248,880   219,278   232,756   220,535    234,020
  Volume Adjusted Patients           111,551   110,106    93,605    96,777    86,303    100,834
  Complexity & Volume Adj Patients   132,265   128,627   113,745   118,601   100,246    120,236
Activity per Doctor
  FCE                                    246       245       239       201       189        227
  Admissions                             215       214       206       179       171        200
  Bed Days                               798       769       826       850       607        780
  A&E Attendances                        248       251       261       300       311        270
  Outpatient Attendances                 678       699       712       635       650        676
  Volume Adjusted Patients               309       309       304       264       254        291
  Complexity & Volume Adj Patients       366       361       369       324       295        348
Activity per Career Grade
  Admissions                             444       441       422       391       404        424
  Complexity & Volume Adj Patients       755       744       756       709       699        737

Figure 10.2 Productivity Ratios for Acute Hospitals (Base 1 at Quintile 1)
[Line chart across quintiles 1–5, scale 0.50–1.10, showing three series: Admissions per Doctor; Admissions per Career Grade; Complexity & Volume Adjusted Patients per Career Grade]


Teaching Hospitals
Admissions per doctor and bed days per doctor are lowest in quintile 5, but outpatient and A&E attendances per doctor are lowest in quintile 4. Productivity across virtually every measure is highest in quintile 2, the lowest quintile in this group (since there are no teaching hospitals in quintile 1).

Table 10.5a Staffing and Workload Summary for Teaching Hospitals

                             Quintile
Data                            2        3        4        5   Grand Total
All Doctors wte             5,649    2,786    3,596    7,664      19,694
Career Grade wte            2,275    1,044    1,506    2,905       7,730
Junior Doctor wte           3,374    1,742    2,090    4,759      11,964
No. Trusts                      7        3        5       10          25

Table 10.5b Productivity Ratios for Teaching Hospitals

                                                Quintile
Data                                       2         3         4         5   Grand Total
Doctors per Trust                        807       929       719       766        788
Career Grade per Trust                   325       348       301       291        309
Junior Doctor per Trust                  482       581       418       476        479
Activity per Trust
  FCE                                143,171   138,935   115,931    84,396    113,705
  Admissions                         126,611   121,939    99,914    75,308    100,190
  Bed Days                           466,274   472,194   379,807   316,409    389,745
  A&E Attendances                    132,295   154,223    94,291   108,523    117,817
  Outpatient Attendances             498,009   461,421   328,839   368,047    407,800
  Volume Adjusted Patients           186,315   181,663   145,580   118,260    150,388
  Complexity & Volume Adj Patients   236,520   235,349   199,561   168,250    201,680
Activity per Doctor
  FCE                                    177       150       161       110        144
  Admissions                             157       131       139        98        127
  Bed Days                               578       509       528       413        495
  A&E Attendances                        164       166       131       142        150
  Outpatient Attendances                 617       497       457       480        518
  Volume Adjusted Patients               231       196       202       154        191
  Complexity & Volume Adj Patients       293       253       277       220        256
Activity per Career Grade
  Admissions                             390       350       332       259        324
  Complexity & Volume Adj Patients       728       676       663       579        652

Figure 10.3 Productivity Ratios for Teaching Hospitals (Base 1 at Quintile 2)
[Line chart across quintiles 2–5, scale 0.50–1.10, showing three series: Admissions per Doctor; Admissions per Career Grade; Complexity & Volume Adjusted Patients per Career Grade]


Specialist Hospitals
There are 20 specialist hospitals. There is little geographical variation in activity per doctor in terms of episodes, admissions or outpatients. There is no pattern at all in the volume of A&E attendances per doctor. In the later modelling section of this report, specialist hospitals are held separate and there is no attempt to standardise throughput or doctor input.

Table 10.6a Staffing and Workload Summary for Specialist Hospitals

                             Quintile
Data                            1        2        3        4        5   Grand Total
All Doctors wte                77      669      172      737    1,149       2,803
Career Grade wte               52      312       79      339      574       1,356
Junior Doctor wte              25      357       93      397      575       1,447
No. Trusts                      1        5        3        6        5          20

Table 10.6b Productivity Ratios for Specialist Hospitals

                                                Quintile
Data                                       1         2         3         4         5   Grand Total
Doctors per Trust                         77       134        57       123       230        140
Career Grade per Trust                    52        62        26        57       115         68
Junior Doctor per Trust                   25        71        31        66       115         72
Activity per Trust
  FCE                                 10,792    21,067     8,903    18,534    20,964     17,943
  Admissions                          10,502    20,059     8,714    18,055    20,179     17,308
  Bed Days                            55,161    51,562    32,087    66,334    70,772     58,055
  A&E Attendances                      2,811    23,630         0     9,174    10,007     11,302
  Outpatient Attendances              66,950    65,320    57,882    95,686   122,824     87,772
  Volume Adjusted Patients            16,074    28,047    13,378    25,799    29,675     24,981
  Complexity & Volume Adj Patients    38,178    39,221    26,942    37,815    47,460     38,965
Activity per Doctor
  FCE                                    140       157       156       151        91        128
  Admissions                             136       150       152       147        88        123
  Bed Days                               715       385       561       540       308        414
  A&E Attendances                         36       177         0        75        44         81
  Outpatient Attendances                 868       488     1,012       779       535        626
  Volume Adjusted Patients               208       210       234       210       129        178
  Complexity & Volume Adj Patients       495       293       471       308       207        278
Activity per Career Grade
  Admissions                             201       321       333       319       176        255
  Complexity & Volume Adj Patients       732       629     1,029       669       413        575

Figure 10.4 Productivity Ratios for Specialist Hospitals (Base 1 at Quintile 1)
[Line chart across quintiles 1–5, scale 0.50–1.90, showing three series: Admissions per Doctor; Admissions per Career Grade; Complexity & Volume Adjusted Patients per Career Grade]


Grade Mix
The grade mix for England is shown in Tables 10.7a and 10.7b and is consistent with the analysis of grades in our micro study sample (see Appendix 10.1):
• The proportion of staff who are consultants is reasonably constant across England, averaging 35%, with a range of 34% (Quintile 5) to 36% (Quintiles 1–4).
• There is a distinction in the proportion of staff employed as career grades, with 48% in quintile 1, around 45% in quintiles 2–4 and 41% in quintile 5. The reason for this variation lies in the proportion of staff who are employed as staff grades, ranging from 11% (quintile 1) to 6% (quintile 5).
• The proportion of staff at registrar grade is higher in the high MFF (mainly London) trusts, with 29% in quintile 5 compared to 16% in quintile 1. This effect is at work to some extent in acute hospitals (Table 10.8b) but is heightened by the concentration of specialist and teaching hospitals in Q5 (Tables 10.9b and 10.10b).
• There are 2,803 doctors across the 20 specialist trusts. This represents less than 5% of the 66,776 doctors but over 10% of the 173 trusts in our sample. Quintile comparison is weak since Q1 contains only one specialist trust. These trusts are distinguished by having no house officer grades and a high proportion of specialist registrar and consultant grades (Table 10.10b).

Table 10.7a wte Grade Mix in All Trusts

                                                 Quintile
Grades                                      1        2        3        4        5   Grand Total
Consultants                             4,530    5,161    4,339    4,542    5,186      23,758
Associate Specialist & Staff Grade      1,453    1,069      978    1,005      860       5,365
Hospital Practitioner & Clinical Asst     201      161      167      147      159         836
Career Grade Sub-Total                  6,184    6,391    5,484    5,695    6,206      29,959
Registrar Grades                        2,022    3,012    2,454    2,876    4,408      14,771
SHO                                     3,658    3,743    3,164    3,417    3,811      17,794
HO                                        856    1,005      788      769      834       4,252
Junior Doctor Sub-Total                 6,535    7,760    6,406    7,062    9,054      36,817
Total                                  12,719   14,151   11,890   12,757   15,259      66,776

Table 10.7b % Grade Mix in All Trusts

                                                 Quintile
Grades                                      1        2        3        4        5   Grand Total
Consultants                               36%      36%      36%      36%      34%        36%
Associate Specialist & Staff Grade        11%       8%       8%       8%       6%         8%
Hospital Practitioner & Clinical Asst      2%       1%       1%       1%       1%         1%
Career Grade Sub-Total                    49%      45%      46%      45%      41%        45%
Registrar Grades                          16%      21%      21%      23%      29%        22%
SHO                                       29%      26%      27%      27%      25%        27%
HO                                         7%       7%       7%       6%       5%         6%
Junior Doctor Sub-Total                   51%      55%      54%      55%      59%        55%
Total                                    100%     100%     100%     100%     100%       100%

Table 10.8a wte Grade Mix in Acute Trusts

                                                 Quintile
Grades                                      1        2        3        4        5   Grand Total
Consultants                             4,482    2,852    3,289    2,924    2,107      15,654
Associate Specialist & Staff Grade      1,449      844      927      819      545       4,584
Hospital Practitioner & Clinical Asst     201      108      145      106       75         636
Career Grade Sub-Total                  6,132    3,804    4,361    3,850    2,727      20,873
Registrar Grades                        1,998    1,205    1,417    1,523    1,308       7,450
SHO                                     3,657    2,261    2,499    2,496    1,916      12,828
HO                                        856      563      656      556      497       3,128
Junior Doctor Sub-Total                 6,510    4,030    4,571    4,575    3,720      23,406
Total                                  12,642    7,833    8,933    8,425    6,447      44,279


Table 10.8b % Grade Mix in Acute Trusts

                                                 Quintile
Grades                                      1        2        3        4        5   Grand Total
Consultants                               35%      36%      37%      35%      33%        35%
Associate Specialist & Staff Grade        11%      11%      10%      10%       8%        10%
Hospital Practitioner & Clinical Asst      2%       1%       2%       1%       1%         1%
Career Grade Sub-Total                    49%      49%      49%      46%      42%        47%
Registrar Grades                          16%      15%      16%      18%      20%        17%
SHO                                       29%      29%      28%      30%      30%        29%
HO                                         7%       7%       7%       7%       8%         7%
Junior Doctor Sub-Total                   51%      51%      51%      54%      58%        53%
Total                                    100%     100%     100%     100%     100%       100%

Table 10.9a wte Grade Mix in Teaching Hospitals

                                                 Quintile
Grades                                      2        3        4        5   Grand Total
Consultants                             2,026      978    1,309    2,626       6,939
Associate Specialist & Staff Grade        200       44      163      199         606
Hospital Practitioner & Clinical Asst      49       22       34       81         186
Career Grade Sub-Total                  2,275    1,044    1,506    2,905       7,730
Registrar Grades                        1,565      960    1,084    2,679       6,288
SHO                                     1,369      650      793    1,742       4,554
HO                                        440      132      213      338       1,122
Junior Doctor Sub-Total                 3,374    1,742    2,090    4,759      11,964
Total                                   5,649    2,786    3,596    7,664      19,694

Table 10.9b % Grade Mix in Teaching Hospitals

                                                 Quintile
Grades                                      2        3        4        5   Grand Total
Consultants                               36%      35%      36%      34%        35%
Associate Specialist & Staff Grade         4%       2%       5%       3%         3%
Hospital Practitioner & Clinical Asst      1%       1%       1%       1%         1%
Career Grade Sub-Total                    40%      37%      42%      38%        39%
Registrar Grades                          28%      34%      30%      35%        32%
SHO                                       24%      23%      22%      23%        23%
HO                                         8%       5%       6%       4%         6%
Junior Doctor Sub-Total                   60%      63%      58%      62%        61%
Total                                    100%     100%     100%     100%       100%

Table 10.10a wte Grade Mix in Specialist Hospitals

                                                 Quintile
Grades                                      1        2        3        4        5   Grand Total
Consultants                                48      283       72      309      454       1,166
Associate Specialist & Staff Grade          4       25        7       23      117         176
Hospital Practitioner & Clinical Asst       0        4        -        7        3          15
Career Grade Sub-Total                     52      312       79      339      574       1,356
Registrar Grades                           24      242       78      269      421       1,034
SHO                                         1      113       16      128      153         411
HO                                          -        2        -        -        -           2
Junior Doctor Sub-Total                    25      357       93      397      575       1,447
Total                                      77      669      172      737    1,149       2,803

Table 10.10b % Grade Mix in Specialist Hospitals

                                                 Quintile
Grades                                      1        2        3        4        5   Grand Total
Consultants                               62%      42%      42%      42%      40%        42%
Associate Specialist & Staff Grade         5%       4%       4%       3%      10%         6%
Hospital Practitioner & Clinical Asst      0%       1%       0%       1%       0%         1%
Career Grade Sub-Total                    68%      47%      46%      46%      50%        48%
Registrar Grades                          31%      36%      45%      37%      37%        37%
SHO                                        1%      17%       9%      17%      13%        15%
HO                                         0%       0%       0%       0%       0%         0%
Junior Doctor Sub-Total                   32%      53%      54%      54%      50%        52%
Total                                    100%     100%     100%     100%     100%       100%


METHODS 1: THE MODEL – CURRENT POSITION
The profile of medical workforce and workload in the previous section is drawn into a model that attempts to standardise comparisons between trusts. It is apparent that the balance of inpatient, outpatient and A&E attendances varies across the piece (e.g. Table 10.2). We know also (via the complexity index, see Appendix 1) that casemix complexity among trusts varies. The modelling approach reduces all these measures to a single casemix and volume adjusted workload measure (addressing one of the directives that emerged from the micro study Reference Panel).

Taking Account of Other Ambulatory Activity – The Volume Adjustment
Episodes are the common currency of activity. However, in terms of patient flows, outpatient and A&E attendances are measures of ambulatory care that also demand medical staffing time. We have incorporated all elements of activity into the model and weighted them on the basis of HRG national average costs according to the following method:

Outpatient Adjustment
a) Derive a standard average episode cost for the sample (by multiplying the national unit costs for each trust's casemix by trust activity, separating elective, non-elective and day cases). (The resulting average is £1,380 per episode.)
b) Derive specialty-weighted outpatient attendance costs for each trust based on national average costs by specialty.
c) Express each trust's specialty-weighted outpatient cost (b) as a percentage of the average episode cost (a). The resulting percentage varies between 6% and 12% for each trust, depending on the specialty mix, with a sample average of 7%.
d) Apply the percentage to each trust's outpatient volume to produce an outpatient activity adjustment.

A&E Adjustment
The national average cost of £82 per A&E attendance (Reference Costs 2004/5) is expressed as a percentage of the sample average HRG-adjusted episode cost, yielding 7%. This is applied to the A&E attendance volume for each trust to produce an A&E activity adjustment.

Volume Adjusted Workload
The outpatient and A&E adjustments are added to the FCE activity to produce a volume adjusted workload measure.
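The adjustment can be illustrated with a short sketch for a single hypothetical trust. The £1,380 standard episode cost, the £82 A&E reference cost and the 6%–12% outpatient weight range come from the method above; the trust-level inputs are invented for illustration.

```python
# Illustrative volume adjustment for one hypothetical trust (not a study result).
standard_episode_cost = 1380.0          # sample average HRG-adjusted episode cost (£)

# Steps (b)-(c): specialty-weighted outpatient cost as a share of a standard episode
trust_outpatient_cost = 110.0           # hypothetical specialty-weighted OP attendance cost (£)
outpatient_weight = trust_outpatient_cost / standard_episode_cost   # ~0.08, within the 6%-12% range

# A&E adjustment: national average £82 per attendance relative to a standard episode
ae_weight = 82.0 / standard_episode_cost   # ~0.06 on these rounded inputs (the report quotes 7%)

fce = 85_000
outpatient_attendances = 240_000
ae_attendances = 90_000

volume_adjusted_workload = (fce
                            + outpatient_weight * outpatient_attendances
                            + ae_weight * ae_attendances)
print(round(volume_adjusted_workload))  # FCE plus weighted ambulatory activity
```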

Taking Account of Casemix – The Complexity Adjustment
The casemix of hospitals varies. It is possible to take account of this by use of a 'complexity index' which applies the national average HRG costs to each trust's activity at HRG level. The product is divided by the trust's activity (× 1,000) to give a resource-based weighting factor. The underlying assumption is that relative resource usage is a proxy for complexity. The complexity index is purely a casemix weighting rather than an efficiency measure (since it is standardised according to national rather than local costs). The complexity index for each trust is applied to the volume adjusted workload to derive a complexity and volume adjusted workload.
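On this reading, the index is the trust's nationally costed resource use per unit of activity, expressed in £000s. A minimal sketch follows; the HRG codes, costs and activity are invented for illustration only.

```python
# Illustrative complexity index and CVAP calculation for one hypothetical trust.
national_avg_cost = {"H01": 3500.0, "H02": 1000.0, "H04": 2400.0}   # £ per episode by HRG (invented)
trust_activity = {"H01": 400, "H02": 5000, "H04": 600}              # episodes by HRG (invented)

expected_cost = sum(national_avg_cost[hrg] * n for hrg, n in trust_activity.items())
total_activity = sum(trust_activity.values())
complexity_index = expected_cost / (total_activity * 1000.0)   # nationally costed £000 per unit of activity

volume_adjusted_workload = 104_000      # e.g. the output of the volume adjustment above
cvap = complexity_index * volume_adjusted_workload
print(round(complexity_index, 3), round(cvap))
```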


The Model Structure
Four tables describe the model, drawing together geography (in terms of quintiles) and hospital type:
• Workforce (Table 10.11)
• Grade relationships (Table 10.12)
• Workload (Table 10.13)
• Doctor:workload ratios (Table 10.14)
Composition of the workforce and workload has been described earlier in the chapter. Grade relationships are made explicit as they have a bearing on any attempt to model scope for change (addressed later in the chapter). This section focuses on application of the complexity and volume adjustment to workload, which has an impact upon doctor:workload ratios.

Workload (Table 10.13)
• There is a 31% difference between quintiles 5 and 1 in workload volume expressed as finished consultant episodes (FCE).
• The volume adjustment (accounting for A&E and outpatients) narrows this gap to 24%.
• The complexity adjustment narrows the gap further to 18%.
• Both volume and complexity adjustments are expressed as indices which rise throughout the MFF range. The composite adjustment (complexity and volume) likewise rises throughout the quintiles, from 1.49 in Quintile 1 to 1.76 in Quintile 5.

Doctor:Workload Ratios (Table 10.14)
• There is a 73% difference in productivity between quintiles 5 and 1 in terms of total doctors per 1,000 FCE (workload measure A), part of which can be explained by junior doctors since the career grade variance is lower, at 45%.
• Application of the complexity and volume adjustments reduces these variances to 46% between quintiles 5 and 1 across all grades and 22% across career grades.
• It is notable that the complexity and volume adjustment reduces the gap in career grade productivity in acute hospitals to only 8%, with an average of 1.32 career grades per 1,000 Adjusted Workload C in quintile 1 and 1.43 in quintile 5.


Table 10.11 Medical Model – Current Position: Workforce in 173 Trusts at September 2004 (based on 2004 Census)

                             Workforce wte
Type        Quintile      Consultant   All Career   Registrar   All Junior   Total Doctor
                                 WTE    Grade WTE         WTE   Doctor WTE            wte
TOTAL       1                  4,530        6,184       2,022        6,535         12,719
            2                  5,161        6,391       3,012        7,760         14,151
            3                  4,339        5,484       2,454        6,406         11,890
            4                  4,542        5,695       2,876        7,062         12,757
            5                  5,186        6,206       4,408        9,054         15,259
            All Quintiles     23,758       29,959      14,771       36,817         66,776
ACUTE       1                  4,482        6,132       1,998        6,510         12,642
            2                  2,852        3,804       1,205        4,030          7,833
            3                  3,289        4,361       1,417        4,571          8,933
            4                  2,924        3,850       1,523        4,575          8,425
            5                  2,107        2,727       1,308        3,720          6,447
            All Quintiles     15,654       20,873       7,450       23,406         44,279
TEACHING    2                  2,026        2,275       1,565        3,374          5,649
            3                    978        1,044         960        1,742          2,786
            4                  1,309        1,506       1,084        2,090          3,596
            5                  2,626        2,905       2,679        4,759          7,664
            All Quintiles      6,939        7,730       6,288       11,964         19,694
SPECIALIST  1                     48           52          24           25             77
            2                    283          312         242          357            669
            3                     72           79          78           93            172
            4                    309          339         269          397            737
            5                    454          574         421          575          1,149
            All Quintiles      1,166        1,356       1,034        1,447          2,803

Table 10.12 Medical Model – Current Position – Grade Relationships

Type        Quintile              Consultant:   Consultant:      Career:
                                    Registrar    All Junior       Junior
TOTAL       1                            2.24          0.69         0.95
            2                            1.71          0.67         0.82
            3                            1.77          0.68         0.86
            4                            1.58          0.64         0.81
            5                            1.18          0.57         0.69
            All Quintiles                1.61          0.65         0.81
            % Variance (5-1)             -47%          -17%         -28%
ACUTE       1                            2.24          0.69         0.94
            2                            2.37          0.71         0.94
            3                            2.32          0.72         0.95
            4                            1.92          0.64         0.84
            5                            1.61          0.57         0.73
            All Quintiles                2.10          0.67         0.89
            % Variance (5-1)             -28%          -18%         -22%
TEACHING    2                            1.29          0.60         0.67
            3                            1.02          0.56         0.60
            4                            1.21          0.63         0.72
            5                            0.98          0.55         0.61
            All Quintiles                1.10          0.58         0.65
            % Variance (5-2)             -24%           -8%          -9%
SPECIALIST  1                            1.99          1.91         2.08
            2                            1.17          0.79         0.87
            3                            0.93          0.77         0.84
            4                            1.15          0.78         0.85
            5                            1.08          0.79         1.00
            All Quintiles                1.13          0.81         0.94
            % Variance (5-1)             -46%          -59%         -52%

Table 10.13 Medical Model – Current Position – Workload

                                          Workload                                       Workload Adjustment Indices
Type        Quintile           FCE (A)   Vol Adj      Complexity &    Volume Index   Complexity    Vol & Complexity
                                         Workload (B)  Vol Adj (C)           (B/A)   Index (C/B)        Index (C/A)
TOTAL       1                3,125,499     3,920,364     4,667,461            1.25         1.191              1.493
            2                3,023,257     3,866,775     4,681,543            1.28         1.211              1.549
            3                2,576,139     3,299,668     4,085,485            1.28         1.238              1.586
            4                2,380,513     3,108,574     3,952,526            1.31         1.271              1.660
            5                2,169,041     2,970,733     3,824,465            1.37         1.287              1.763
            All Quintiles   13,274,449    17,166,112    21,211,481            1.29         1.236              1.598
            % Var (5-1)           -31%          -24%          -18%             +9%           +8%               +18%
ACUTE       1                3,114,707     3,904,290     4,629,283            1.25         1.186              1.486
            2                1,915,725     2,422,335     2,829,798            1.26         1.168              1.477
            3                2,132,624     2,714,542     3,298,611            1.27         1.215              1.547
            4                1,689,657     2,225,879     2,727,830            1.32         1.226              1.614
            5                1,220,262     1,639,756     1,904,667            1.34         1.162              1.561
            All Quintiles   10,072,975    12,906,802    15,390,190            1.28         1.192              1.528
            % Var (5-1)           -61%          -58%          -59%             +7%           -2%                +5%
TEACHING    2                1,002,198     1,304,206     1,655,640            1.30         1.269              1.652
            3                  416,805       544,990       706,048            1.31         1.296              1.694
            4                  579,653       727,898       997,805            1.26         1.371              1.721
            5                  843,961     1,182,602     1,682,498            1.40         1.423              1.994
            All Quintiles    2,842,617     3,759,695     5,041,992            1.32         1.341              1.774
            % Var (5-2)           -16%           -9%           +2%             +8%          +12%               +21%
SPECIALIST  1                   10,792        16,074        38,178            1.49         2.375              3.538
            2                  105,334       140,234       196,105            1.33         1.398              1.862
            3                   26,710        40,135        80,826            1.50         2.014              3.026
            4                  111,203       154,796       226,891            1.39         1.466              2.040
            5                  104,818       148,374       237,300            1.42         1.599              2.264
            All Quintiles      358,857       499,614       779,300            1.39         1.560              2.172
            % Var (5-1)          +871%         +823%         +522%             -5%          -33%               -36%

Table 10.14 Medical Model – Current Position – Doctor:Workload Ratios

                                  Career Grade  Total Doctor  Career Grade  Total Doctor  Career Grade  Total Doctor
                                      per 1000      per 1000      per 1000      per 1000      per 1000      per 1000
Type        Quintile                   FCE (A)       FCE (A)   Vol Adj (B)   Vol Adj (B)  C&V Adj (C)   C&V Adj (C)
TOTAL       1                             1.98          4.07          1.58          3.24          1.32          2.73
            2                             2.11          4.68          1.65          3.66          1.37          3.02
            3                             2.13          4.62          1.66          3.60          1.34          2.91
            4                             2.39          5.36          1.83          4.10          1.44          3.23
            5                             2.86          7.04          2.09          5.14          1.62          3.99
            All Quintiles                 2.26          5.03          1.75          3.89          1.41          3.15
            % Var (5-1)                   +45%          +73%          +32%          +58%          +22%          +46%
ACUTE       1                             1.97          4.06          1.57          3.24          1.32          2.73
            2                             1.99          4.09          1.57          3.23          1.34          2.77
            3                             2.05          4.19          1.61          3.29          1.32          2.71
            4                             2.28          4.99          1.73          3.78          1.41          3.09
            5                             2.23          5.28          1.66          3.93          1.43          3.38
            All Quintiles                 2.07          4.40          1.62          3.43          1.36          2.88
            % Var (5-1)                   +14%          +30%           +6%          +21%           +8%          +24%
TEACHING    2                             2.27          5.64          1.74          4.33          1.37          3.41
            3                             2.50          6.68          1.92          5.11          1.48          3.95
            4                             2.60          6.20          2.07          4.94          1.51          3.60
            5                             3.44          9.08          2.46          6.48          1.73          4.56
            All Quintiles                 2.72          6.93          2.06          5.24          1.53          3.91
            % Var (5-2)                   +52%          +61%          +41%          +50%          +26%          +34%
SPECIALIST  1                             4.83          7.15          3.24          4.80          1.37          2.02
            2                             2.96          6.35          2.22          4.77          1.59          3.41
            3                             2.94          6.42          1.96          4.28          0.97          2.12
            4                             3.05          6.62          2.19          4.76          1.50          3.25
            5                             5.48         10.96          3.87          7.74          2.42          4.84
            All Quintiles                 3.78          7.81          2.71          5.61          1.74          3.60
            % Var (5-1)                   +13%          +53%          +19%          +61%          +77%         +140%

METHODS 2: AVOIDABLE COSTS – SETTING OUT THE ASSUMPTIONS
The aim of the study is to explore spatial differences in staff costs and then to consider how much of this difference could be regarded as avoidable or unavoidable. The approach that has been used elsewhere (e.g. the analysis of nursing costs based on HCC data) has been to consider (a) price variances and then (b) volume variances. The same method is explored here.

Price Variance
We do not have evidence that doctors' pay increases in line with the MFF. Two sets of data are available, based on the micro study, neither of which suggests a positive correlation between pay and MFF at the level of grade:

• The general ledger analysis (Chapter 4) indicated that there was a weak positive geographical association between budgeted pay and the MFF due to grade mix. Within grades there was little apparent difference in budgeted pay per wte but, in the sample of 14 trusts, SpR grades were more expensive than staff grades. The weak positive correlation between pay and MFF arose because low MFF trusts employed more staff grades and high MFF trusts used higher proportions of SpR grades. This grade mix finding has been replicated in the national data set due to (a) the higher concentration of teaching hospitals in high MFF quintiles and (b) higher use of SpRs in acute trusts in the higher quintile MFF range.

• The payroll analysis (Chapter 5) has produced information relating to the actual (rather than budgeted) pay to doctors, excluding agency locums. It points to a negative relationship between MFF ranking and pay, both within grade and as a result of grade mix (Appendix 10.2).

We can use these contrasting results to draw some conclusions. The first is that medical pay does not behave in the same way as pay for other staff groups; nursing, professional & technical, and administrative & clerical staff display a clear positive association between MFF and cost per wte in both the general ledger and payroll analyses. The second is that there is a lack of clarity in the correlation between MFF and doctors' pay, which means that we cannot impute any price variance to this group. (This is a substantive conclusion because, in the nursing analysis based on HCC data, the price variance was estimated at +18.3%.)

Volume Variance
The earlier section pointed to an apparent productivity gap between trusts in high MFF versus low MFF areas. The scope for closing this gap (finding avoidable cost variations) is explored through application of an audit approach by considering volume variance at three levels:
A. Baseline position – current variance
B. Peer group average – assuming that hospitals operate at or below the baseline average within their acute and teaching hospital type peer group
C. Peer group efficiency – assuming that hospitals operate at the level of the most efficient quintile within their acute and teaching hospital type peer group
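The three levels can be expressed as simple caps on a doctor:workload ratio. The sketch below uses the acute career-grade ratios from Table 10.14 purely to illustrate the mechanics; the study's own peer-group benchmarks may be derived differently (for example, weighted by workload), so the output should not be read as a result.

```python
# Hedged sketch of the three benchmarking levels (A, B, C) described above.
# Career grade wte per 1,000 complexity & volume adjusted patients, acute peer group, by quintile
acute_ratio = {1: 1.32, 2: 1.34, 3: 1.32, 4: 1.41, 5: 1.43}    # cf. Table 10.14

baseline = acute_ratio                                          # A: current position
peer_average = sum(acute_ratio.values()) / len(acute_ratio)     # B: cap at the peer-group average (simple mean here)
best_quintile = min(acute_ratio.values())                       # C: most efficient quintile in the peer group

scenario_b = {q: min(r, peer_average) for q, r in acute_ratio.items()}
scenario_c = {q: min(r, best_quintile) for q, r in acute_ratio.items()}
print(round(peer_average, 3), scenario_b, scenario_c)
```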


Before modelling the impact of these assumptions, it is necessary to consider whether flexibility in medical staffing numbers is constrained in the NHS environment which is external to trusts. Constraints External to Trusts Constraints operate in the area of (i) training, (ii) professional drivers for growth and (iii) NHS drivers for growth. Their relevance to this analysis is the extent to which they have a geographical bias (in contrast to nursing and other professional standards and drivers which are pan-NHS). (i) Doctors in Training •

There is a geographical bias towards London in the location of medical education since 28% of medical student intakes enter London schools, with clinical placements in London hospitals (see Appendix 10.3). o This is consistent with the position that London comprises 18% of hospital trusts, treats 15% inpatient episodes, but employs 23% of junior doctors in England (Table 10.2)



There is limited local flexibility in the number of junior doctor posts since they are planned and allocated centrally through regional deanery offices, controlled through quota methods. Junior posts are part-funded through the deanery.



• The growth in junior doctors in recent years associated with junior doctors' hours (Calman, 1991) has stimulated growth in consultant numbers since consultants have a supervisory role (Calman, 1993). There is a relationship between consultant and junior doctor numbers which acts as a barrier to optimising service productivity in terms of patient workload per consultant.



• The implication of these constraints is that any barrier to reducing junior medical staffing also acts as a brake on reducing consultant medical staffing. Judgements on unavoidable costs are therefore contingent on:
  o Unavoidable costs associated with junior doctor numbers;
  o Unavoidable costs associated with the minimum required ratio of consultants per junior doctor. These minimum ratios are not explicitly articulated across all specialties but, as they are linked to Royal College recognition of training placements for junior doctors, form an active barrier to change at hospital level;
  o The historic impact of the Calman reports throughout the 1990s, which has been to drive up demand for junior doctors and, in response to this, to drive up demand for consultants.

(ii) Professional Drivers – Growth in Demand and Supply of the Consultant Workforce
There have been other pressures to increase the consultant workforce in recent years:

• The Department of Health (then the NHS Executive), in line with professional demand-led targets, planned significant growth in the consultant grade over 2001–2005, moving from a model in which care was consultant-led towards one which was increasingly consultant-provided. The NHSE published a set of supply targets for 2005, based on full uptake of existing training posts (SWAG, 2000).
• In July 1999 the Joint Consultants Committee in England of the Royal Colleges of Physicians and Surgeons published an influential consultation document about the future of acute services (JCC, 1999). It summarised professional guidelines and policy to date, specifying (a) minimum populations that acute hospitals should serve (stimulating mergers of small hospitals and the development of clinical networks among hospitals serving low-density urban-rural populations) and (b) recommendations about the consultant workforce. These included statements that:
  o No specialty or subspecialty should be provided by a single-handed consultant (p16)
  o The acute on-call rota in the major admitting specialties should be no more than one in five (p16)
  o A minimum of two physicians in each core specialty of general medicine (i.e. cardiology and coronary care, gastroenterology, respiratory medicine, diabetes and endocrinology, and care of the elderly) are required to provide professional support, constant presence of specialist consultant cover throughout the year, regular support to junior doctors, and training of junior doctors (p18)
  o There should be a minimum of two surgeons with major interests in each of the general surgical subspecialties: breast, coloproctology, upper gastrointestinal and hepatobiliary surgery, and vascular surgery (p18)
  o For smaller and isolated district general hospitals, with service populations of less than 200,000, "the service should benefit from being largely consultant provided" (p21)

Each Royal College or specialist association has subsequently produced further guidelines on the number of doctors required to provide a service, leading to growth in demand for, e.g., general physicians (Royal College of Physicians, 2002a, 2002b, 2004), accident and emergency specialists (British Association for Emergency Medicine, 2005) and psychiatrists (Royal College of Psychiatrists, 2006).
(iii) Other Drivers – Growth in the Medical Workforce
Junior Doctors. The New Deal reduced junior doctors' hours to 84, then 72, then 56. Hospitals are putting in place structures to meet the European Working Time Directive, which brings weekly hours down from 56 to a maximum of 48.
Impact of Consultants' Contracts. The system of 3.5-hour sessions, building up to 10 sessions for a maximum part-timer or 11 sessions for a full-timer, is being replaced by the new consultants' contract, built around a core of ten 4-hour programmed activities (PAs). The contract was agreed between the BMA and the Department of Health in 2003 and since then it has been up to each trust to agree the position with consultants. Local studies (confirmed by NAO, 2007, p20) suggest that an average of 12 PAs per individual is not untypical, stimulating demand for consultants in these cases by a further 2 PAs per wte, or +20%.

Summarising the Modelling Assumptions
The planning conditions explained above are summarised in Figure 10.5 below. Four scenarios emerge from the modelling process.

Figure 10.5 Assumptions Driving the Medical Model
Scope for change equals avoidable cost, taking into account:
• Price variance
• Volume variance
• Environmental constraints
Assume:
• Price variance = 0
• Volume variance: based on doctor:workload ratios
  A. Status quo
  B. Peer group (acute and teaching) quintiles which are above average (in A) move to the peer group average
  C. Peer groups move to the efficiency level of the lowest quintile (usually Q1)
  C (constrained). Model C but with the constraints below
Constraints:
• Assume that in the short term junior doctor costs are unavoidable
• Assume minimum ratios of:
  o Consultant : registrar = 1.0 (i.e. cannot have more registrars than consultants)
  o Consultant : junior = 0.5
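To make the mechanics of scenario C (constrained) concrete, the sketch below works through the calculation for a single peer-group quintile. It is a minimal illustration under stated assumptions rather than the model actually run in the study: the function name, variable names and the example figures are invented for the illustration; only the constraint values (junior numbers fixed, consultant:registrar at least 1.0, consultant:junior at least 0.5) are taken from Figure 10.5.

```python
# Minimal sketch of scenario C (constrained) for one peer-group quintile.
# Names and the sample figures are illustrative assumptions; the constraints
# mirror Figure 10.5.

def constrained_career_grades(career_wte, junior_wte, registrar_wte,
                              workload, benchmark_ratio):
    """Return the constrained career-grade WTE for a quintile.

    benchmark_ratio: career-grade WTE per 1,000 adjusted workload in the
    most efficient quintile of the peer group (the scenario C target).
    """
    # Scenario C target: staff at the efficient quintile's career:workload ratio.
    target_career = benchmark_ratio * workload / 1000.0

    # Constraint 1: junior doctor numbers (and costs) are fixed in the short term.
    juniors = junior_wte

    # Constraint 2: consultants cannot fall below registrar numbers
    # (consultant:registrar >= 1.0) or below half of all juniors
    # (consultant:junior >= 0.5); the floor therefore limits career grades too.
    min_consultants = max(registrar_wte, 0.5 * juniors)

    # Career grades are never cut below the consultant floor, and the scenario
    # only removes slack relative to the baseline (it never adds staff).
    return min(career_wte, max(target_career, min_consultants))


# Illustrative quintile (figures invented, not taken from the report tables):
career = 6_200.0       # all career-grade doctors, wte
juniors = 9_000.0      # all doctors in training, wte
registrars = 4_400.0   # specialist registrars, wte
workload = 3_800_000   # complexity- and volume-adjusted workload units
efficient_ratio = 1.32 # career grades per 1,000 adjusted workload in Q1

print(constrained_career_grades(career, juniors, registrars,
                                workload, efficient_ratio))
```

Run across all quintiles and peer groups, this kind of constrained adjustment is what produces the small net reduction reported in the results below.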

RESULTS
Table 10.15 summarises the changes emerging from the detailed scenarios. The net feasible reduction of 1% of the medical workforce, or 3% of career grades (located in non-consultant career grades), is very small in the context of an apparent 73% productivity gap between quintiles (shown in Table 10.16). The implication is that environmental factors cause stickiness in labour-input reductions within the medical workforce. While 1% of the medical workforce is apparently small, its impact is magnified by being concentrated in a specific area. This potential avoidable cost (based on comparative workload per doctor) is located in career grades and, more particularly, in the non-consultant career grade, since we have found that training and supervision requirements restrict any scope for reducing the overall consultant workforce. There are 6,201 non-consultant career grade doctors (mainly staff grades), so the projected avoidable volume of 972 equates to 16%. As the avoidable component of cost is mainly located in quintiles 4 and 5, where there are 2,173 non-consultant career grades in the workforce, the impact of potential reductions in staff grades becomes stronger still.

Table 10.15 Medical Model Results
| Model | Consultant wte | All Career Grades wte | Junior Doctors wte | Total Doctors wte | Productivity Gap Between Highest and Lowest (Based on FCE) | Productivity Gap Between Highest and Lowest (Adjusted Workload) | Average wte: Adjusted Workload |
| A. Current | 23,758 | 29,959 | 36,817 | 66,776 | 73% | 46% | 3.15 |
| B. Peer Group Average | 23,707 | 29,352 | 36,817 | 66,171 | 68% | 42% | 3.12 |
| C. Efficient (Unconstrained) | 23,427 | 28,614 | 36,817 | 65,433 | 64% | 39% | 3.08 |
| C. Constrained | 23,801 | 28,987 | 36,817 | 65,807 | 68% | 42% | 3.10 |
| wte Reduction, A – C Constrained | +43 | -972 | +0 | -969 | | | |
| % Reduction, A – C Constrained | +0% | -3% | +0% | -1% | | | |

SUPPLY SIDE
The average 3 month vacancy rate for medical staff in England is less than 3% and lower in quintile 5 than quintile 1. This displays a different pattern to nursing staff, where vacancies increase in high wage (high MFF) areas. (Tables 10.16 – 10.18 summarise vacancy rates and their relationship with trust quintile, MFF and hospital type.) Doctors are not responding to labour market signals in the conventional way. This is consistent with the micro study finding that there is no spatial pattern in doctors' wage rates. We observe little correlation between vacancy rates and the staff MFF, and any that does exist is negative (r = -0.1). There is a low but negative correlation between vacancy rates and the presence of teaching/specialisation (r = -0.22). In other words, vacancy rates are lower in specialist/teaching establishments, which we know are more numerous in high MFF areas such as London.

Table 10.16 Average Medical 3 Month Vacancy Rate 31/03/2005 by Quintile
| Quintile | Average Vacancy Rate |
| 1 | 3.5% |
| 2 | 2.6% |
| 3 | 2.3% |
| 4 | 3.0% |
| 5 | 2.2% |
| England Average Rate | 2.7% |

Table 10.17 Average Medical 3 Month Vacancy Rate 31/03/2005 by Hospital Type
| Type | London | Outside London | Grand Total |
| Acute | 4.2% | 2.9% | 3.1% |
| Specialist | 0.6% | 1.9% | 1.5% |
| Teaching | 1.5% | 1.5% | 1.5% |
| Grand Total | 2.9% | 2.6% | 2.7% |

Table 10.18 Correlation Between Vacancies, Staff MFF and Hospital Type
| | Medical 3 month vacancy rate | Staff MFF | Type (0 = acute, 1 = teaching/specialist) |
| Medical 3 month vacancy rate | 1 | | |
| Staff MFF | -0.10 | 1 | |
| Type (0 = acute, 1 = teaching/specialist) | -0.22 | +0.31 | 1 |
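The correlations in Table 10.18 are simple Pearson coefficients across trusts. The short sketch below shows the form of that calculation; it is illustrative only, and the three arrays are invented placeholders rather than the trust-level data used in the study.

```python
# Illustrative correlation matrix in the style of Table 10.18.
# The arrays are invented placeholders, not the study data.
import numpy as np

vacancy_rate = np.array([0.035, 0.026, 0.023, 0.030, 0.022])  # 3-month medical vacancy rate
staff_mff    = np.array([0.91, 0.94, 0.98, 1.04, 1.18])       # staff MFF
trust_type   = np.array([0, 0, 1, 0, 1])                      # 0 = acute, 1 = teaching/specialist

# Pearson correlation matrix: rows/columns are vacancy, MFF, type.
print(np.corrcoef([vacancy_rate, staff_mff, trust_type]).round(2))
```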

There are two labour market reasons why we might not expect the MFF to reflect variation in medical labour costs. First, the theory of compensating differentials states that trainee doctors will be willing to take lower wages in return for better or more prestigious training associated with the top London hospitals. Junior doctor wages, then, may be lower (relative to the cost of living) in London because of this factor. Second, consultants undertake private work, the opportunities for which are greatest in London and the south east (Morris et al, 2007). Those located in this area would be less reliant upon their NHS earnings, so trusts would benefit from the fact that many consultants have high earnings outside the NHS. This also imposes a potential cost on hospitals that is not explored in this study; namely, that hours recorded as worked in the NHS by individuals with large private practices may be somewhat higher than the hours actually worked. Vacancies are low across England, implying few recruitment problems in the medical profession. They are lowest in high MFF areas due to the preponderance of specialist and teaching hospitals. In line with low vacancies, there is little evidence of wastage out of the NHS. Doctors are less likely to change occupation than other staff groups because of the high investment they have made in training and education (job-specific capital).

DISCUSSION OF MEDICAL STAFFING
Spatial Variation in Costs
We have drawn together evidence from other data sets (general ledger and payroll) which indicates that there is no clear geographical variation in the cost of doctors by grade. This finding is in contrast to other staff groups (nursing, ST&T, administrative and clerical) which appear to show clear spatial differences in wage costs, where the cost per wte rises through the MFF range.
Avoidable and Unavoidable Costs
An initial analysis pointed to a wide productivity gap between high and low MFF trusts (73% on a basic doctor:FCE measure), suggesting that the avoidable cost component in medical staffing was potentially large. The gap was narrowed to 46%, however, by refinements to workload measures to take account of variations in outpatient and A&E activity. The size of any avoidable cost difference was further diminished by structural factors which limit the amount of discretion available to trusts. The assumptions applied in the model developed here suggest that, ceteris paribus, avoidable costs are limited to 1% of the medical workforce, equivalent to 3% of career grades, retaining a 42% productivity gap between high and low MFF trusts (on a quintile basis).

The net feasible reduction of 1% of the medical workforce, or 3% of career grades, is small in the national context of a 46% productivity gap between trusts (but is magnified in the localised context of London hospitals and non-consultant career grades). The implication is that environmental factors cause stickiness in labour-input reductions within the medical workforce. These factors relate to the training and distribution of junior doctors and to professional standards applied to the role of consultants.
Feasibility
The value of this SCA review of medical staffing lies in identifying the constraints which are at work in the medical workforce. It highlights the role of professional regulation and training institutions upon labour market structures (e.g. Shepsle and Weingast, 1981; Eggertsson, 1990). We conclude that doctors are effectively functioning in a different labour market and that there is a case for breaking the link with the GLM-based MFF.

Implications for Policy
The labour market case for excluding medical staff from the MFF contends that doctors do not respond to private sector wage signals in the same way as nursing and other staff, a position reinforced by compensating differentials. The data that we have collected show that doctors' wages do not display a geographical pay gradient and there is no evidence of shortages (as measured by vacancies) in London and the south east. There is a significant difference in medical productivity, which has been analysed as a non-labour market feature, related to the number of junior doctors and a professional requirement to maintain ratios of consultants per trainee doctor at a service level.
It follows that, if doctors were to be excluded from the general labour market-based MFF, then they would need to be funded through a Specific Cost Approach. This is only an option, rather than a necessity, since the distribution of medical staffing is broadly consistent with MFF geographic patterns, even though not directly caused by labour market factors. We have not spelled out what a SCA for doctors would look like. Further work would be needed to devise a SCA solution for medical staff that fully took into account interaction with existing funding streams, i.e. SIFT and MADEL.
We have identified productivity deficits in high MFF areas but, by analysing the structural constraints in the balance between junior doctors and consultants (where more junior staffing demands more consultants), this has been described as largely unavoidable (at a spatial rather than trust level). The productivity differences have not been described here as an equity problem. Any inequity is further upstream, stemming from the uneven distribution of medical students and then junior doctor placements.
Within the NHS the main criticisms of the MFF are (i) scepticism over the validity of a general labour market index, (ii) inequity caused by the scale of redistribution due to a wide minimum-maximum range of 0.85 – 1.28 (a 51% difference between the two) and (iii) inequity caused by cliff edges between neighbouring trusts. Exclusion of medical staff from the MFF would address the first two of these problems, justified on the conceptual grounds that it does not belong in a general labour market index, and it would limit the scale of redistribution. (The effect could be more apparent than real, though, since the SCA might emulate the MFF pattern.) The third problem, cliff edges, has nothing to do with medical staff (or staff generally) and would need to be addressed by separate adjustments to the MFF in the future through improved 'smoothing' between geographical areas.
A reasonable policy response would be to accept the principle of excluding medical staff from the staff MFF, but to effect the exclusion on the basis of a carefully worked SCA alternative, which would require some time to plan. In the meantime, cliff edges could be smoothed more rapidly. There would be some merit in separating the two stages, allowing the impact of each to be visible to the NHS. The advantages that some areas would receive through smoothing of cliff edge effects would be masked by reductions in MFF income if medical staffing were removed from the MFF simultaneously.
In designing a SCA for medical staff it would be necessary to future-proof the incentive structure, e.g. to take account of changes in the way consultants will organise themselves. The PbR funding stream is intended to allow funding to follow patient activity. In the future this will give medical staff latitude to relocate from general hospitals into ambulatory care settings, as has happened in the US (Berliner, 2006). A SCA alternative to the MFF would need to be mindful of PbR incentive structures and not act as an unintended brake on change.

Table 10.16 Medical Model A – Current Baseline Position
Key: CG = career grade doctors; TD = total doctors; VAW = volume-adjusted workload; CVAW = complexity- and volume-adjusted workload; ratios are per 1,000 units of the stated workload measure.
| Quintile | Consultant wte | All Career Grade wte | Registrar wte | All Junior Doctor wte | Total Doctor wte | Cons:Reg | Cons:All Junior | Career:Junior | CG per 1,000 FCE (A) | TD per 1,000 FCE (A) | CG per 1,000 VAW (B) | TD per 1,000 VAW (B) | CG per 1,000 CVAW (C) | TD per 1,000 CVAW (C) |
| Total Q1 | 4,530 | 6,184 | 2,022 | 6,535 | 12,719 | 2.24 | 0.69 | 0.95 | 1.98 | 4.07 | 1.58 | 3.24 | 1.32 | 2.73 |
| Total Q2 | 5,161 | 6,391 | 3,012 | 7,760 | 14,151 | 1.71 | 0.67 | 0.82 | 2.11 | 4.68 | 1.65 | 3.66 | 1.37 | 3.02 |
| Total Q3 | 4,339 | 5,484 | 2,454 | 6,406 | 11,890 | 1.77 | 0.68 | 0.86 | 2.13 | 4.62 | 1.66 | 3.60 | 1.34 | 2.91 |
| Total Q4 | 4,542 | 5,695 | 2,876 | 7,062 | 12,757 | 1.58 | 0.64 | 0.81 | 2.39 | 5.36 | 1.83 | 4.10 | 1.44 | 3.23 |
| Total Q5 | 5,186 | 6,206 | 4,408 | 9,054 | 15,259 | 1.18 | 0.57 | 0.69 | 2.86 | 7.04 | 2.09 | 5.14 | 1.62 | 3.99 |
| Total – all quintiles | 23,758 | 29,959 | 14,771 | 36,817 | 66,776 | 1.61 | 0.65 | 0.81 | 2.26 | 5.03 | 1.75 | 3.89 | 1.41 | 3.15 |
| Total – % variance (5-1) | +14% | +0% | +118% | +39% | +20% | -47% | -17% | -28% | +45% | +73% | +32% | +58% | +22% | +46% |
| Acute Q1 | 4,482 | 6,132 | 1,998 | 6,510 | 12,642 | 2.24 | 0.69 | 0.94 | 1.97 | 4.06 | 1.57 | 3.24 | 1.32 | 2.73 |
| Acute Q2 | 2,852 | 3,804 | 1,205 | 4,030 | 7,833 | 2.37 | 0.71 | 0.94 | 1.99 | 4.09 | 1.57 | 3.23 | 1.34 | 2.77 |
| Acute Q3 | 3,289 | 4,361 | 1,417 | 4,571 | 8,933 | 2.32 | 0.72 | 0.95 | 2.05 | 4.19 | 1.61 | 3.29 | 1.32 | 2.71 |
| Acute Q4 | 2,924 | 3,850 | 1,523 | 4,575 | 8,425 | 1.92 | 0.64 | 0.84 | 2.28 | 4.99 | 1.73 | 3.78 | 1.41 | 3.09 |
| Acute Q5 | 2,107 | 2,727 | 1,308 | 3,720 | 6,447 | 1.61 | 0.57 | 0.73 | 2.23 | 5.28 | 1.66 | 3.93 | 1.43 | 3.38 |
| Acute – all quintiles | 15,654 | 20,873 | 7,450 | 23,406 | 44,279 | 2.10 | 0.67 | 0.89 | 2.07 | 4.40 | 1.62 | 3.43 | 1.36 | 2.88 |
| Acute – % variance (5-1) | -53% | -56% | -35% | -43% | -49% | -28% | -18% | -22% | +14% | +30% | +6% | +21% | +8% | +24% |
| Teaching Q2 | 2,026 | 2,275 | 1,565 | 3,374 | 5,649 | 1.29 | 0.60 | 0.67 | 2.27 | 5.64 | 1.74 | 4.33 | 1.37 | 3.41 |
| Teaching Q3 | 978 | 1,044 | 960 | 1,742 | 2,786 | 1.02 | 0.56 | 0.60 | 2.50 | 6.68 | 1.92 | 5.11 | 1.48 | 3.95 |
| Teaching Q4 | 1,309 | 1,506 | 1,084 | 2,090 | 3,596 | 1.21 | 0.63 | 0.72 | 2.60 | 6.20 | 2.07 | 4.94 | 1.51 | 3.60 |
| Teaching Q5 | 2,626 | 2,905 | 2,679 | 4,759 | 7,664 | 0.98 | 0.55 | 0.61 | 3.44 | 9.08 | 2.46 | 6.48 | 1.73 | 4.56 |
| Teaching – all quintiles | 6,939 | 7,730 | 6,288 | 11,964 | 19,694 | 1.10 | 0.58 | 0.65 | 2.72 | 6.93 | 2.06 | 5.24 | 1.53 | 3.91 |
| Teaching – % variance (5-2) | +30% | +28% | +71% | +41% | +36% | -24% | -8% | -9% | +52% | +61% | +41% | +50% | +26% | +34% |

Table 10.17 Medical Model B – Peer Group Average
Key: CG = career grade doctors; TD = total doctors; VAW = volume-adjusted workload; CVAW = complexity- and volume-adjusted workload; ratios are per 1,000 units of the stated workload measure.
| Quintile | Consultant wte | All Career Grade wte | Registrar wte | All Junior Doctor wte | Total Doctor wte | Cons:Reg | Cons:All Junior | Career:Junior | CG per 1,000 FCE (A) | TD per 1,000 FCE (A) | CG per 1,000 VAW (B) | TD per 1,000 VAW (B) | CG per 1,000 CVAW (C) | TD per 1,000 CVAW (C) |
| Total Q1 | 4,530 | 6,184 | 2,022 | 6,535 | 12,719 | 2.24 | 0.69 | 0.95 | 1.98 | 4.07 | 1.58 | 3.24 | 1.32 | 2.73 |
| Total Q2 | 5,161 | 6,391 | 3,012 | 7,760 | 14,153 | 1.71 | 0.67 | 0.82 | 2.11 | 4.68 | 1.65 | 3.66 | 1.37 | 3.02 |
| Total Q3 | 4,339 | 5,484 | 2,454 | 6,406 | 11,890 | 1.77 | 0.68 | 0.86 | 2.13 | 4.62 | 1.66 | 3.60 | 1.34 | 2.91 |
| Total Q4 | 4,542 | 5,555 | 2,876 | 7,062 | 12,617 | 1.58 | 0.64 | 0.79 | 2.33 | 5.30 | 1.79 | 4.06 | 1.41 | 3.19 |
| Total Q5 | 5,135 | 5,739 | 4,408 | 9,054 | 14,792 | 1.16 | 0.57 | 0.63 | 2.65 | 6.82 | 1.93 | 4.98 | 1.50 | 3.87 |
| Total – all quintiles | 23,707 | 29,352 | 14,771 | 36,817 | 66,171 | 1.60 | 0.64 | 0.80 | 2.21 | 4.98 | 1.71 | 3.85 | 1.38 | 3.12 |
| Total – % variance (5-1) | +13% | -7% | +118% | +39% | +16% | -48% | -18% | -33% | +34% | +68% | +22% | +53% | +13% | +42% |
| Acute Q1 | 4,482 | 6,132 | 1,998 | 6,510 | 12,642 | 2.24 | 0.69 | 0.94 | 1.97 | 4.06 | 1.57 | 3.24 | 1.32 | 2.73 |
| Acute Q2 | 2,852 | 3,804 | 1,205 | 4,030 | 7,833 | 2.37 | 0.71 | 0.94 | 1.99 | 4.09 | 1.57 | 3.23 | 1.34 | 2.77 |
| Acute Q3 | 3,289 | 4,361 | 1,417 | 4,571 | 8,933 | 2.32 | 0.72 | 0.95 | 2.05 | 4.19 | 1.61 | 3.29 | 1.32 | 2.71 |
| Acute Q4 | 2,924 | 3,710 | 1,523 | 4,575 | 8,284 | 1.92 | 0.64 | 0.81 | 2.20 | 4.90 | 1.67 | 3.72 | 1.36 | 3.04 |
| Acute Q5 | 2,107 | 2,590 | 1,308 | 3,720 | 6,311 | 1.61 | 0.57 | 0.70 | 2.12 | 5.17 | 1.58 | 3.85 | 1.36 | 3.31 |
| Acute – all quintiles | 15,654 | 20,597 | 7,450 | 23,406 | 44,003 | 2.10 | 0.67 | 0.88 | 2.04 | 4.37 | 1.60 | 3.41 | 1.34 | 2.86 |
| Acute – % variance (5-1) | -53% | -58% | -35% | -43% | -50% | -28% | -18% | -26% | +8% | +27% | +1% | +19% | +3% | +21% |
| Teaching Q2 | 2,026 | 2,275 | 1,565 | 3,374 | 5,649 | 1.29 | 0.60 | 0.67 | 2.27 | 5.64 | 1.74 | 4.33 | 1.37 | 3.41 |
| Teaching Q3 | 978 | 1,044 | 960 | 1,742 | 2,786 | 1.02 | 0.56 | 0.60 | 2.50 | 6.68 | 1.92 | 5.11 | 1.48 | 3.95 |
| Teaching Q4 | 1,309 | 1,506 | 1,084 | 2,090 | 3,596 | 1.21 | 0.63 | 0.72 | 2.60 | 6.20 | 2.07 | 4.94 | 1.51 | 3.60 |
| Teaching Q5 | 2,574 | 2,574 | 2,679 | 4,759 | 7,333 | 0.96 | 0.54 | 0.54 | 3.05 | 8.69 | 2.18 | 6.20 | 1.53 | 4.36 |
| Teaching – all quintiles | 6,887 | 7,399 | 6,288 | 11,964 | 19,363 | 1.10 | 0.58 | 0.62 | 2.60 | 6.81 | 1.97 | 5.15 | 1.47 | 3.84 |
| Teaching – % variance (5-2) | +27% | +13% | +71% | +41% | +30% | -26% | -10% | -20% | +34% | +54% | +25% | +43% | +11% | +28% |

Table 10.18 Medical Model C – No Constraint
Key: CG = career grade doctors; TD = total doctors; VAW = volume-adjusted workload; CVAW = complexity- and volume-adjusted workload; ratios are per 1,000 units of the stated workload measure.
| Quintile | Consultant wte | All Career Grade wte | Registrar wte | All Junior Doctor wte | Total Doctor wte | Cons:Reg | Cons:All Junior | Career:Junior | CG per 1,000 FCE (A) | TD per 1,000 FCE (A) | CG per 1,000 VAW (B) | TD per 1,000 VAW (B) | CG per 1,000 CVAW (C) | TD per 1,000 CVAW (C) |
| Total Q1 | 4,530 | 6,184 | 2,022 | 6,535 | 12,719 | 2.24 | 0.69 | 0.95 | 1.98 | 4.07 | 1.58 | 3.24 | 1.32 | 2.73 |
| Total Q2 | 5,161 | 6,322 | 3,012 | 7,760 | 14,085 | 1.71 | 0.67 | 0.81 | 2.09 | 4.66 | 1.64 | 3.64 | 1.35 | 3.01 |
| Total Q3 | 4,328 | 5,407 | 2,454 | 6,406 | 11,813 | 1.76 | 0.68 | 0.84 | 2.10 | 4.59 | 1.64 | 3.58 | 1.32 | 2.89 |
| Total Q4 | 4,542 | 5,307 | 2,876 | 7,062 | 12,369 | 1.58 | 0.64 | 0.75 | 2.23 | 5.20 | 1.71 | 3.98 | 1.34 | 3.13 |
| Total Q5 | 4,866 | 5,393 | 4,408 | 9,054 | 14,447 | 1.10 | 0.54 | 0.60 | 2.49 | 6.66 | 1.82 | 4.86 | 1.41 | 3.78 |
| Total – all quintiles | 23,427 | 28,614 | 14,771 | 36,817 | 65,433 | 1.59 | 0.64 | 0.78 | 2.16 | 4.93 | 1.67 | 3.81 | 1.35 | 3.08 |
| Total – % variance (5-1) | +7% | -13% | +118% | +39% | +14% | -51% | -22% | -37% | +26% | +64% | +15% | +50% | +6% | +39% |
| Acute Q1 | 4,482 | 6,132 | 1,998 | 6,510 | 12,642 | 2.24 | 0.69 | 0.94 | 1.97 | 4.06 | 1.57 | 3.24 | 1.32 | 2.73 |
| Acute Q2 | 2,852 | 3,735 | 1,205 | 4,030 | 7,765 | 2.37 | 0.71 | 0.93 | 1.95 | 4.05 | 1.54 | 3.21 | 1.32 | 2.74 |
| Acute Q3 | 3,289 | 4,361 | 1,417 | 4,571 | 8,933 | 2.32 | 0.72 | 0.95 | 2.05 | 4.19 | 1.61 | 3.29 | 1.32 | 2.71 |
| Acute Q4 | 2,924 | 3,601 | 1,523 | 4,575 | 8,175 | 1.92 | 0.64 | 0.79 | 2.13 | 4.84 | 1.62 | 3.67 | 1.32 | 3.00 |
| Acute Q5 | 2,107 | 2,514 | 1,308 | 3,720 | 6,234 | 1.61 | 0.57 | 0.68 | 2.06 | 5.11 | 1.53 | 3.80 | 1.32 | 3.27 |
| Acute – all quintiles | 15,654 | 20,343 | 7,450 | 23,406 | 43,749 | 2.10 | 0.67 | 0.87 | 2.02 | 4.34 | 1.58 | 3.39 | 1.32 | 2.84 |
| Acute – % variance (5-1) | -53% | -59% | -35% | -43% | -51% | -28% | -18% | -28% | +5% | +26% | -2% | +17% | -0% | +20% |
| Teaching Q2 | 2,026 | 2,275 | 1,565 | 3,374 | 5,649 | 1.29 | 0.60 | 0.67 | 2.27 | 5.64 | 1.74 | 4.33 | 1.37 | 3.41 |
| Teaching Q3 | 967 | 967 | 960 | 1,742 | 2,709 | 1.01 | 0.56 | 0.56 | 2.32 | 6.50 | 1.77 | 4.97 | 1.37 | 3.84 |
| Teaching Q4 | 1,309 | 1,367 | 1,084 | 2,090 | 3,457 | 1.21 | 0.63 | 0.65 | 2.36 | 5.96 | 1.88 | 4.75 | 1.37 | 3.46 |
| Teaching Q5 | 2,305 | 2,305 | 2,679 | 4,759 | 7,064 | 0.86 | 0.48 | 0.48 | 2.73 | 8.37 | 1.95 | 5.97 | 1.37 | 4.20 |
| Teaching – all quintiles | 6,607 | 6,914 | 6,288 | 11,964 | 18,878 | 1.05 | 0.55 | 0.58 | 2.43 | 6.64 | 1.84 | 5.02 | 1.37 | 3.74 |
| Teaching – % variance (5-2) | +14% | +1% | +71% | +41% | +25% | -34% | -19% | -28% | +20% | +48% | +12% | +38% | -0% | +23% |

Table 10.19 Medical Model C – Constrained
Key: CG = career grade doctors; TD = total doctors; VAW = volume-adjusted workload; CVAW = complexity- and volume-adjusted workload; ratios are per 1,000 units of the stated workload measure.
| Quintile | Consultant wte | All Career Grade wte | Registrar wte | All Junior Doctor wte | Total Doctor wte | Cons:Reg | Cons:All Junior | Career:Junior | CG per 1,000 FCE (A) | TD per 1,000 FCE (A) | CG per 1,000 VAW (B) | TD per 1,000 VAW (B) | CG per 1,000 CVAW (C) | TD per 1,000 CVAW (C) |
| Total Q1 | 4,530 | 6,184 | 2,022 | 6,535 | 12,719 | 2.24 | 0.69 | 0.95 | 1.98 | 4.07 | 1.58 | 3.24 | 1.32 | 2.73 |
| Total Q2 | 5,161 | 6,322 | 3,012 | 7,760 | 14,085 | 1.71 | 0.67 | 0.81 | 2.09 | 4.66 | 1.64 | 3.64 | 1.35 | 3.01 |
| Total Q3 | 4,328 | 5,407 | 2,454 | 6,406 | 11,813 | 1.76 | 0.68 | 0.84 | 2.10 | 4.59 | 1.64 | 3.58 | 1.32 | 2.89 |
| Total Q4 | 4,542 | 5,307 | 2,876 | 7,062 | 12,369 | 1.58 | 0.64 | 0.75 | 2.23 | 5.20 | 1.71 | 3.98 | 1.34 | 3.13 |
| Total Q5 | 5,240 | 5,767 | 4,408 | 9,054 | 14,820 | 1.19 | 0.58 | 0.64 | 2.66 | 6.83 | 1.94 | 4.99 | 1.51 | 3.88 |
| Total – all quintiles | 23,801 | 28,987 | 14,771 | 36,817 | 65,807 | 1.61 | 0.65 | 0.79 | 2.18 | 4.96 | 1.69 | 3.83 | 1.37 | 3.10 |
| Total – % variance (5-1) | +16% | -7% | +118% | +39% | +17% | -47% | -16% | -33% | +34% | +68% | +23% | +54% | +14% | +42% |
| Acute Q1 | 4,482 | 6,132 | 1,998 | 6,510 | 12,642 | 2.24 | 0.69 | 0.94 | 1.97 | 4.06 | 1.57 | 3.24 | 1.32 | 2.73 |
| Acute Q2 | 2,852 | 3,735 | 1,205 | 4,030 | 7,765 | 2.37 | 0.71 | 0.93 | 1.95 | 4.05 | 1.54 | 3.21 | 1.32 | 2.74 |
| Acute Q3 | 3,289 | 4,361 | 1,417 | 4,571 | 8,933 | 2.32 | 0.72 | 0.95 | 2.05 | 4.19 | 1.61 | 3.29 | 1.32 | 2.71 |
| Acute Q4 | 2,924 | 3,601 | 1,523 | 4,575 | 8,175 | 1.92 | 0.64 | 0.79 | 2.13 | 4.84 | 1.62 | 3.67 | 1.32 | 3.00 |
| Acute Q5 | 2,107 | 2,514 | 1,308 | 3,720 | 6,234 | 1.61 | 0.57 | 0.68 | 2.06 | 5.11 | 1.53 | 3.80 | 1.32 | 3.27 |
| Acute – all quintiles | 15,654 | 20,343 | 7,450 | 23,406 | 43,749 | 2.10 | 0.67 | 0.87 | 2.02 | 4.34 | 1.58 | 3.39 | 1.32 | 2.84 |
| Acute – % variance (5-1) | -53% | -59% | -35% | -43% | -51% | -28% | -18% | -28% | +5% | +26% | -2% | +17% | +0% | +20% |
| Teaching Q2 | 2,026 | 2,275 | 1,565 | 3,374 | 5,649 | 1.29 | 0.60 | 0.67 | 2.27 | 5.64 | 1.74 | 4.33 | 1.37 | 3.41 |
| Teaching Q3 | 967 | 967 | 960 | 1,742 | 2,709 | 1.01 | 0.56 | 0.56 | 2.32 | 6.50 | 1.77 | 4.97 | 1.37 | 3.84 |
| Teaching Q4 | 1,309 | 1,367 | 1,084 | 2,090 | 3,457 | 1.21 | 0.63 | 0.65 | 2.36 | 5.96 | 1.88 | 4.75 | 1.37 | 3.46 |
| Teaching Q5 | 2,679 | 2,679 | 2,679 | 4,759 | 7,437 | 1.00 | 0.56 | 0.56 | 3.17 | 8.81 | 2.26 | 6.29 | 1.59 | 4.42 |
| Teaching – all quintiles | 6,981 | 7,288 | 6,288 | 11,964 | 19,252 | 1.11 | 0.58 | 0.61 | 2.56 | 6.77 | 1.94 | 5.12 | 1.45 | 3.82 |
| Teaching – % variance (5-2) | +32% | +18% | +71% | +41% | +32% | -23% | -6% | -17% | +40% | +56% | +30% | +45% | +16% | +30% |

CHAPTER 11. SPECIALTY ANALYSIS
The Specific Cost Approach review of the staff MFF was motivated by a desire to look at specialty costs as a way of understanding geographical cost drivers. This chapter uses general ledger and HRG costs to derive a specialty analysis.

METHOD
The enquiry has been undertaken in two stages. In the first place, we used the 14-trust micro study sample to develop the approach, allowing us to scrutinise the impact in detail against individual trusts. As a second stage, building on the results of the micro sample, we analysed the national data set of 173 hospital trusts. Here we casemix-weighted each trust's activity and derived a 'national average cost' for the trust, based on national unit costs, which was compared against actual trust costs. The distance from average was observed for the trust, and then analysed using a range of arithmetic and statistical regression methods. Finally, the movement in trust cost behaviour was set alongside the MFF index.
The methodology employed the following sequence, in which we:
1. Summarised FCE volume by HRG chapter and by trust. The largest chapter is F (Digestive System), accounting for 15% of FCEs, while N (obstetrics and neonatal) accounts for 10% and E (cardiac) is a further 10%. Smaller volume chapters included R (spinal, 1%) and K (endocrine and metabolic, 1%).
2. Calculated the trust cost based on trust HRG unit costs and trust volume of FCEs (including excess bed days).
3. Calculated the national average unit cost for the HRG, based on the national data set. This was derived by taking the total cost for the FCEs (including excess bed days) and dividing by the total number of FCEs in that HRG. The effect was to produce a single average unit cost, rather than a separate average for day cases, elective and emergency inpatients. It would not reproduce the Reference Cost Index, as its structure is slightly different, but it allows us to model the effect of unit costs at a standard level of 'efficiency' in terms of day case/inpatient mix for any given HRG (see footnote 46).
4. Applied the national average unit cost per HRG to each trust's activity to produce a total trust cost based on national average costs.
5. Calculated the difference between the trust cost (step 2) and the notional (benchmark) trust cost based on national average HRG unit costs (step 4).
6. Represented these differences (at step 5) as a percentage of trust costs to explore any patterns that may emerge.

Footnote 46: Several computational strategies were available. The single HRG unit cost approach was selected as a means of managing half a million records in an efficient manner to produce results that could be readily interpreted. The alternative approaches included (a) applying separate elective and non-elective costs to each HRG and (b) applying separate unit costs to day cases, elective inpatients and non-elective inpatients. Appendix 1 indicates that application of these approaches would make little material difference to the spatial pattern of results, lending support to the approach taken here.

7. Undertook simple and multivariate regression of the casemix average (i.e. the output of step 6) against the staff MFF (independent variable), to estimate the variation in costs that can be explained by the MFF.
8. Ranked the R squared statistics to observe which specialty areas (chapters) varied most consistently with geography (the MFF). This was undertaken at chapter and at total trust level.
9. Analysed trust behaviour according to quintiles, where quintiles are defined according to MFF scores, with the 20% of trusts in the lowest part of the MFF range in quintile 1 and the 20% with the highest scores in quintile 5.
10. Summarised cost behaviour into an index, described here as the HRG cost index, and compared it against the movement of the MFF index.
11. Analysed the MFF at quintile level to select the appropriate marker for comparison by quintile (see Appendix 11.1). This was done by comparing the mean, median and mid point ((max-min)/2) with weighted averages based on (a) total trust casemix weighted costs and (b) total national casemix average weighted costs. The preferred point of central tendency by quintile was selected as (b), the MFF weighted by national average HRG costs.
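A compact sketch of steps 2-6 is given below. It is a simplified illustration of the calculation rather than the code used in the study: the column names, the pandas layout and the toy figures are assumptions, and excess bed days are folded into the unit costs for brevity.

```python
# Sketch of steps 2-6: actual trust cost vs a benchmark priced at national
# average HRG unit costs.  Column names and toy figures are assumptions.
import pandas as pd

# One row per trust per HRG: the trust's FCE count and its own unit cost.
activity = pd.DataFrame({
    "trust": ["T1", "T1", "T2", "T2"],
    "hrg":   ["F35", "H04", "F35", "H04"],
    "fces":  [1200, 300, 800, 450],
    "trust_unit_cost": [950.0, 4100.0, 1100.0, 4650.0],
})

# Step 2: each trust's own cost for its activity.
activity["trust_cost"] = activity.fces * activity.trust_unit_cost

# Step 3: a single national average unit cost per HRG (total cost / total FCEs).
totals = activity.groupby("hrg").agg(total_cost=("trust_cost", "sum"),
                                     total_fces=("fces", "sum"))
national_unit_cost = totals.total_cost / totals.total_fces

# Step 4: the notional trust cost priced at national average unit costs.
activity["benchmark_cost"] = activity.fces * activity.hrg.map(national_unit_cost)

# Steps 5-6: distance from the benchmark, expressed as a percentage.
# (Table 11.1 expresses this as a % of trust cost, Table 11.3 as a % of the
# national average; the benchmark denominator is used here.)
per_trust = activity.groupby("trust")[["trust_cost", "benchmark_cost"]].sum()
per_trust["pct_distance"] = ((per_trust.trust_cost - per_trust.benchmark_cost)
                             / per_trust.benchmark_cost * 100)
print(per_trust.round(1))
```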

RESULTS
Micro Sample
The table below summarises the variation at trust level between costs and the notional (benchmark) national average for the trust (summarising step 5 in the Methods section for the micro sample). There appears to be an association between distance from average and geography. High MFF (London) trusts are consistently above average national costs while lower MFF trusts are generally below the average (with the exception of Trusts 3 and 4). Within the trust total we found that HRG chapters behaved in different ways (see Appendix 11.2 for a summary of the micro sample and Table 11.2 for a summary of the national sample). Geographical cost drivers appear to vary in strength between specialties. The results suggest that maternity and ophthalmology have weak implicit geographical cost variations, both at the level of the micro sample and for England as a whole. Orthopaedics (H, musculoskeletal) has stronger geographical cost variations at the level of HRG but, judging by the analysis of direct costs in Chapter 4, these variations might reside in indirect costs, e.g. radiology and theatre, or in sub-specialist activity. Chapter S (haematology, infectious diseases, poisoning) and Chapter P (diseases of childhood) have the strongest geographical association. The R squared for the sample as a whole is higher than the R squared for any individual HRG chapter, implying that there is a spatial pattern in the way underlying HRG codes are combined.

Table 11.1 Distance Between Trust Cost and Cost Based on National Average Unit Costs (Expressed as % of National Average for Trust)
| Trust MFF | Trust | Trust Cost minus National Average Cost for the Trust, as % of Trust Cost |
| 0.8640 | Trust 1 | -14% |
| 0.9190 | Trust 2 | -2% |
| 0.9219 | Trust 3 | +2% |
| 0.9408 | Trust 4 | +7% |
| 0.9510 | Trust 5 | -17% |
| 0.9561 | Trust 6 | -11% |
| 0.9790 | Trust 7 | -13% |
| 0.9814 | Trust 8 | -19% |
| 1.0037 | Trust 9 | -14% |
| 1.1520 | Trust 10 | +4% |
| 1.1822 | Trust 11 | +43% |
| 1.1998 | Trust 12 | +27% |
| 1.2086 | Trust 13 | +29% |
| 1.2798 | Trust 14 | +18% |
| | All Trust Average | +4% |

Table 11.2 Ranking Comparison of R2 Statistic in National Samples (n = c.170)
| Chapter | Chapter Name | Including Excess Bed Days: R Squared | Rank | p = | Excluding Excess Bed Days: R Squared | Rank | p = |
| A | Nervous System | 11% | 11 | 0.0000 | 15% | 12 | 0.0000 |
| B | Eyes | 1% | 3 | 0.2639 | 1% | 1 | 0.2637 |
| C | Mouth, Head, Neck and Ears | 9% | 9 | 0.0000 | 7% | 4 | 0.0005 |
| D | Respiratory System | 12% | 12 | 0.0000 | 13% | 11 | 0.0000 |
| E | Cardiac | 4% | 5 | 0.0076 | 8% | 5 | 0.0002 |
| F | Digestive System | 13% | 13 | 0.0000 | 18% | 13 | 0.0000 |
| G | Hepatobiliary and Pancreatic | 1% | 2 | 0.3224 | 11% | 9 | 0.0000 |
| H | Musculoskeletal | 23% | 17 | 0.0000 | 20% | 15 | 0.0000 |
| J | Breast, Burns, Skin | 5% | 6 | 0.0045 | 11% | 8 | 0.0000 |
| K | Endocrine and Metabolic | 8% | 8 | 0.0000 | 9% | 6 | 0.0000 |
| L | Urological | 0% | 1 | 0.8371 | 18% | 14 | 0.0000 |
| M | Female Reproductive | 6% | 7 | 0.0009 | 5% | 2 | 0.0038 |
| N | Obstetrics and Neonatal | 2% | 4 | 0.0447 | 5% | 3 | 0.0044 |
| P | Children | 21% | 15 | 0.0000 | 22% | 16 | 0.0000 |
| Q | Vascular | 14% | 14 | 0.0000 | 10% | 7 | 0.0000 |
| R | Spinal | 10% | 10 | 0.0000 | 12% | 10 | 0.0000 |
| S | Haematology, Infectious Diseases, Poisoning | 22% | 16 | 0.0000 | 24% | 17 | 0.0000 |
| All HRG Chapters | | 32% | | 0.0000 | 31% | | 0.0000 |

National Sample
Spatial variation in costs was tested with reference to the national sample, organised according to quintiles. Table 11.3 summarises the relationship between trust costs and the national average. The 'cost distance as % of national average' (penultimate column in Table 11.3) reflects the difference between trust costs and the national average cost, expressed as a percentage of the national average. This is translated into an index (final column) called the 'SCA HRG' Index. The index was generated for each trust and the results are summarised in Figure 11.1. The overall explanatory power between the MFF and this index is 32% (see Table 11.2); at individual trust level the SCA HRG Index tends to oscillate around the MFF. The outliers mainly include specialist trusts (see Figure 11.1d) which, we know from the HCC and medical analyses in previous chapters, behave in an irregular manner.

Table 11.3 Cost Differences as % of Notional National Averages for Trusts
| Quintile | Staff MFF Weighted Mean | Midpoint of Staff MFF Range | Trust Total Cost | Costs at National Average | Distance Between Trust and National Average | Cost Distance as % of National Average | SCA HRG Index |
| 1 | 0.91 | 0.89 | £3,126,816,775 | £3,397,378,667 | -£270,561,891 | -8% | 0.92 |
| 2 | 0.94 | 0.94 | £3,181,259,486 | £3,317,569,554 | -£136,310,068 | -4% | 0.96 |
| 3 | 0.98 | 0.98 | £3,206,235,720 | £3,321,342,906 | -£115,107,187 | -3% | 0.97 |
| 4 | 1.04 | 1.06 | £2,838,539,419 | £2,744,307,305 | £94,232,114 | +3% | 1.03 |
| 5 | 1.18 | 1.19 | £2,974,325,575 | £2,480,156,923 | £494,168,652 | +20% | 1.20 |
| Grand Total | 1.00 | | £15,327,176,974 | £15,260,755,354 | £66,421,620 | 0% | 1.00 |

Figure 11.1a Staff MFF and Trust HRG Cost Index by Individual Trust (All)
[Line chart: Staff MFF and HRG Cost Index plotted for each trust; index scale 0.6 to 1.8.]
Figure 11.1b Staff MFF and Trust HRG Cost Index by Trust (Acute)
[Line chart: Staff MFF and HRG Cost Index plotted for each acute trust; index scale 0.6 to 1.8.]
Figure 11.1c Staff MFF and Trust HRG Cost Index by Trust (Teaching)
[Line chart: Staff MFF Index and SCA HRG Index plotted for each teaching trust; index scale 0.6 to 1.8.]
Figure 11.1d Staff MFF and Trust HRG Cost Index by Trust (Specialist)
[Line chart: Staff MFF Index and SCA HRG Index plotted for each specialist trust; index scale 0.6 to 1.8.]

Figure 11.2 examines the behaviour of different hospital types (acute, teaching and specialist), summarised at a quintile level and compared against the trust aggregate. Bearing in mind that all costs are casemix weighted at the level of individual HRG (before being summed to chapters and then to trust), it is apparent that teaching hospitals are more expensive and acute hospitals are less expensive in Quintile 5, where we observe the greatest amount of variation in cost behaviour overall. In Quintile 2 the net difference between acute and teaching trusts is small, whereas it diverges in Quintile 3, converges in Quintile 4 (where teaching hospitals are less expensive than acute hospitals), and then diverges once more in Quintile 5. It is striking that the aggregated position of the total trust index nets off these differences so that there is a close match between the SCA HRG cost index behaviour and the staff MFF (Figure 11.3).

Figure 11.2 Summarising Trust Type and HRG Cost Index by Quintile
[Line chart by trust quintiles along the MFF range; underlying data:]
| Quintile | 1 | 2 | 3 | 4 | 5 |
| Staff MFF | 0.91 | 0.94 | 0.98 | 1.04 | 1.18 |
| Acute | 0.92 | 0.95 | 0.91 | 1.03 | 1.08 |
| Teaching | | 0.98 | 1.17 | 0.99 | 1.32 |
| Specialist | 1.13 | 0.99 | 1.17 | 1.29 | 1.37 |
| Total | 0.92 | 0.96 | 0.97 | 1.03 | 1.20 |

Figure 11.3 Staff MFF Index and Trust HRG Cost Index by Quintile Around a Base of 1
[Line chart by trust quintiles along the MFF range; underlying data:]
| Quintile | 1 | 2 | 3 | 4 | 5 |
| Staff MFF | 0.91 | 0.94 | 0.98 | 1.04 | 1.18 |
| All Trusts | 0.92 | 0.96 | 0.97 | 1.03 | 1.20 |

MULTIVARIATE REGRESSION ANALYSIS
Trust Level Analysis. Analysis using case-mix adjusted costs as the dependent variable gives an adjusted R squared of 31.4% with the MFF only and 48.2% with location and trust type added. It is noticeable that the coefficient of the MFF is cut by 60% with the introduction of location and trust type. (The introduction of summary quality variables does not make much difference and so is not reported here.)

Table 11.4(A) Dependent Case Mix Adjusted Costs: Model Summary
| Model | R | R Square | Adjusted R Square | Std. Error of the Estimate |
| 1 | .564 | .318 | .314 | 15.45% |
| 2 | .703 | .494 | .482 | 13.44% |
a Predictors: (Constant), Staff MFF
b Predictors: (Constant), Staff MFF, NSPECIAL, NTEACHING, NLONDON

Table 11.4(B) Dependent Case Mix Adjusted Costs: Coefficients
| Model | Term | B | Std. Error | Beta | t | Sig. |
| 1 | (Constant) | -104.686 | 12.057 | | -8.682 | .000 |
| 1 | Staff MFF | 106.014 | 11.861 | .564 | 8.938 | .000 |
| 2 | (Constant) | -45.837 | 16.051 | | -2.856 | .005 |
| 2 | Staff MFF | 41.105 | 16.544 | .219 | 2.485 | .014 |
| 2 | LONDON LOCATION | 15.434 | 4.082 | .322 | 3.781 | .000 |
| 2 | SPECIALIST HOSPITAL | 21.614 | 3.290 | .371 | 6.569 | .000 |
| 2 | TEACHING HOSPITAL | 10.150 | 3.084 | .192 | 3.291 | .001 |
a Dependent Variable: % distance

Chapter by Chapter. The results of the regressions with the MFF only, and with the MFF plus trust type and London location, are shown in Table 11.4(C). For some chapters (A, L, P) the addition of trust type and location makes hardly any, or only a little, difference; for all of the other chapters, the location and type of trust makes a substantial difference: for example, in Chapter B the accounted-for variance increases from zero to 10%, and for Chapter S it increases from 22% to 44%.

Table 11.4(C) Dependent Case Mix Adjusted Costs: Model Summary by Chapter
| Chapter | Model 1 (MFF only): R | R2 | Adjusted R2 | Model 2 (MFF plus London and trust type): R | R2 | Adjusted R2 |
| A | .333 | .111 | .105 | .369 | .136 | .115 |
| B | .086 | .007 | .002 | .352 | .124 | .102 |
| C | .294 | .086 | .081 | .460 | .211 | .193 |
| D | .343 | .118 | .112 | .538 | .289 | .272 |
| E | .203 | .041 | .036 | .423 | .179 | .159 |
| F | .355 | .126 | .121 | .611 | .373 | .358 |
| G | .077 | .006 | .000 | .350 | .122 | .101 |
| H | .478 | .229 | .224 | .623 | .388 | .373 |
| J | .215 | .046 | .041 | .479 | .229 | .211 |
| K | .277 | .077 | .071 | .549 | .302 | .285 |
| L | .016 | .000 | -.006 | .233 | .055 | .032 |
| M | .254 | .065 | .059 | .453 | .205 | .185 |
| N | .158 | .025 | .019 | .505 | .255 | .236 |
| P | .456 | .208 | .203 | .531 | .282 | .265 |
| Q | .366 | .134 | .129 | .533 | .284 | .267 |
| R | .312 | .097 | .092 | .439 | .192 | .173 |
| S | .473 | .224 | .219 | .672 | .451 | .438 |
a Predictors: (Constant), Staff MFF
b Predictors: (Constant), Staff MFF, NSPECIAL, NTEACHING, NLONDON

Being a specialist hospital always has a greater effect on the cost deviations than whether or not the hospital has teaching status; sometimes the difference is large. For example, in the analyses for Chapters B, E, F, G, K, L, M, N, Q and S the standardised Beta for whether or not the hospital is specialist is at least four times the size of the standardised Beta for whether or not the hospital is in London. (See Table 11.5.)
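The specification behind Tables 11.4 and 11.5 can be written down compactly. The sketch below fits the Model 2 form (MFF plus London and trust-type dummies) by ordinary least squares. It is illustrative only: the variable names are assumptions and the toy data are randomly generated, not the trust-level dataset used in the study.

```python
# Illustrative OLS fit of the Model 2 specification:
#   % distance = b0 + b1*StaffMFF + b2*London + b3*Specialist + b4*Teaching
# Toy data; names and values are invented for the example.
import numpy as np

rng = np.random.default_rng(0)
n = 170
staff_mff  = rng.uniform(0.85, 1.28, n)
london     = (staff_mff > 1.15).astype(float)   # crude stand-in for a London dummy
specialist = rng.integers(0, 2, n).astype(float)
teaching   = rng.integers(0, 2, n).astype(float)
pct_distance = (-45 + 41 * staff_mff + 15 * london
                + 22 * specialist + 10 * teaching + rng.normal(0, 13, n))

# Design matrix with an intercept, then ordinary least squares.
X = np.column_stack([np.ones(n), staff_mff, london, specialist, teaching])
coef, *_ = np.linalg.lstsq(X, pct_distance, rcond=None)

resid = pct_distance - X @ coef
r2 = 1 - resid.var() / pct_distance.var()
print(dict(zip(["const", "mff", "london", "specialist", "teaching"],
               coef.round(2))), round(r2, 3))
```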

DISCUSSION OF HRG SPECIALTY COSTS
Our earlier analysis of specialty costs within trust general ledgers (Chapter 4) led us to propose a set of criteria, requiring that SCA data sources should be:
• Comprehensive:
  o covering all specialties
  o covering all functions (direct and indirect)
• Credible: locally accepted (i.e. locally generated)
• Generically coded – based on a common national coding structure
• Capable of reflecting casemix and complexity weightings
• Price-based rather than cost-based, netting out the effect of non-HCHS funding
• In the public domain
The HRG data set fulfils these criteria and we set about considering specialty costs according to HRG chapter (which cuts across doctors' specialty labels, since chapters relate to parts of the body which may be shared by more than one specialist, e.g. skin). We tested HRG unit costs against the 14-trust micro sample and then the national sample, with broadly similar results (reinforcing the evidence that the micro sample tells us something about the behaviour of costs throughout England as a whole). Certain specialties appeared to have no spatial pattern of variation, chief among them maternity (obstetrics and neonatal) and eyes. Other specialties had a stronger spatial cost pattern (in relation to the national average HRG cost), most notably Chapter S (haematology, infectious diseases and poisoning), the digestive system and paediatrics. The strongest pattern of spatial variation occurred at trust level and not at specialty level, a finding that was replicated in the general ledger analysis of the micro sample.
Spatial Variation
We mapped cost differences as a percentage of the national average cost for the trust (casemix weighted through the HRG approach) against the MFF index by using the quintile approach. We found a range of -8% in quintile 1 to +20% in quintile 5, giving an index of 0.92 – 1.20 which resembles the staff MFF quintile mean range of 0.91 – 1.18.
Avoidable/Unavoidable
Figure 11.3 maps the staff MFF and the HRG index derived from the unit cost analysis, set around a base of 1. The initial striking feature is the coincidence between the staff MFF index and the HRG index. The implication is that the MFF 'works' in that it reflects cost differentials. However, the circularity element needs to be borne in mind, because these cost bases are underpinned by current funding that includes the MFF.

The SCA HRG index shows a range between 0.92 and 1.20 in the HRG quintile cost variation, giving a 30% gap (1.20/0.92) which is coincident with the distance between quintile averages of the MFF (1.18/0.91). The current staff MFF index is consistent with measures of spatial differentials in costs among trusts. Figure 11.1 reflects the same data set as Figures 11.2 and 11.3, but displayed at the level of individual trust rather than quintile. The adjusted R squared measure of association between the staff MFF and the HRG index is 31%, which is significant but weak in its explanatory power (since 69% of variation in the cost base remains unexplained by the staff MFF). Figure 11.1 shows us that at the level of individual trust there is considerable variation between the cost base and the distance from national average, represented by the HRG index (where 1 = the national average). The multivariate regression which controls for trust type (i.e. acute, specialist or teaching) and London location explains a further 17%, leaving 52% of variation in costs unexplained. Differences between individual trusts are wider than differences between quintiles, which tend to follow the contour of the MFF.
In terms of avoidable and unavoidable costs, the implication of this discussion is that the MFF, taken in the round, is a fair reflection of unavoidable cost variations. The HRG Index shows us 'what is' rather than 'what should be', and we know from the earlier analysis of ward nursing costs that Quintile 1 operates at a higher level of efficiency while acute trusts in Quintile 5 function at a lower level of efficiency.
Feasibility
HRG unit costs are the basis of the trust Reference Cost Index and are used to formulate the PbR tariff. The costs satisfy the criteria listed above and the analysis provided a close match between what we dubbed the 'HRG Index' (i.e. distance from national average casemix weighted costs) and the staff MFF index. On the face of it there would be some merit in using this approach to dampen the impact of PbR to account for spatial differences. There is a conceptual problem, however, that would lay this approach open to greater criticism than the current GLM method. Rather than being disconnected like the GLM, this SCA would be rather too connected. It would effectively embed the current spatial variations at quintile level. If applied at trust rather than quintile level, it would compensate for distances from national average costs, acting as a countervailing force against the PbR mechanism. This objection to use of the HRG unit cost base in formulating a SCA is therefore conceptual rather than practical.

Feasibility HRG unit costs are the basis of the trust Reference Cost Index and are used to formulate the PbR tariff. The costs satisfy the criteria listed above and the analysis provided a close match between what we dubbed the ‘HRG Index’ (i.e. distance from national average casemix weighted costs) and the staff MFF index. On the face of it there would be some merit in using this approach to dampen the impact of PbR to account for spatial differences. There is a conceptual problem, however, that would lay this approach open to greater criticism than the current GLM method. Rather than being disconnected like the GLM, this SCA would be rather too connected. It would effectively embed the current spatial variations at quintile level. If applied at trust rather than quintile level, it would compensate for distances from national average costs, acting as a countervailing force against the PbR mechanism. This objection to use of the HRG Unit Cost base in formulating a SCA is therefore conceptual rather than practical.


Table 11.5 Chapter Details of Regression Models CHAPTER

A

B

C

D

E

F

G

(Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION

Unstandardised Coefficients B Std. Error -93.830 20.809 94.404 20.470

MODEL 1 Standardised Coefficients Beta

t

Sig.

.333

-4.509 4.612

.000 .000

-101.594 118.602

107.478 105.796

.086

-.945 1.121

.346 .264

-76.360 76.189

19.339 19.023

.294

-3.949 4.005

.000 .000

-99.807 104.285

22.216 21.854

.343

-4.493 4.772

.000 .000

-87.044 89.424

33.574 33.083

.203

-2.593 2.703

.010 .008

-147.425 154.541

31.705 31.242

.355

-4.650 4.946

.000 .000

-118.509 153.451

156.732 154.601

.077

-.756 .993

.451 .322

Unstandardised Coefficients B Std. Error -47.406 31.675 44.154 32.649 13.662 8.055 9.033 6.493 5.843 6.087 -76.983 156.801 78.722 161.695 -10.941 40.318 161.894 35.018 16.142 29.846 -4.073 27.747 -2.985 28.601 20.193 7.063 25.465 5.814 8.563 5.332 -71.460 30.793 69.654 31.740 2.452 7.831 39.973 6.312 11.180 5.917 -61.372 47.398 56.424 48.835 2.664 11.957 51.745 9.791 10.086 9.050 -38.955 40.966 33.164 42.208 31.417 10.334 65.109 8.462 8.834 7.822 -163.752 225.767 180.842 232.724 -24.037 57.654

MODEL 2 Standardised Coefficients Beta .156 .189 .103 .073 .057 -.031 .346 .042 -.012 .305 .310 .117 .229 .032 .424 .130 .128 .024 .379 .083 .076 .285 .482 .074 .090 -.047

t

Sig.

-1.497 1.352 1.696 1.391 .960 -.491 .487 -.271 4.623 .541 -.147 -.104 2.859 4.380 1.606 -2.321 2.195 .313 6.332 1.889 -1.295 1.155 .223 5.285 1.114 -.951 .786 3.040 7.694 1.129 -.725 .777 -.417

.136 .178 .092 .166 .338 .624 .627 .786 .000 .589 .883 .917 .005 .000 .110 .022 .030 .755 .000 .061 .197 .250 .824 .000 .267 .343 .433 .003 .000 .260 .469 .438 .677


CHAPTER

H

J

K

L

M

N

P

Unstandardised Coefficients B Std. Error SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF

MODEL 1 Standardised Coefficients Beta

t

Sig.

-107.971 109.861

15.689 15.433

.478

-6.882 7.118

.000 .000

-101.516 108.572

38.291 37.667

.215

-2.651 2.882

.009 .004

-103.229 111.110

30.055 29.583

.277

-3.435 3.756

.001 .000

408.851 -755.281

3722.224 3667.816

-.016

.110 -.206

.913 .837

-120.304 126.624

37.988 37.415

.254

-3.167 3.384

.002 .001

-119.124 130.447

65.414 64.461

.158

-1.821 2.024

.070 .045

-171.614 167.842

25.542 25.117

.456

-6.719 6.682

.000 .000

Unstandardised Coefficients B Std. Error 226.562 49.675 2.848 42.746 -28.973 21.581 23.307 22.244 21.249 5.488 22.436 4.424 14.171 4.147 -25.846 53.158 20.795 54.792 13.991 13.519 68.210 10.897 18.427 10.215 -81.924 40.411 83.285 41.668 2.909 10.388 60.210 8.439 -2.382 7.736 3453.254 5522.949 -3656.911 5690.516 1263.665 1394.346 -3326.148 1167.291 52.641 1054.375 -118.051 53.794 118.957 55.449 13.735 12.355 10.276 -26.258 88.081 24.875 90.791 24.832 22.451 160.619 23.807 -5.131 16.888 -119.802 37.571 109.219 38.724

MODEL 2 Standardised Coefficients Beta .339 .005 .101 .362 .315 .219 .041 .109 .436 .130 .207 .028 .475 -.021 -.077 .105 -.220 .004 .239 -.045 .379 .034 .030 .118 .472 -.022 .297

t

Sig.

4.561 .067 -1.343 1.048 3.872 5.072 3.417 -.486 .380 1.035 6.260 1.804 -2.027 1.999 .280 7.134 -.308 .625 -.643 .906 -2.849 .050 -2.195 2.145 -.420 5.312 .458 -.298 .274 1.106 6.747 -.304 -3.189 2.820

.000 .947 .181 .296 .000 .000 .001 .627 .705 .302 .000 .073 .044 .047 .780 .000 .759 .533 .521 .366 .005 .960 .030 .033 .675 .000 .648 .766 .784 .270 .000 .762 .002 .005


CHAPTER

Q

R

S

Unstandardised Coefficients B Std. Error LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL (Constant) Staff MFF LONDON LOCATION SPECIALIST HOSPITAL TEACHING HOSPITAL

MODEL 1 Standardised Coefficients Beta

t

Sig.

-147.885 149.338

29.522 29.041

.366

-5.009 5.142

.000 .000

-94.090 91.386

21.687 21.370

.312

-4.339 4.276

.000 .000

-191.378 196.997

28.522 28.057

.473

-6.710 7.021

.000 .000

Unstandardised Coefficients B Std. Error 9.876 9.541 29.609 7.880 16.571 7.211 -34.678 41.443 24.860 42.717 31.717 10.540 45.047 8.496 11.458 7.964 -28.861 31.286 19.065 32.235 15.664 7.892 20.328 6.463 18.966 5.974 -95.848 37.036 89.870 38.175 24.711 9.419 61.202 7.592 8.354 7.117

MODEL 2 Standardised Coefficients Beta .105 .254 .160 .061 .305 .356 .100 .065 .211 .224 .235 .216 .233 .474 .071

t

Sig.

1.035 3.757 2.298 -.837 .582 3.009 5.302 1.439 -.922 .591 1.985 3.146 3.175 -2.588 2.354 2.624 8.061 1.174

.302 .000 .023 .404 .561 .003 .000 .152 .358 .555 .049 .002 .002 .011 .020 .010 .000 .242

a Dependent Variable: % Distance


CHAPTER 12. ANALYSIS FOR ALL-STAFF DATABASE
This chapter brings together the best staffing and workload measures that have been built throughout the research project, described as the 'all-staff database'. We have used it to conduct a benchmarking exercise that (a) tries to estimate avoidable costs by examining the scope for efficiency gains based on price (pay) and volume (productivity) comparisons, and (b) generates an index based on specific costs that can be compared with the MFF. It adopts an approach similar to the one used in Chapter 9 for the England ward nursing analysis (HCC data set). Regression modelling is applied to the all-staff database to consider variation in the price and volume of staff between trusts. It is also used to test the connectedness between the NHS labour market and the staff MFF.

DATA
Staff data is drawn from the medical and non-medical census for September 2004. Standard workload measures have been built up to incorporate the effect of complexity and the volumes of outpatient and A&E attendances, in addition to FCEs. (This is the same workload measure that is applied to medical staff in Chapter 10.) All data is summarised to trust level. Figure 12.1 lists the data sets employed. Figure 12.2 shows how the data definitions were derived.

Figure 12.1 Data Sets Employed
The data was supplied by the Department of Health and related to the year 2004/5:
1. Medical Workforce Census by Trust
2. Non-Medical Workforce Census by Trust
3. Finished Consultant Episodes (FCE) – by class, i.e. Day Case (DC), Elective Inpatient (EI) and Non-Elective Inpatient (NEI), plus Bed Days by Trust
4. Outpatient Attendances by Trust
5. A&E Attendances by Trust
6. FCE (DC, EI, NEI) per HRG per Trust
7. Average National Cost per HRG (DC, EI, NEI)
8. Trust Financial Returns (TFR3A & TFR3B) – Wage Costs per Staff Group
9. The Staff MFF by Trust
10. Trust Type and Location – Clusters
11. NHS Occupation Code Manual

The individual elements were summed into one data set at trust level which contained:
1. WTEs by staff group
2. NHS wage cost by staff group
3. Non-NHS wage cost (agency etc) by staff group
4. FCEs by class plus Bed Days
5. Outpatient attendances
6. A&E attendances
7. Trust Code, Name, Type, Location, Staff MFF

To this data set we appended:
8. The Trust's Complexity Index
9. An index of rurality
10. An index of mean house prices

Figure 12.2 Defining the Staffing and Workload Measures for Each Trust
Dealing with Mismatched Definitions. As there was a mismatch between the trusts' financial returns and their census returns in the definitions of the 'Unqualified nurses', 'HCAs' and 'Ancillary' staff groups, we combined the WTEs and the wage costs of these three groups into one group, 'Unqualified, HCA & Ancillary'.
Calculating a Standard Workload Measure. To permit inter-trust comparisons of staff to workload we created a standardised measure of workload, the 'Complexity Adjusted FCE Equivalent Patient'. This measure was calculated by weighting each trust's outpatient and A&E attendances with the ratios of the trust's average cost per outpatient and A&E attendance to the trust's average cost per FCE:
  FCE-equivalent outpatients = Outpatient attendances x (Average cost per outpatient / Average cost per FCE)
  FCE-equivalent A&E = A&E attendances x (Average cost per A&E attendance / Average cost per FCE)
By summing the above results with the trust's DC, EI and NEI FCEs, a total figure of FCE equivalent patients (FCE EP) for each trust was calculated. To deal with case mix issues this total FCE EP was weighted by the trust's complexity index to provide a complexity adjusted FCE EP measure.
Calculating the Average Cost per Class of FCE. In order to calculate the standard workload measure we calculated each trust's average cost per FCE using the trust's FCEs per HRG and the national average cost per HRG. The total cost per class of FCE was calculated by multiplying each trust's FCEs per HRG by the average national cost for that class and that HRG. The results were summed to give each trust's total cost for Day Case FCEs, Elective Inpatient FCEs and Non-Elective Inpatient FCEs. These total costs were summed to give a total FCE cost, then divided by the trust's total FCEs to give an average cost per FCE for each individual trust.
Calculating the Cost per Outpatient and A&E Attendance. The average cost per outpatient was calculated using each trust's total outpatient cost and total outpatient attendances. The national average cost per A&E attendance, taken from the Reference Costs, was used for each trust.
Calculating the Equivalent WTEs from Agency etc Payments. As the census data exclude agency WTEs, we calculated the equivalent number of WTEs that agency payments would buy by dividing each trust's total agency payment by staff group by the trust's average WTE salary for that staff group, uplifted by 25% to account for agency cost.
Number of Trusts and Exclusions. Due to the lack of complete cost information from Foundation Trusts and obvious data problems from some trusts, data from 127 trusts was used:
• Foundation Trusts. The Foundation Trusts either did not supply, or only partially supplied, financial information and were therefore excluded from the dataset.
• Wage Cost Errors. For some trusts there were obvious mismatches between WTE numbers and reported wage costs, resulting in average salaries of between £100 pa and £1m pa. These trusts were excluded from the data set.
• Other Regression Variables (Quality and Mean House Price). Note that only 119 observations are included in the regression modelling (described in the second part of the chapter) because the quality markers are only available for 121 of the trusts and the mean house price for only 119.
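A minimal sketch of the workload and agency conversions is given below. It follows the definitions in Figure 12.2, but the function names and the single-trust example figures are assumptions introduced for illustration.

```python
# Sketch of the "complexity adjusted FCE equivalent patient" measure and the
# agency WTE conversion (Figure 12.2).  Names and figures are illustrative.

def fce_equivalent_workload(fces, outpatients, ae_attendances,
                            cost_per_fce, cost_per_op, cost_per_ae,
                            complexity_index):
    # Outpatient and A&E attendances are converted to FCE equivalents by
    # scaling with their cost relative to the trust's average cost per FCE.
    fce_eq_op = outpatients * (cost_per_op / cost_per_fce)
    fce_eq_ae = ae_attendances * (cost_per_ae / cost_per_fce)
    fce_eq_patients = fces + fce_eq_op + fce_eq_ae
    # Case mix is handled by weighting the total with the trust's complexity index.
    return fce_eq_patients * complexity_index


def agency_wte_equivalent(agency_spend, average_wte_salary, uplift=0.25):
    # Census data exclude agency staff, so agency spend is converted to WTEs
    # using the group's average salary uplifted by 25% for agency cost.
    return agency_spend / (average_wte_salary * (1 + uplift))


# Illustrative trust (figures invented):
workload = fce_equivalent_workload(
    fces=60_000, outpatients=250_000, ae_attendances=90_000,
    cost_per_fce=1_500.0, cost_per_op=120.0, cost_per_ae=90.0,
    complexity_index=1.05,
)
agency_wte = agency_wte_equivalent(agency_spend=2_400_000, average_wte_salary=32_000.0)
print(round(workload), round(agency_wte, 1))
```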


The Number of Trusts and Exclusions                 Number of trusts
Total dataset                                       173
Exclude Foundation Trusts (no cost data)            (25)
Exclude wage cost error trusts                      (21)
Total trust data used                               127

BENCHMARKING APPROACH – PRICE & VOLUME

Using the price and volume variance analysis approach we calculate a number of specific cost indices from the All Staff Database and compare these with the Staff MFF index. The objective of the exercise is to consider the extent of any spatial variation in wage costs 47 and whether this variation maps the movement in the Staff MFF index.

The Price Variance
The price variance was calculated using the average annual wage cost per WTE (Table 12.1 summarises by staff group, with further details in Appendix 12.1). The table shows an overall uplift of 19%, similar to the results of the HCC nursing analysis (Chapter 9), which identified an 18% uplift between quintiles 1 and 5, and to the micro-study payroll analysis (Chapter 5), where we found a 22% uplift in wage costs between London and non-London trusts (n=9).

Table 12.1 Average Annual Wage Cost (£'000s) per WTE

Quintile    Cons & career   Junior    Qual      ST&T     A&C +    Ancillary, HCA    Total    Movement
            grade doctors   grades    nurses             Mang     & unqual nurse    staff    as % of Q1
1              121.0          58.6     31.0      28.9     24.8        16.1           32.0        -
2              123.4          60.8     30.6      28.0     24.6        16.2           32.0      -0.07%
3              117.7          58.1     30.8      28.7     26.1        16.3           32.2       0.89%
4              121.7          61.2     32.3      30.7     28.0        16.3           34.7       7.57%
5              120.3          64.0     35.5      34.8     30.8        17.8           38.2      10.97%
Average        120.9          60.8     32.1      30.2     26.8        16.5           33.8        -
Q1 to Q5       -0.59%          9.30%   14.49%    20.56%   24.04%      10.47%         19.36%    19.36%

The trust financial returns do not break the wage cost down into its component parts. However, drawing on the previous analyses, we are able to infer that the uplift is made up (in order of importance) of London Weighting, higher basic pay for "London economy labour" (non-clinical staff) and higher grades for more complex workload (clinical staff), plus an element of grade drift. The payroll analysis indicated that London Weighting and higher basic pay accounted for c. 18% of the uplift. If we assume that this holds for the national dataset, then we are left with c. 1.4% of grade drift. This would be consistent with the HCC analysis, where we estimated

47 The data does not contain any information on indirect employment costs, i.e. recruitment, induction and turnover management costs. Examination of trust general ledgers (unreported data from the micro study), however, suggests that these costs represent only a small proportion of a trust's total wage bill.

grade drift (not associated with workload complexity) to account for a 1.3% differential in wages. As in the previous analysis, we interpret the price uplift to be an unavoidable response to the labour markets in which the trusts operate. This is a significant assumption, as the price variance is a major contributor to the overall variance between geographical areas. In view of this, we go on to test the sensitivity of the price variance by flexing the proportion of the variance deemed to be unavoidable.

The Volume Variance
To calculate the volume variance we analysed the number of WTEs per complexity adjusted FCE equivalent patient ("the productivity ratio") by trust type.

Weighting the WTE Figures
As the productivity ratio reviews the aggregated picture per trust, we have attempted to standardise the WTE figures by weighting them according to staff group. This is by no means a precise science, and the analysis is repeated using unweighted WTE figures, with similar results, in Appendix 12.2. The weights applied to the WTE figures were derived from two sources. Weights for the various doctor grades were supplied by the Department of Health and represent an aggregate score combining:

1. Intensity of patient episodes in time worked
2. Skill, expertise and autonomy
3. Contracted time spent in service delivery

The remaining staff groups were weighted by scaling the national average salary for the staff group against that of career grade (consultant and non-consultant) medical staff. (For example, from Table 12.1 the weighting for qualified nurses is calculated as 32.1/120.9 = 0.27.) Table 12.2 summarises the weights applied; a worked illustration of how the weights feed into the productivity ratio follows the table.

Table 12.2 Weights applied to the WTE figures for each Trust

Staff group                                      Weighting
Consultant                                         1.00
Associate Specialist & Staff Grade                 0.76
Hospital Practitioner & Clinical Assistant         0.50
Registrars                                         0.47
Senior House Officer                               0.12
House Officer                                      0.05
Qualified Nurse                                    0.27
ST&T                                               0.25
A&C + Management                                   0.22
Ancillary, HCAs & Unqualified Nurses               0.14
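The short Python sketch below shows how these weights are applied in practice: each staff group's WTEs are multiplied by its weight and the total is expressed per 1,000 complexity adjusted FCE equivalent patients. The staffing and workload figures are invented for illustration only.

    # Weighted productivity ratio sketch using the Table 12.2 weights.
    # The WTE and workload figures below are illustrative, not study data.
    WEIGHTS = {
        "consultant": 1.00, "assoc_spec_staff_grade": 0.76,
        "hosp_practitioner_clin_asst": 0.50, "registrar": 0.47,
        "senior_house_officer": 0.12, "house_officer": 0.05,
        "qualified_nurse": 0.27, "st_and_t": 0.25,
        "a_and_c_management": 0.22, "ancillary_hca_unqualified": 0.14,
    }

    def weighted_wte_per_1000(wtes_by_group, comp_adj_fce_ep):
        """Weighted WTEs per 1,000 complexity adjusted FCE equivalent patients."""
        weighted = sum(WEIGHTS[g] * wte for g, wte in wtes_by_group.items())
        return 1000 * weighted / comp_adj_fce_ep

    example_wtes = {"consultant": 250, "assoc_spec_staff_grade": 40,
                    "hosp_practitioner_clin_asst": 10, "registrar": 180,
                    "senior_house_officer": 150, "house_officer": 60,
                    "qualified_nurse": 1400, "st_and_t": 600,
                    "a_and_c_management": 700, "ancillary_hca_unqualified": 900}
    print(round(weighted_wte_per_1000(example_wtes, comp_adj_fce_ep=150_000), 2))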

Table 12.3 describes the number of WTEs per complexity adjusted FCE equivalent patient by trust type. It represents Scenario A, in which the entire current volume variance between quintiles is assumed to be unavoidable. It shows a 15.5% uplift between quintiles 1 and 5, with half of this (7.8%) located in the movement between quintiles 1 and 2.


Table 12.3 Volume Differences in the Number of WTEs per Complexity Adjusted FCE Equivalent Patient – Scenario A (All Unavoidable)

Q   Type          Complexity Adj    WTE per Comp    % of FCE EP     WTE per    Movement as %
                  FCE EP            Adj FCE EP      per Quintile    FCE EP     of Quintile 1
1   Acute         3,286,816            7.05          100.00%         7.05
    Specialist            -               -            0.00%            -
    Teaching              -               -            0.00%            -
    Total         3,286,816            7.05          100.00%         7.05
2   Acute         2,153,994            7.27           59.11%         4.29
    Specialist      164,367            8.45            4.51%         0.38
    Teaching      1,325,816            8.02           36.38%         2.92
    Total         3,644,177                          100.00%         7.60         7.8%
3   Acute         2,173,425            6.91           75.39%         5.21
    Specialist        3,481           18.80            0.12%         0.02
    Teaching        706,048            9.35           24.49%         2.29
    Total         2,882,954                          100.00%         7.52        -1.0%
4   Acute         2,353,911            7.47           70.89%         5.30
    Specialist      156,095            9.15            4.70%         0.43
    Teaching        810,408            8.21           24.41%         2.00
    Total         3,320,414                          100.00%         7.73         3.0%
5   Acute         1,834,057            7.51           62.25%         4.68
    Specialist      111,858            8.30            3.80%         0.32
    Teaching      1,000,504            9.26           33.96%         3.14
    Total         2,946,419                          100.00%         8.14         5.7%

Movement Q1 to Q5                                                                15.5%

As with the earlier analysis relating to the HCC data set (Chapter 9) the assumption of (A) the entire volume variance being unavoidable is loosened by adjusting the base figures to reflect (B) the average performance by trust type and (C) the best performance by trust type. (Within this analysis specialist trusts are also adjusted. Although they represent a range of very different types of trust their overall impact is marginal.) The average productivity ratio for acute trusts is 7.22, for specialist trusts 8.74, and for teaching trusts 8.63. Table 12.4 shows the shift in the volume variance when the workload ratios are adjusted to the average level for each trust type. The volume variance falls from 15.5% to 7.4%. This reflects Scenario B’s assumption that above-average ratios are avoidable and essentially only volume variances caused by differences in trust type are unavoidable. This is a rather stringent assumption as it eliminates the possibility of unavoidable spatial factors that could impact upon trusts’ performance. For example, it assumes that temporary staff (bank and agency) are as effective as full time employees. It nevertheless provides a useful benchmark in the modelling process.


Table 12.4 Volume Differences in the Number of WTEs per Complexity Adjusted FCE Equivalent Patient – Scenario B (Adjusted to AVERAGE Performance)

Q   Type          Complexity Adj    WTE per Comp    % of FCE EP     WTE per    Movement as %
                  FCE EP            Adj FCE EP      per Quintile    FCE EP     of Quintile 1
1   Acute         3,286,816            7.22          100.00%         7.22
    Specialist            -               -            0.00%            -
    Teaching              -               -            0.00%            -
    Total         3,286,816               -          100.00%         7.22
2   Acute         2,153,994            7.22           59.11%         4.27
    Specialist      164,367            8.74            4.51%         0.39
    Teaching      1,325,816            8.63           36.38%         3.14
    Total         3,644,177                          100.00%         7.80         8.1%
3   Acute         2,173,425            7.22           75.39%         5.44
    Specialist        3,481            8.74            0.12%         0.01
    Teaching        706,048            8.63           24.49%         2.11
    Total         2,882,954                          100.00%         7.57        -3.2%
4   Acute         2,353,911            7.22           70.89%         5.12
    Specialist      156,095            8.74            4.70%         0.41
    Teaching        810,408            8.63           24.41%         2.11
    Total         3,320,414                          100.00%         7.64         0.9%
5   Acute         1,834,057            7.22           62.25%         4.49
    Specialist      111,858            8.74            3.80%         0.33
    Teaching      1,000,504            8.63           33.96%         2.93
    Total         2,946,419                          100.00%         7.76         1.7%

Movement Q1 to Q5                                                                 7.4%

Table 12.5 describes the final model where each type of trust is adjusted to the best performance ratio for their group (Scenario C). Acute trusts are adjusted to 6.91 (matching acute trusts of quintile 3), specialist trusts to 8.3 (matching quintile 5) and teaching trusts are adjusted to 8.02 (matching quintile 2). By adjusting the productivity ratios to the best performance, the overall volume variance falls to 6.2%, the assumption being that any distance from the best is avoidable.


Table 12.5 Volume Differences in the Number of WTEs per Complexity Adjusted FCE Equivalent Patient – Scenario C (Adjusted to BEST Performance)

Q   Type          Complexity Adj    WTE per Comp    % of FCE EP     WTE per    Movement as %
                  FCE EP            Adj FCE EP      per Quintile    FCE EP     of Quintile 1
1   Acute         3,286,816            6.91          100.00%         6.91
    Specialist            -               -            0.00%            -
    Teaching              -               -            0.00%            -
    Total         3,286,816            7.05          100.00%         6.91
2   Acute         2,153,994            6.91           59.11%         4.08
    Specialist      164,367            8.30            4.51%         0.37
    Teaching      1,325,816            8.02           36.38%         2.92
    Total         3,644,177                          100.00%         7.38         6.8%
3   Acute         2,173,425            6.91           75.39%         5.21
    Specialist        3,481            8.30            0.12%         0.01
    Teaching        706,048            8.02           24.49%         1.96
    Total         2,882,954                          100.00%         7.18        -2.8%
4   Acute         2,353,911            6.91           70.89%         4.90
    Specialist      156,095            8.30            4.70%         0.39
    Teaching        810,408            8.02           24.41%         1.96
    Total         3,320,414                          100.00%         7.25         0.9%
5   Acute         1,834,057            6.91           62.25%         4.30
    Specialist      111,858            8.30            3.80%         0.32
    Teaching      1,000,504            8.02           33.96%         2.72
    Total         2,946,419                          100.00%         7.34         1.4%

Movement Q1 to Q5                                                                 6.2%

The preceding analysis has shown a price variance of 19.36% and three differing volume variances: (A) the 15.5% unadjusted variance (all unavoidable), (B) the 7.4% variance adjusted to average performance (ratios above average are avoidable) and (C) the 6.2% variance adjusted to best performance (ratios above the best are avoidable).

Table 12.6 Workload Ratios

           Unadjusted                 Adjusted to average          Adjusted to best
Q     WTEs per 1,000   Uplift     WTEs per 1,000   Uplift      WTEs per 1,000   Uplift
      Comp Adj FCE EP  as % of Q1 Comp Adj FCE EP  as % of Q1  Comp Adj FCE EP  as % of Q1
1          7.05          0.0%          7.22          0.0%           6.91          0.0%
2          7.60          7.8%          7.80          8.1%           7.38          6.8%
3          7.52         -1.0%          7.57         -3.2%           7.18         -2.8%
4          7.73          3.0%          7.64          0.9%           7.25          0.9%
5          8.14          5.7%          7.76          1.7%           7.34          1.4%
Q1 to Q5                15.5%                        7.4%                         6.2%
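The scenario adjustments behind Tables 12.3 to 12.6 reduce to a simple weighted-average calculation, sketched below in Python; the quintile 2 figures hard-coded here are those shown in Table 12.3, while the function and layout are assumptions of the illustration.

    # Sketch of the Scenario A/B/C volume-variance adjustment. Each quintile is
    # summarised as {trust type: (FCE EP, WTE per 1,000 comp adj FCE EP)}.
    Q2 = {"acute": (2_153_994, 7.27), "specialist": (164_367, 8.45),
          "teaching": (1_325_816, 8.02)}                     # from Table 12.3

    AVERAGE = {"acute": 7.22, "specialist": 8.74, "teaching": 8.63}   # Scenario B
    BEST    = {"acute": 6.91, "specialist": 8.30, "teaching": 8.02}   # Scenario C

    def quintile_ratio(quintile, override=None):
        """Weight each trust type's ratio by its share of the quintile's workload."""
        total_fce_ep = sum(fce for fce, _ in quintile.values())
        return sum((fce / total_fce_ep) * (override[t] if override else ratio)
                   for t, (fce, ratio) in quintile.items())

    print(round(quintile_ratio(Q2), 2))            # ~7.60, Scenario A (unadjusted)
    print(round(quintile_ratio(Q2, AVERAGE), 2))   # ~7.80, Scenario B
    print(round(quintile_ratio(Q2, BEST), 2))      # ~7.38, Scenario C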


Calculating the SCA Indices
By combining the price and volume variances we can create three indices which may be compared with the Staff MFF index. The indices are calculated around a centre point, as this allows us to view the overall impact of each SCA index. Tables 12.7 to 12.9 describe the resulting three indices.

Table 12.7 Unadjusted Volume – Index SCA A

Q        Price      Volume     Combined   Index    Staff MFF   Inverted   SCA A
         Variance   Variance                                   Index      Index
1         0.00%       0.0%       0.0%     0.890      0.890      0.878     0.884
2        -0.07%       7.8%       7.7%     0.958      0.944      0.946     0.952
3         0.89%      -1.0%      -0.2%     0.956      0.985      0.944     0.950
4         7.57%       3.0%      10.8%     1.052      1.056      1.039     1.046
5        10.97%       5.7%      17.3%     1.207      1.191      1.191     1.199
1 to 5   19.36%      15.5%      35.7%     35.7%      33.9%      35.7%     35.7%

Table 12.8 Volume Adjusted to Average for Type – Index SCA B

Q        Price      Volume     Combined   Index    Staff MFF   Inverted   SCA B
         Variance   Variance                                   Index      Index
1         0.00%       0.0%       0.0%     0.890      0.890      0.938     0.914
2        -0.07%       8.1%       8.0%     0.960      0.944      1.013     0.987
3         0.89%      -3.2%      -2.4%     0.939      0.985      0.990     0.965
4         7.57%       0.9%       8.6%     1.016      1.056      1.071     1.043
5        10.97%       1.7%      12.8%     1.130      1.191      1.191     1.161
1 to 5   19.36%       7.4%      27.0%     27.0%      33.9%      27.0%     27.0%

Table 12.9 Volume Adjusted to Most Efficient for Type – Index SCA C

Q        Price      Volume     Combined   Index    Staff MFF   Inverted   SCA C
         Variance   Variance                                   Index      Index
1         0.00%       0.0%       0.0%     0.890      0.890      0.947     0.918
2        -0.07%       6.8%       6.7%     0.949      0.944      1.011     0.980
3         0.89%      -2.8%      -2.0%     0.932      0.985      0.992     0.962
4         7.57%       0.9%       8.6%     1.008      1.056      1.073     1.040
5        10.97%       1.4%      12.5%     1.119      1.191      1.191     1.155
1 to 5   19.36%       6.2%      25.8%     25.8%      33.9%      25.8%     25.8%

Tables 12.7 to 12.9 reveal the impact of our increasingly stringent assumptions about the proportion of the volume variance considered to be avoidable, the price variance being held constant. SCA A describes an index where all of the price and volume variances are considered unavoidable, i.e. there are no spatially distributed net inefficiencies within the system. SCA B shows an index where slightly over half the volume variance is assumed to be avoidable, i.e. all trusts of the same type should be capable of operating at the average workload ratio. Finally, SCA C describes an index where, in terms of productivity ratios, each trust is working at its type group's best performance, i.e. all spatially patterned volume variances are considered avoidable.
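The mechanics of combining the two variances into a centred index can be sketched as follows. The centring convention and the input relatives below are assumptions of the illustration (the relatives roughly follow the Scenario A cumulative movements), so the output will not reproduce the report's figures exactly.

    # Hedged sketch: combining quintile price and volume relatives into an
    # SCA-style index and re-centring it. The centring convention here is an
    # assumption for illustration; the report's exact construction may differ.
    def sca_index(price_rel, volume_rel):
        """price_rel / volume_rel: per-quintile levels relative to quintile 1."""
        combined = [p * v for p, v in zip(price_rel, volume_rel)]
        centre = sum(combined) / len(combined)       # simple mid-point centring
        return [round(c / centre, 3) for c in combined]

    price_rel  = [1.000, 0.999, 1.008, 1.084, 1.194]   # cumulative price relatives (illustrative)
    volume_rel = [1.000, 1.078, 1.068, 1.098, 1.155]   # cumulative volume relatives (illustrative)
    print(sca_index(price_rel, volume_rel))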


Figure 12.3 and Table 12.10 reveal the relationship between the current Staff MFF and each of these SCA index constructions. There is a strikingly close match. SCA A reveals a steeper gradient 48 (35.7%) than the MFF (33.9%), starting at 0.6% below the current index and finishing 0.6% higher. The gradient diminishes as the volume assumptions are tightened: SCA B starts 2.7% higher at quintile 1 and ends 2.6% lower at quintile 5; SCA C is 3.2% higher in quintile 1 and 3% lower at quintile 5.

Figure 12.3 Staff MFF & SCA Indices A, B & C
[Line chart: Index (0.800 to 1.250) by quintile (1 to 5), plotting SCA A, SCA B, SCA C and the Staff MFF]

Table 12.10 Comparison of SCA Indices with MFF – Price Variance 100% Unavoidable

Quintile   Staff MFF   SCA A   Variance   SCA B   Variance   SCA C   Variance
                               to MFF             to MFF             to MFF
1            0.890     0.884    -0.6%     0.914     2.7%     0.918     3.2%
2            0.944     0.952     0.9%     0.987     4.5%     0.980     3.8%
3            0.985     0.950    -3.5%     0.965    -2.0%     0.962    -2.4%
4            1.056     1.046    -0.9%     1.043    -1.2%     1.040    -1.4%
5            1.191     1.199     0.6%     1.161    -2.6%     1.155    -3.0%
1 to 5       33.9%     35.7%              27.0%              25.8%

(where gradient '1 to 5' = Q5/Q1 - 1)

Which is the Most Representative Volume Variance? A degree of judgement is required in assessing which volume variance reflects a plausible balance between avoidable and unavoidable cost differentials. The two extremes, Scenarios A and C, are judged to be unrealistic since they characterise productivity differentials as being (A) entirely unavoidable or (C) entirely avoidable. The average (B) is also rejected, informed by evidence elsewhere (e.g. Chapter 6) that labour markets differ in their exposure to turnover rates, prompting reliance on labour substitutes (bank and agency), which is likely to induce some unavoidable productivity differentials.

48 Where gradient = quintile 5 index divided by quintile 1 index, minus 1.

On this basis, a realistic estimate of unavoidable spatial volume variance is pitched between Scenarios A and B, generating a fourth index, SCA D (Table 12.11), based on the average of Scenarios A and B. The unavoidable volume variance is estimated to be +11.4% between quintiles 1 and 5, which is around three quarters of the current variance (Scenario A) of 15.5%. The implication is that around a quarter of the current volume variance is judged to be avoidable.

Flexing the Price Variance Assumption A similar approach is applied to the price variance. Earlier analysis of payroll data (micro study, Chapter 5) indicated that Geographical allowances (London Weighting & COLS) account for approximately 12% 49 of spatial differences in pay costs and that higher basic pay in London trusts, especially for the A&C, Management and ST&T staff groups, where competition for staff is very high, accounts for approximately 6%. If we assume that 10% of the total 19.4% price variance is avoidable, the resulting unavoidable variance of 17.5% can be interpreted as a proxy for restricting the price variance to these effects whilst assuming other influences are avoidable.
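The flexing described above amounts to simple arithmetic on the quintile 1-to-5 variances, sketched below; only the two headline inputs (the 19.36% price variance and the Scenario A and B volume variances) come from the tables above, and everything else is an assumption of the illustration.

    # Sketch of the SCA D flexing arithmetic described above.
    price_variance_q1_to_q5 = 0.1936          # all-staff price variance, Table 12.1
    volume_a, volume_b = 0.155, 0.074         # Scenario A and B volume variances

    flexed_price  = 0.90 * price_variance_q1_to_q5   # ~17.4% treated as unavoidable
    flexed_volume = (volume_a + volume_b) / 2        # midway between A and B; the
                                                     # report quotes 11.4% from unrounded inputs

    print(f"unavoidable price variance:  {flexed_price:.1%}")
    print(f"unavoidable volume variance: {flexed_volume:.1%}")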

SCA D – The Representative SCA Index
The flexed price variance is combined with the selected volume variance to produce SCA D. It follows a similar contour to that of the MFF, with a marginally higher minimum and lower maximum position. Table 12.11 and Figure 12.4 describe index SCA D, which starts 1.8% higher at quintile 1 and ends 1.7% lower at quintile 5.

Figure 12.4 Staff MFF & SCA D
[Line chart: Index (0.800 to 1.250) by quintile (1 to 5), plotting the Staff MFF and SCA D]

49 Geographical allowances run at c. 9%, with a flow-through effect on overtime, other allowances, bank and agency costs and employer on-costs.

Table 12.11 Price Variance at 90% and Volume Variance at Average of A & B – SCA D

Q        Price      Volume     Combined   Index    Staff MFF   Inverted   SCA D    Variance
         Variance   Variance                                   Index      Index    to MFF
1         0.00%       0.0%       0.0%     0.890      0.890      0.921     0.905      1.8%
2        -0.06%       7.9%       7.9%     0.959      0.944      0.993     0.976      3.4%
3         0.80%      -2.1%      -1.4%     0.947      0.985      0.981     0.964     -2.1%
4         6.81%       2.0%       8.9%     1.026      1.056      1.063     1.045     -1.0%
5         9.87%       3.7%      13.9%     1.151      1.191      1.191     1.171     -1.7%
1 to 5   17.43%      11.4%      29.3%     29.3%      33.9%      29.3%     29.3%

("1 to 5" describes Q5/Q1 - 1)

REGRESSION MODELLING APPROACH
The arithmetic approach above has summarised trust data into quintiles and analysed it around the mid-point of each quintile range. The regression approach in the following section does not summarise into quintiles and so allows us to observe the full minimum-to-maximum range of the MFF. The aim of the regression analysis is to understand more about the avoidable and unavoidable factors that underlie price and volume variations between trusts. It goes on to exploit the data more fully to consider how well the Staff MFF is patterned against, or correlated with, NHS labour market factors. Detailed results are tabulated at the end of the chapter. Building a robust model (i.e. a statistically specified equation) is complicated both by the high level of inter-correlation already remarked upon and by the relatively fine gradations between trusts' MFF values.

Price & Volume
The first approach is to look at variations in total wage costs and the extent to which these can be related to unavoidable, avoidable and random factors. Having discussed a large number of variants, we have decided that the following is perhaps the most appropriate:

• Unavoidable: size variables ("SIZE"), e.g. number of beds and FCEs, and dummy variables showing whether the hospital has acute, specialist or teaching status ("TYPE").

• Avoidable: the extent to which other factors affect volume or price variation in the Staff MFF.

• Within the analysis we also review location variables (percentage urban, London/non-London, Staff MFF index) in relation to the residuals generated by the SIZE/TYPE model.

We consider how these factors account for volume variance by measuring their impact on (a) total WTE per occupied bed and (b) total WTE per complexity adjusted FCE equivalent patient. The analysis presented here is based on total staff. The general conclusions are:

(a) When WTE per occupied bed is the dependent variable, most of the variance (67%) is accounted for by the SIZE and TYPE factors;

(b) When the dependent variable is WTE per complexity adjusted FCE equivalent patient, the size and the specialist and teaching status dummies account for a smaller proportion of the variance (28%); the Staff MFF has a statistically significant effect, accounting for 13% of the variance (Tables 12.19, 12.20(a) and 12.20(b) at the end of the chapter).

We consider how the unavoidable factors account for the price variance with reference to two further ratios: (c) unit labour cost and (d) total wage cost per WTE. The overall results are:

(c) When the dependent variable is unit labour cost, the size and the specialist and teaching status dummies account for 33%, and the Staff MFF and other location factors have a modest (7%) but statistically significant effect (Tables 12.21 and 12.22).

(d) When the dependent variable is total wage cost per WTE, the size and the specialist and teaching status dummies account for 26%, and the Staff MFF and other location factors have a strong effect (28%) with strong statistical significance. This measure excludes volume (Tables 12.23 and 12.24).

The benchmarking approach earlier in this chapter, using judgement, assumed that a larger percentage of the variation, i.e. 75% of the volume variance and 90% of the price variance, could be described as unavoidable. Reviewing the results from the regression models that most closely relate to the benchmarking analysis ((b) and (d) above), we find that they allocate a smaller proportion of the variation in these ratios to our chosen unavoidable factors. The results may be summarised as follows:

Table 12.12 Summary of Variation Explained by Regression Models

                                      Volume variation   Price variation
Size and Type                               28%               26%
Staff MFF & other location factors           9%               28%
Total                                       37%               54%

At first glance it seems surprising that the volume variance is so poorly explained by the regression model factors, against the comparative certainty of the benchmarking exercise. A plausible explanation lies in the degree of aggregation: the regression deals with variation between individual trusts, whereas the benchmarking deals with variation between quintiles. The implication is that there is considerable variation in productivity between hospitals within individual quintiles. We saw this in Chapter 11 in relation to HRG costs, where trust-level performance was uneven against the MFF (R squared = 31%) whereas at the broader spatial (quintile) level there was a close match between the MFF and cost variation.
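The aggregation point can be illustrated with a small self-contained simulation: trust-level noise largely cancels when trusts are averaged into quintiles, so the quintile-level fit is much tighter than the trust-level fit. The code below uses synthetic data only and is not an analysis of the study dataset.

    # Toy illustration of why quintile-level benchmarking looks tighter than
    # trust-level regression. Synthetic data only.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 127
    mff = np.sort(rng.uniform(0.89, 1.19, n))                          # trust MFF values
    ratio = 7.0 + 4.0 * (mff - mff.mean()) + rng.normal(0, 0.8, n)     # noisy productivity ratio

    def r_squared(x, y):
        return np.corrcoef(x, y)[0, 1] ** 2

    quintiles = np.array_split(np.arange(n), 5)        # five roughly equal groups by MFF
    mff_q   = np.array([mff[idx].mean() for idx in quintiles])
    ratio_q = np.array([ratio[idx].mean() for idx in quintiles])

    print(round(r_squared(mff, ratio), 2))       # trust level: modest fit
    print(round(r_squared(mff_q, ratio_q), 2))   # quintile level: substantially higher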


Staff MFF & Connectedness with the NHS – MFF as Dependent Variable
The second approach asks whether the Staff MFF is geographically patterned in an appropriate way. It uses a range of variables that we know are connected to the NHS, e.g. pay and turnover, and tests the extent to which they are correlated with the MFF. The aim is not to ask what causes the MFF (since nothing within the NHS causes it) but to investigate the connectedness between the NHS and general labour market factors. It addresses the question of whether private sector wage differentials, used to construct the Staff MFF through the GLM method, adequately reflect the NHS labour market environment, and corresponds to the "MFF as Dependent Variable" approach used alongside the HCC nursing data set in Chapter 9.

The current Staff MFF is based on an analysis of the costs of employing labour in the private sector; the procedure followed in deriving the Staff MFF adjustment has been to search for comparable occupations, their total costs, and how those costs vary across the country. Unsurprisingly, it has been difficult for the NHS to understand why this should be relevant to it. The previous 'arithmetic' analysis has shown very clearly that a price and volume analysis possesses spatial patterning and can be used to reproduce a staffing index that closely resembles the Staff MFF (together with some assumptions about avoidable and unavoidable costs). Nevertheless, it has been an analysis based on averages at the mid-points of the five quintiles. It is therefore important to examine the extent to which the same analysis holds up when all the data are taken into account through a multivariate analysis. The purpose of the exercise here is therefore twofold:

• To replicate the arithmetical analysis, and in particular to see whether the same overall trends appear and to examine the extent to which there are variations around those trends;

• To examine the extent to which characteristics of NHS trust employment and financial behaviour that might be affected by variations in the local labour market are related to the Staff MFF, or the extent to which variations in the Staff MFF can be related to the financial management characteristics of trusts.

In both cases we are therefore looking at a limited set of variables. For the first, it is essentially the volume and price variations in different types of hospital inside and outside London. The volume and price variations are measured through the following two variables (it is important to remember that these are available for only 127 trusts):

• Average wage cost per WTE (including agency)
• WTEs per 1,000 complexity adjusted FCE equivalent patients

For the second purpose, NHS employers will respond to different local labour markets by changes in pay and productivity in their trusts and so we can use the same two variables. It is also plausible to argue that the turnover and vacancy rates in NHS organisations will be influenced by similar characteristics in the local labour market. The only variable easily available is the nurse turnover rate and that has been included. Equally, one would expect employers in organisations of different size to behave differently and we have chosen bed-days as a reasonable proxy for size.


In a tight labour market, and with limited budgets and financial restrictions, one might expect employers to respond by sacrificing quality. Clearly, in the NHS, this would be a contentious issue, but we have included two of the quality variables as possible predictors. Finally, a modifier of size – the number of finished consultant episodes – has also been included.

Correlations
One of the major problems in this area is the high inter-correlation between the variables. It can be seen from Table 12.13 that the Staff MFF is very highly correlated with the average wage cost per WTE (.766) but also with turnover (.515); that mean house price is very highly correlated with the Staff MFF but also with average wage cost per WTE and with turnover; and bed days with activity. The latter is obvious – by definition – but the former inter-correlations show that the size of the coefficients will depend crucially on the order in which the variables are entered into the model.

Table 12.13 Correlations between Variables
(A Staff MFF; B Bed days; C Average wage cost per WTE inc agency; D Total WTEs per 1,000 complexity adjusted FCE equivalent patients; E Turnover; F Key Targets Average; G All Average; H FCEs; I TAFCE_SQ; J Mean house price)

        B        C        D        E        F        G        H        I        J
A    -.135    .766**    .120    .515**   -.118    -.134   -.227*   -.129    .725**
B       1     -.074    -.038    -.141    -.105    -.231    .816**   .758**  -.139
C                1      .018     .345**  -.163    -.145   -.208    -.095    .635**
D                         1      .398**  -.152     .031   -.093     .032   -.014
E                                  1     -.165    -.091   -.214    -.092    .529**
F                                           1      .720**  .003    -.036   -.143
G                                                    1    -.118    -.080   -.171
H                                                             1     .913**  -.239*
I                                                                     1     -.156

* = Sig at 0.10; ** = Sig at .005
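For reference, pairwise correlations of this kind can be reproduced in a few lines of pandas. In the sketch below the data frame `trusts` and its column names are assumptions standing in for the 127-trust dataset (which is not reproduced here); they simply mirror the variables A to J in Table 12.13.

    # Hedged sketch: pairwise Pearson correlations as in Table 12.13.
    import pandas as pd
    from scipy import stats

    cols = ["staff_mff", "bed_days", "avg_wage_cost_per_wte_inc_agency",
            "wtes_per_1000_comp_adj_fce_ep", "turnover", "key_targets_avg",
            "all_avg", "fces", "tafce_sq", "mean_house_price"]

    def correlation_table(trusts: pd.DataFrame) -> pd.DataFrame:
        """Return Pearson correlations with p-values for each pair of columns."""
        rows = []
        for i, a in enumerate(cols):
            for b in cols[i + 1:]:
                r, p = stats.pearsonr(trusts[a], trusts[b])
                rows.append({"var_1": a, "var_2": b,
                             "r": round(r, 3), "p": round(p, 3)})
        return pd.DataFrame(rows)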

Regressions
There are such high levels of inter-correlation in the data that almost any set of independent variables will account for a high percentage of the variation in cost. The approach to the analysis here has been informed both by theory and by an appreciation of the limits of the data to support the theoretical weight of the hypotheses. The variables have been entered in the order described below, looking in turn at pay, productivity, labour market indicators (turnover), followed by hospital performance and type. Mean house price does not constitute a direct NHS response, but it is retained as a final variable to test whether it has any independent effect on the MFF.

1. Bed days 50
2. Average wage cost per WTE including agency
3. Total WTEs per 1,000 complexity adjusted FCE equivalent patients
4. Turnover
5. Key Targets and All Average
6. Activity and activity squared
7. Dummies for specialist and teaching hospitals
8. Mean house price

50 It would be more fitting to enter this variable at Step 6, close to the other size variables. However, it has little impact on the model.

The model summaries for the eight steps in the regression are included in Table 12.14A and the sets of coefficients obtained at Steps 4, 6, 7 and 8 in Table 12.14B. It can be seen that the major factor associated with variations in the Staff MFF is the average wage cost per WTE including agency, followed by turnover. When mean house price is included at the end, it also raises the value of the adjusted R squared. The inclusion of the bed days, activity and 'quality' variables makes very little or no difference. Only three of the variables are ever significant: average wage cost per WTE including agency, nursing turnover and mean house price. The only variable that remains significant in all four runs is the average wage cost per WTE (including agency), although its coefficient drops by over a quarter when mean house price is included. The coefficient on nursing turnover becomes non-significant when mean house price is included.

These results lead us to estimate a parsimonious model including only average wage cost per WTE including agency and nursing turnover. The adjusted R squared is 62% (results in Tables 12.15A and 12.15B). It is only just below that reached at Step 7 of the previous model (i.e. prior to entering mean house price) and the coefficients of the two variables are very similar. The model is well specified (with a RESET test t-value of only 1.427 51). The result leads to the conclusion that employers in NHS organisations in high-MFF areas are responding to signals in the non-public labour market by spending more on their staff and by being forced to accept a higher rate of turnover. It demonstrates a connection between the private sector labour market (measured by the MFF) and the NHS labour market.

DISCUSSION

Spatial Variation
The benchmarking quintile analysis identified spatial variation in both the price and the volume of all staff. Wage costs per WTE in quintile 5 are 19.36% higher than those in quintile 1. The analysis is consistent with the earlier payroll (Chapter 5) and nursing (HCC, Chapter 9) results. Volume (the number of weighted WTEs per complexity adjusted FCE equivalent patient) is 15.5% higher in quintile 5 than in quintile 1. All staff have been weighted in relation to consultant medical staff, based on salaries for non-medical staff and a combination of intensity, expertise and contracted time for medical grades. The volume differential for unweighted staffing (Appendix 12.2) is 9.1%. The higher number of medical staff in the upper quintiles, attracting higher weightings, serves to accentuate the volume differentials.

51 The fact that a very parsimonious model can be developed using only these two indicators prompted a search for a substitute for the average wage cost per WTE including agency that would cover all 173 trusts. Using the average cost of labour (local) variable we achieved a model with an R squared of 66% [R squared 0.619 (Model 1); 0.6563 (Model 2); RESET test 1.087 (t = 2.364)]. It is barely specified, however, indicating that the deterioration in specification between the 127-trust and the 173-trust samples is due to data quality.

Avoidable and Unavoidable Costs
Both benchmarking and regression techniques have been applied to the data set to explore avoidable versus unavoidable cost differences. The benchmarking approach applies a series of 'what if?' assumptions to the volume differential and then generates a price-volume SCA index for comparison with the MFF index. The general finding is that most spatial differences are likely to be unavoidable. The 'representative' SCA index, using what we judged to be plausible assumptions, was very similar to the MFF index, with only a +1.8%/-1.7% difference at either end of the scale. This was based on a scenario in which 75% of the volume variance and 90% of the price variance were described as unavoidable in relation to geography.

The regression analysis, based on 127 individual trusts rather than 5 quintiles, found a less coherent picture, in which hospital size, type and location factors explained only 37% of volume variation and 54% of price variation. The contrast in results highlights the fundamentally different nature of the two approaches. Benchmarking uses judgement to minimise uncertainty and, in this exercise, aggregates 127 trusts to 15 observations (3 hospital types x 5 quintiles). Regression uses judgement only in the selection of variables to be measured and not in producing the results (i.e. it determines input rather than output), and is based here on 127 different observations. It throws a spotlight on uncertainty and demonstrates the elusive nature of distinguishing between avoidable and unavoidable costs.

Feasibility
This data set in some ways offered our best chance of developing a Specific Cost Approach to determining a Market Forces Factor that would reflect unavoidable spatial cost variation (developed further in Appendix 12.3). It is apparent from the statistical results above that this is not a feasible proposition. We examined the connectedness between NHS and general labour market factors by regressing the Staff MFF index against labour cost and turnover data. This produced parsimonious models that described over 60% of the variation in the Staff MFF, demonstrating the reasonableness of using general labour market forces data in an NHS context.

Regression Tables

Table 12.14A Model Summary of Basic Regression

Model     R      R Square   Adjusted R Square   Std. Error of the Estimate
1       .147      .021           .013                 0.0942
2       .772      .595           .588                 0.0608
3       .775      .600           .590                 0.0607
4       .793      .628           .615                 0.0588
5       .796      .634           .614                 0.0589
6       .797      .635           .609                 0.0593
7       .810      .656           .623                 0.0582
8       .848      .719           .689                 0.0528

Predictors are entered cumulatively, with a constant throughout: 1 BED_DAYS; 2 adds Avg Wage Cost Per WTE inc Agency; 3 adds Total WTEs Per 1000 Complexity Adj FCE Equivalent Patients; 4 adds TURNOVER; 5 adds Key Targets Average and All Average; 6 adds TAFCE_SQ and FCEs; 7 adds NTECHING and NSPECIAL; 8 adds Mean House Price.

Table 12.14B Coefficients in Basic Regression

                                              Run 4        Run 6        Run 7        Run 8
(Constant)                                    .184*        .204         .216         .300*
BED_DAYS                                   -3.056E-08   -5.457E-08   -5.355E-08   -5.388E-08
Avg Wage Cost Per WTE inc Agency            2.141E-02*   2.159E-02*   2.162E-02*   1.601E-02*
Total WTEs Per 1000 Comp Adj FCE EP         1.433E-03    2.067E-03    1.998E-03    2.460E-03
TURNOVER                                      .444*        .469*        .400*        .141
Key Targets Average                                      7.207E-02    5.297E-02    3.623E-02
All Average                                                -.128     -8.187E-02   -3.643E-02
FCEs                                                     2.684E-07   -1.478E-07    8.286E-08
TAFCE_SQ                                                -3.814E-13   -2.148E-15   -2.397E-13
NSPECIAL                                                             -5.179E-02   -3.603E-02
NTECHING                                                              2.012E-02    2.130E-02
Mean House Price                                                                   4.831E-07*

Reset test statistic 0.599, t = 1.350 (specified)
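A minimal sketch of the block-wise entry underlying Table 12.14A is given below using statsmodels; the data frame `trusts` and its column names are assumptions standing in for the 127-trust dataset, and the blocks follow the ordering of steps 1 to 8 in the main text.

    # Hedged sketch of the hierarchical (block-wise) OLS entry behind Table 12.14A.
    import pandas as pd
    import statsmodels.api as sm

    BLOCKS = [
        ["bed_days"],
        ["avg_wage_cost_per_wte_inc_agency"],
        ["wtes_per_1000_comp_adj_fce_ep"],
        ["turnover"],
        ["key_targets_avg", "all_avg"],
        ["fces", "tafce_sq"],
        ["nspecial", "nteaching"],
        ["mean_house_price"],
    ]

    def blockwise_summary(trusts: pd.DataFrame) -> pd.DataFrame:
        """Fit OLS of the Staff MFF on cumulatively entered blocks and report fit."""
        y, rows, predictors = trusts["staff_mff"], [], []
        for step, block in enumerate(BLOCKS, start=1):
            predictors += block
            fit = sm.OLS(y, sm.add_constant(trusts[predictors])).fit()
            rows.append({"step": step, "r2": round(fit.rsquared, 3),
                         "adj_r2": round(fit.rsquared_adj, 3)})
        return pd.DataFrame(rows)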

Table 12.15A Model Summary of Parsimonious Regression

Model     R      R Square   Adjusted R Square   Std. Error of the Estimate
1       .770      .593           .589                  .0608
2       .789      .622           .616                  .0587

1 Predictors: (Constant), Avg Wage Cost Per WTE inc Agency
2 Predictors: (Constant), Avg Wage Cost Per WTE inc Agency, TURNOVER
Dependent Variable: Staff MFF

Table 12.15B Coefficients of Parsimonious Regression

Model                                        B (unstd.)   Std. Error   Beta (std.)     t       Sig.
1   (Constant)                                 .190          .064                    2.996    .003
    Avg Wage Cost Per WTE inc Agency         2.426E-02       .002         .770      12.991    .000
2   (Constant)                                 .203          .062                    3.304    .001
    Avg Wage Cost Per WTE inc Agency         2.176E-02       .002         .691      10.953    .000
    TURNOVER                                   .457          .152         .190       3.010    .003


Dependent Variable: Staff MFF
Reset test 1.159, t = 1.427

Table 12.16 Coefficients of Parsimonious Regression with ACLE Local

Model                          B (unstd.)   Std. Error   Beta (std.)     t       Sig.
1   (Constant)                   .226          .047                    4.782    .000
    ACLE (local wages)         2.326E-05       .000         .788      16.709    .000
2   (Constant)                   .245          .045                    5.410    .000
    ACLE (local wages)         2.054E-05       .000         .696      13.909    .000
    TURNOVER                     .460          .109         .211       4.212    .000

Dependent Variable: Staff MFF
R squared 0.619 (Model 1); 0.6563 (Model 2); Reset Test 1.087 (t = 2.364)

Total WTE per Occupied Bed The following regression is with total wte per occupied bed. The specification enters numbers of occupied beds, size (i.e. Complexity adj. FCE Equivalent Patients), and dummies for Specialist and Teaching status Type. These variables account for 67% of the variance between trusts. Table 12.17 Accounting for Variance in WTE per Occupied Bed with Size and Specialist and Teaching Status Dummies: Model Summary Model R R Adjusted R Std. Error of the Estimate Square Square a .346 .120 .112 1.30342 b .624 .389 .373 1.09508 c .832 .692 .672 .79198 a Predictors: (Constant), Occupied Beds (bed days / 365) b Predictors: (Constant), Occupied Beds (bed days / 365), TAFCE_SQ, Complexity Adj FCE Equivalent Patient c Predictors: (Constant), Occupied Beds (bed days / 365), TAFCE_SQ, Complexity Adj FCE Equivalent Patient, NSPECLON, NTECHLON, NTECHNLD, NSPECNLD Dependent Variable: TTWTEPOB The residuals are then used as the dependent variable in another regression where the staff MFF and the proportions rural and town (together with the London/non London dummy) are entered into the equation, followed by the quality variables. The former have no effect, the latter a little. Effectively, this equation/model is exploring the 33% of variation NOT accounted for through the original equation; so the final R squared of 0.055 is 2% of additional variance (0.055*0.33*100). Table 12.18 (A) Residuals from Table 12.17: Model Summary Model R R Adjusted R Std. Error of the Estimate Square Square a .084 .007 -.027 .77526422 b .376 .141 .055 .74380088 a Predictors: (Constant), Proportion rural, Staff MFF, NLONDON, Proportion Town b Predictors: (Constant), Proportion rural, Staff MFF, NLONDON, Proportion Town, Capacity and Capability Focus Average, Clinical Focus Average, Hospital cleanliness, Patient Focus Average, Workforce indicator, Key Targets Average, All Average Table 12.18 (B) Residuals from Table 12.17: Coefficients Unstandardised Coefficients B

Model a

b

(Constant) Staff MFF NLONDON Proportion Town Proportion rural (Constant) Staff MFF

-.105 .161 -4.468E-02 -.749 .670 1.336 .144

Std. Error

1.094 1.089 .270 .955 1.360 1.582 1.083

Standardised Coefficients Beta

.020 -.023 -.141 .085 .018

t

Sig.

-.096 .148 -.166 -.784 .492 .845 .133

.924 .882 .869 .434 .623 .400 .894


NLONDON -8.407E-02 Proportion Town -1.076 Proportion rural 1.169 Workforce indicator -5.397E-02 Key Targets Average -14.356 Clinical Focus Average -12.808 Patient Focus Average -20.784 Capacity and -8.515 Capability Focus Average All Average 55.012 Hospital cleanliness 7.396E-02 Dependent Variable: Unstandardised Residual

.270 .952 1.350 .370 3.752 3.215 5.472 2.366

-.043 -.202 .148 -.016 -2.287 -1.468 -3.040 -1.179

-.311 -1.130 .866 -.146 -3.826 -3.983 -3.798 -3.600

.756 .261 .389 .884 .000 .000 .000 .000

14.514 .416

5.312 .018

3.790 .178

.000 .859

Total WTE per Complexity Adjusted FCE Equivalent Patient The following regression is with total wte per complexity adjusted FCE equivalent patient. The specification enters size, dummies. These variables account for 28% of the variance between trusts. Table 12.19 Accounting for Variance in WTE per complexity Adjusted FCE: Model Summary Model R R Adjusted R Std. Error of the Square Square Estimate a .160 .025 .009 .63076 b .564 .318 .282 .53683 a Predictors: (Constant), TAFCE_SQ, Complexity Adj FCE Equivalent Patient b Predictors: (Constant), TAFCE_SQ, Complexity Adj FCE Equivalent Patient, NTECHLON, NSPECLON, NTECHNLD, NSPECNLD Dependent Variable: Total WTEs (inc Agency) per 1000 Complexity Adj FCE Equivalent Patients If we separate out the addition of MFF and rurality variables from the London dummy, then we obtain the following results, shown in Table 12.20. Essentially, the Staff MFF accounts for 13% of the variance and the geographical factors detract 1% from that explained variance. Table12.20 (A) Residuals from Table 12.19: Model Summary Model R R Square Adjusted R Square Std. Error of the Estimate a .370 .137 .130 .48571157 b .384 .147 .125 .48684486 c .385 .148 .119 .48868043 d .554 .306 .237 .45489510 a Predictors: (Constant), Staff MFF b Predictors: (Constant), Staff MFF, Proportion rural, Proportion Town c Predictors: (Constant), Staff MFF, Proportion rural, Proportion Town, NLONDON d Predictors: (Constant), Staff MFF, Proportion rural, Proportion Town, NLONDON, Capacity and Capability Focus Average, Clinical Focus Average, Hospital cleanliness, Patient Focus Average, Workforce indicator, Key Targets Average, All Average Table12.20 (B)

Residuals from Table 12.19: Coefficients Unstandardised Coefficients

Model a

d

B (Constant) Staff MFF NLONDON Proportion Town Proportion rural (Constant) Staff MFF NLONDON Proportion Town Proportion rural Workforce indicator Key Targets Average

-1.567 1.595 5.959E-02 -.531 .274 .137 1.366 7.289E-03 -1.013 .754 .375 -9.398

Standardised Coefficients

Std. Error .689 .686 .170 .602 .857 .967 .663 .165 .582 .826 .226 2.295

T

Sig.

-2.273 2.324 .350 -.882 .320 .141 2.062 .044 -1.741 .913 1.656 -4.095

.025 .022 .727 .380 .750 .888 .042 .965 .085 .363 .101 .000

Beta

.291 .044 -.147 .051 .249 .005 -.280 .141 .160 -2.200


Clinical Focus Average -8.084 Patient Focus Average -13.446 Capacity and Capability -6.477 Focus Average All Average 35.729 Hospital cleanliness -.163 Dependent Variable: Unstandardised Residual

1.966 3.347 1.447

-1.361 -2.890 -1.318

-4.111 -4.018 -4.477

.000 .000 .000

8.877 .255

5.069 -.057

4.025 -.639

.000 .524

PRICE VARIANCE Unit Labour Cost per Complexity Adjusted FCE Equivalent Patient The following regression is with unit labour cost per complexity adjusted FCE equivalent patient. The specification enters size, dummies. These variables account for 33% of the variance between trusts. Table 12.21 Accounting for Variation in Unit Labour Cost: Model Summary Model R R Square Adjusted R Square Std. Error of the Estimate a .136 .019 .002 178.35337 b .603 .363 .329 146.19561 a Predictors: (Constant), TAFCE_SQ, Complexity Adj FCE Equivalent Patient b Predictors: (Constant), TAFCE_SQ, Complexity Adj FCE Equivalent Patient, NTECHLON, NSPECLON, NTECHNLD, NSPECNLD Dependent Variable: Total Wage Cost Per 1000 Complexity Adj FCE Equivalent Patients The residuals are then used as the dependent variable in another regression where the staff MFF and the proportions rural and town are entered into the equation, followed by the quality variables. The former account for 11% of the variance in the residual and the latter for a further 14%. Table 12.22 (A). Residuals from Table 12.21: Model Summary Model R R Adjusted R Square Std. Error of the Square Estimate a .378 .143 .113 134.05227659 b .566 .320 .251 123.16041289 a Predictors: (Constant), Proportion town, Proportion Rural, Staff MFF, NLONDON b Predictors: (Constant), Proportion town, Proportion Rural, Staff MFF, NLONDON, Hospital cleanliness, Clinical Focus Average, Workforce indicator, Patient Focus Average, Key Targets Average, Capacity and Capability Focus Average, All Average Table 12.22 (B). Residuals from Table 12.21: Coefficients Unstandardised Coefficients

Model a

B (Constant) -145.888 Staff MFF 158.487 NLONDON 65.365 Proportion Town -131.679 Proportion rural -61.622 b (Constant) 264.658 Staff MFF 129.648 NLONDON 52.435 Proportion Town -238.906 Proportion rural 68.090 Workforce indicator 40.396 Key Targets Average -3002.330 Clinical Focus Average -2609.165 Patient Focus Average -4343.378 Capacity and Capability -1922.325 Focus Average All Average 11474.014 Hospital cleanliness -18.476 Dependent Variable: Unstandardised Residual

Standardised Coefficients

t

Sig.

.442 .402 .164 .427 .794 .314 .471 .243 .132 .761 .511 .000 .000 .000 .000 .000 .789

Std. Error 189.122 188.292 46.662 165.192 235.219 261.905 179.397 44.692 157.600 223.587 61.294 621.290 532.409 906.108 391.689

Beta

.086 .143 -.242 .046 .063 -2.570 -1.607 -3.415 -1.431

-.771 .842 1.401 -.797 -.262 1.011 .723 1.173 -1.516 .305 .659 -4.832 -4.901 -4.793 -4.908

2403.287 68.944

5.954 -.024

4.774 -.268

.106 .178 -.133 -.042


Total Wage Cost per WTE The following regression is with total wage cost per WTE. The specification enters size, dummies. These variables account for 26% of the variance between trusts. Table 12.23: Accounting for Variance in Total Wage Cost per WTE: Model Summary Model R R Square Adjusted R Square Std. Error of the Estimate a .151 .023 .006 2.98867 b .542 .294 .256 2.58528 a Predictors: (Constant), TAFCE_SQ, Complexity Adj FCE Equivalent Patient b Predictors: (Constant), TAFCE_SQ, Complexity Adj FCE Equivalent Patient, NTECHLON, NSPECLON, NTECHNLD, NSPECNLD Dependent Variable: Total Wage Cost Per 1000 Complexity Adj FCE Equivalent Patients The residuals are then used as the dependent variable in another regression where the staff MFF and the proportions rural and town are entered into the equation, followed by the quality variables. The former account for 38% of the variance in the residual and the latter for a further 8%.

Table 12.24 (A). Residuals from Table 12.23: Model Summary Model R R Square Adjusted R Square Std. Error of the Estimate a .631 .398 .377 1.97169004 b .710 .504 .454 1.84653719 a Predictors: (Constant), Proportion town + Rural, Staff MFF, NLONDON b Predictors: (Constant), Proportion town + Rural, Staff MFF, NLONDON, Hospital cleanliness, Clinical Focus Average, Workforce indicator, Patient Focus Average, Key Targets Average, Capacity and Capability Focus Average, All Average Table 12.24 (B). Residuals from Table 12.23: Coefficients Unstandardised Coefficients

Model a

b

B (Constant) Staff MFF NLONDON Proportion Town Proportion rural (Constant) Staff MFF NLONDON Proportion Town Proportion rural Workforce indicator Key Targets Average Clinical Focus Average Patient Focus Average Capacity and Capability Focus Average All Average Hospital cleanliness

Standardised Coefficients

t

Sig.

.003 .007 .000 .454 .694 .932 .034 .000 .794 .777 .231 .041 .003 .010 .006 .014 .268

-8.306 7.560 2.715 1.825 -1.363 -.338 5.762 2.900 .617 -.952 1.108 -19.269 -24.269 -35.727 -16.470

Std. Error 2.782 2.769 .686 2.430 3.460 3.927 2.690 .670 2.363 3.352 .919 9.315 7.982 13.585 5.873

Beta

.219 .450 .036 -.037 .099 -.940 -.852 -1.600 -.698

-2.986 2.730 3.955 .751 -.394 -.086 2.142 4.328 .261 -.284 1.205 -2.069 -3.040 -2.630 -2.805

89.646 -1.150

36.032 1.034

2.651 -.085

2.488 -1.113

.287 .421 .105 -.053


SECTION D. ECONOMETRIC MODELLING Section D includes two chapters that use econometric approaches. Chapter 13 explores theoretical relationships between variables in developing econometric models. It uses a data set that was assembled and provided by the Department of Health. Some of the early findings of this work were used to inform the analysis in preceding chapters, e.g. significance of trust type in analysing staffing and workload. The study was designed to look at the feasibility of applying econometric methods to development of a staff MFF. Chapter 14 also applies econometric techniques to data compiled by the Department of Health with the aim of understanding cost behaviour in hospital trusts. It includes a long list of variables, supported by hypotheses as to their likely impact, and then uses the empirical results to achieve parsimony by eliminating variables that have no significant effect on the cost measures.

13. ECONOMETRIC APPROACHES: THEORY-DRIVEN

INTRODUCTION The original aim of this part of the project was to investigate the feasibility of using an econometric approach to estimation of the market forces factor, by identifying specific items of cost that are unavoidable by trusts and are affected by local market conditions. If this specific cost approach were adopted and proved amenable to estimation, econometric analysis would enable the market forces factors to be not only identified but quantified. However, following an initial report which began to explore this, the Department of Health modified the aim to investigating the impact of the existing MFF on trust costs. Although this is obviously a different aim, the core of the approach remains the same – estimation of a properly specified econometric model using appropriate techniques. In order to understand the methods used in this report, it is important to understand that econometrics is the application of statistical theory and methods to economic models. It is not simply the application of regression analysis to economic data, which can of course be used to explore relationships between empirical variables but with no real means of interpreting these other than rationalisation. In a well-known paper, Breyer (1987) characterised studies of hospital costs as either ad hoc or based on economic theory. Ad hoc studies essentially use statistical techniques, such as regression analysis, to explore factors associated with the level of cost, typically making use of whatever indicators representing these factors are at hand and relying entirely on the relationships revealed by the data. Proper econometric studies are based on economic theory and take full account of the multi-product nature of hospitals. The advantages of the econometric approach have been argued and demonstrated by many papers, which are reviewed by, inter alia, Scuffham et al (1996) and Scott and Parkin (1995): ad hoc studies are prone to severe misspecification bias and produce misleading results that do not reflect the true underlying structure of hospital costs. The approach taken in this chapter is therefore to operationalise relevant economic theory, with existing empirical findings, which are quite clear and consistent on many key issues, taken as a further guide. The aim is to estimate a cost function, which will be defined below according to economic theory.


HOSPITAL COST FUNCTIONS AND THEIR ESTIMATION As stated above, the econometric approach requires an economic model based on economic theory. The relevant economic theory in this case is the theory of costs and production and the aim is therefore to estimate a cost function. This relates the level of costs of production to factors that affect and determine it. These factors reflect choices about and constraints on the levels and mix of inputs and outputs, including the level of output, size of capital stock, level of input prices and case mix factors. A very general formulation is:

C = F + C(Y, P, K)

where C represents cost, F is a fixed element associated with the size of fixed inputs, Y is the level of output or activity, P is input prices and K is capital stock.

The cost function can be employed to explore a number of standard issues, including economies of scale and scope and cost inefficiency. The precise nature of these will depend on the exact form of the function, and the econometric method is used to produce estimates of that. If it is possible to purge costs of the fixed element F, then a short-run variable cost function can be estimated, in which the expectation is that, if there are economies of scale, average cost will be negatively related to output. Average cost will also be negatively related to the amount of capital input. However, in practice average costs include fixed costs, with the result that we expect a positive relationship of costs with the amount of capital 52. It is also expected that cost will be positively related to input prices. This approach has been successfully used in many contexts (see, for example, Butler, 1995; Barnum and Wagstaff, 1992). Nevertheless, there remain some difficult issues in the estimation of cost functions, many of which are discussed below.

Functional Form
One of the key elements of the econometric approach is the specification of the functional form of the cost function. A characteristic of the ad hoc approach is to admit to non-linearity in the relationship and deal with it by some simple procedure such as including squared values for individual variables. However, this is a very specific form which does not in general have desirable properties. Two functional forms are widely used in econometric studies of cost functions. Each has advantages, but data limitations often dictate which is feasible and appropriate.

The first is a very traditional form, the generalised Cobb-Douglas function, which is linear in logarithms. This has well-known mathematical properties in terms of the cost relationships that it assumes, but it is also consistent with an L-shaped cost function, which is one of the most widely observed empirical findings for hospitals in all times and in all countries (Lave and Lave, 1970). An L-shaped curve is characterised by decreasing levels of long run average costs as output increases,

52 This point sometimes causes confusion, because both the amount of capital (e.g. beds) and throughput can be regarded as measures of the size of a hospital. However, they measure different things: a stock and a flow. For a given amount of activity, additional capital stock obviously increases average costs because it is an addition to fixed costs, whereas additional activity reduces average costs, partly because it spreads fixed costs over a larger output.


until a point is reached, sometimes known as the minimum efficient scale, where average cost remains constant with increasing output. There is no evidence that in practice hospitals experience increasing average costs. The estimation method therefore constrains the data to fit into this form by transforming variables into logarithms. However, the advantages that this offers – ensuring that the results have easily interpretable and theoretically desirable properties – is bought at a heavy cost; if the true cost function does not fit this form, the results will be misleading. Attempts have been made to overcome this problem, and amongst them is the second form that must, because of its widespread application, be discussed: the transcendental logarithmic, or translog, function. This is one of a class of functions known as flexible functional forms, because they do not place strong restrictions on the underlying structure of costs. This means that the results can more confidently be interpreted. Unfortunately, all such methods place great demands on data, because they require many terms to be included in the estimating equation – for example, in the case of the translog, products and cross-products of variables – with consequent reduction in the degrees of freedom available for analysis. The translog is in fact one of the least demanding in this respect, but is nevertheless problematic. The problem is obviously compounded when there are very many output variables to be included. Despite the advantages of the translog, explored some years ago in the context of NHS hospitals by Scott and Parkin (1995), the analysis reported here uses the CobbDouglas specification. The reasons for this are, first, that some of the specifications use a large number of output variables and the translog would not leave sufficient degrees of freedom to undertake any plausible hypothesis testing; and, secondly, we wish to use a technique called stochastic frontier analysis and the use of a translog functional form in that context has not, to our knowledge, been investigated. Avoidable and Unavoidable Cost Factors and Their Detection If the cost function is well-specified, all cost factors will be included in it and the only problem that remains is of labelling them as avoidable or unavoidable. The problem arises where there are unobserved, or at least unquantified, factors that cannot be incorporated in the equations. If we are confident that we have identified all of the avoidable cost factors, then we might infer that variations of actual costs from those predicted are due to unobserved unavoidable cost factors. Unfortunately, these cannot be identified from variations between expected and actual costs, known in regression analysis as residuals, because these may also be due to inefficiencies and to stochastic error. It is therefore important to examine these variations, to separate out the inefficiency element and explore the nature of the residuals. A test of whether or not a cost is avoidable is whether there is a clustering of residuals by group, for example, if certain types of trusts have similar variations, suggesting that these are not random. It will be apparent from this that the explanatory power of any cost function and the randomness of residuals form important tests of whether the model could be used. However, as explained earlier this should be achieved using properly specified models rather than via a “fishing expedition” to ensure that the model meets these criteria.
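As an illustration of the functional form discussed above, the following minimal sketch estimates a log-linear (Cobb-Douglas) cost function by ordinary least squares. The data frame `hospitals` and its column names are assumptions, and the single-output specification is a simplification of the multi-product models discussed in this chapter rather than the estimation actually performed.

    # Hedged sketch: Cobb-Douglas (log-linear) cost function estimated by OLS.
    # `hospitals` is an assumed pandas DataFrame; the column names are illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_cobb_douglas(hospitals: pd.DataFrame):
        """ln(cost) regressed on ln(output), ln(input price) and ln(beds)."""
        data = pd.DataFrame({
            "ln_cost":   np.log(hospitals["total_cost"]),
            "ln_output": np.log(hospitals["fces"]),
            "ln_price":  np.log(hospitals["avg_wage_cost_per_wte"]),
            "ln_beds":   np.log(hospitals["beds"]),
        })
        return smf.ols("ln_cost ~ ln_output + ln_price + ln_beds", data=data).fit()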


Stochastic Frontier Analysis The usual way in which cost functions are viewed is that there is an underlying cost relationship which applies to all hospitals. Applying ordinary estimation techniques to this assumes that this function may be represented by the average relationship amongst different hospitals. Any deviation from this, if the equation is well-specified, is due to random fluctuations or measurement errors or both. However, this average relationship will incorporate any inefficiency that exists in hospitals. In considering inefficiency, it is not helpful to think of this as an absolute concept such as the deviation from what is technically possible, since we are unlikely to have any observations of hospitals that employ such a technology. Instead, the most useful concept is of relative efficiency, a comparison with best practice. We are therefore interested in deviations from what is observably most efficient – if a hospital obviously out-performs another in the sense of having lower cost for the same output or higher output for the same cost. However, as explained, the deviations from the average are conventionally viewed as random – in fact this is a requirement for a correct specification - and take both positive and negative values with a mean of 0. In order to interpret the deviations in terms of relative inefficiency, it is necessary to make one of two assumptions. The most extreme, which is now rarely used, is that in practice there is little scope for random variation in costs, and therefore any unexplained residual elements must be due largely to inefficiency. Relative efficiency can therefore be measured by comparison with hospitals that have the greatest negative distance between their observed and expected costs. The second is that the residual variation can be partitioned into two elements One of these has the usual interpretation as due to random fluctuations and measurement errors. The other has a specified distribution that takes only positive values, with hospitals demonstrating best practice being assigned an efficiency score of one. The technique that makes this assumption is Stochastic Frontier Analysis (SFA), which is described in, for example, Wagstaff (1989) and Kumbhakar and Knox Lovell (2003). Although SFA is widely used, the assumption on which it is based – that inefficiency can be detected by residuals that have a particular pattern – is unproven and largely unexplored. There is evidence of convergent validity, in that other inefficiency measures are found to be correlated with SFA-based measures, but the correlations are rarely high enough to be confident that efficiency is being measured with any precision. However, this method does enable us to have a further check on the randomness of residuals and a method of correction if smoothness amongst residuals is detected. Output and Case Mix Hospitals are multi-product firms, with the range of products being very wide. The definition of output is very complex for such firms and requires special techniques for analysis. In the context of hospitals, this problem is known as the issue of case mix. It should be noted that although case mix is often raised in the context of adjustments made to costs, it is really a concept that relates to the measurement of output. But calculations of average cost obviously require total costs to be divided by output, so any case mix adjustment to output will affect estimates of average cost. 
There are essentially four ways in which case mix can be taken into account: adjusting costs so that they take account of the case mix of output; retaining unadjusted costs but adjusting output so that this reflects case mix; retaining unadjusted costs but using an indicator of case-mix complexity as an independent variable along with an unadjusted output measure; and using the activity of different health care products as independent variables, directly taking account of the multiple-output nature of hospitals, which will incorporate case mix directly. In this chapter we have examined all of these. We have also examined a specification which does not adjust for case mix at all, to examine the difference that case mix makes.

Case Mix Adjusted Costs
The basis of the calculation of case mix adjusted average costs (CMAC) is that the output over which cost is averaged is adjusted by weighting the different components of output according to the intensity with which they consume resources. If we attach a baseline value of 1 to the weight of an output which has average resource use, other outputs take weights above 1 if they require more resources and below 1 if they require fewer resources. If we denote each of the i outputs as Oi and their resource weight as Wi, then the case mix adjusted output is the sum over all i of OiWi. Case mix adjusted average cost can therefore be defined as

\[
\mathrm{CMAC} = \frac{C}{\sum_i O_i W_i}
\]

An early example of this procedure using NHS data was carried out by Söderlund, Milne, Gray and Raftery (1995), using DRGs as the unit of output and US-derived DRG cost weights. However, this was not used in estimation of a cost function, but rather in an empirical investigation of cost variations in a sample of Oxfordshire hospitals. In our data, the adjustment to output is undertaken indirectly via the Reference Cost Index (RCI), which is the ratio of the total cost calculated using trust-specific unit costs to the total cost calculated using average national unit costs. Multiplying this by the national average unadjusted cost gives an estimate of CMAC. There are many problems with this calculation. The weights used are in effect determined by nationally average rather than minimum costs and are therefore contaminated by the distribution over trusts of inefficiency in the way that resources are converted into outputs. If a CMAC variable is used as the dependent variable in a cost function, there is then a potential inconsistency between the definition of output used as the denominator of average cost – adjusted for case mix – and that used in calculating output as an independent variable – unadjusted for case mix. We therefore calculated case mix adjusted output (CMAO) in a similar way as for the denominator of the CMAC and used this as an alternative output measure. There are two problems with this approach, one of general application and one related to the data being used. First, a general criticism of this approach is that because the weights represent resource use, what is supposed to be a measure of output is contaminated by a measure of the resource inputs that produce it (Ellis, 1992). Secondly, a specific problem with the data being used is that both the numerator and denominator of the CMAC in effect contain HRG-specific activity, and also combinations of these activities and unit costs, to a degree that it is difficult fully to work out the origins of variations in the CMAC. In other words, the CMAC is a complex indicator and interpretations of any analysis of it must be made with great caution. This is particularly the case when using CMAO, since the estimating equation then contains HRG activity in three places: twice in the dependent variable and once in the independent variables.

Case Mix Adjusted Output
An alternative is to retain the straightforward definition of average cost, but to include CMAO as the independent variable representing output. This has the advantage of reducing the number of times that HRG-specific activity enters the estimating equation, but has the disadvantage of being inconsistent with the measure of average cost.

Case Mix Complexity Index
Another alternative is to retain the straightforward definitions of average cost and output, but to include an independent variable representing case mix. The complexity index (CI) 53 is suitable for this. The main problem with this is in determining the precise way in which the complexity index is hypothesised to affect the cost function, which will affect the specification of the functional forms appropriate to its relationship with cost and its interactions with other variables. As an indicator, rather than a variable representing an economic entity, it is not easy to characterise in this way.

53 The complexity index is essentially output (the total number of FCEs) divided by case mix adjusted output (CMAO).

Multiple Output Models
A final alternative is to take account of case mix by segmenting overall activity into different health care “products”. This deals directly with the problem that case mix refers to and is therefore the most valid approach from a theoretical point of view. Unfortunately, as there is such a huge range of health care products, it is necessary in practice to aggregate, and the main problem with this approach is that of choosing the level of aggregation. Highly aggregated data might obscure true variations in cost, but low levels of aggregation will produce too many output variables to be analysed 54. Therefore, although this is an obviously appealing and logical approach, it does have a cost in terms of statistical modelling if the aggregation level is to be meaningful. First, it reduces the number of degrees of freedom available for analysis. Secondly, there is the possibility of introducing multicollinearity into the model if many trusts operate production technologies in which case mix is similar for different overall activity levels.

54 It would be possible to estimate separate cost functions for each type of output. However, this is not a valid approach because hospitals are multiple-output production units, not a collection of separate production units. Such an approach would not be able to deal with shared inputs and the possibility of economies of scope.

Trust Types
An important question in estimating a cost function is whether all of the hospitals use the same general production technology. This is a reasonable assumption in most cases, but there are two types of hospital that may differ from others. First, specialist hospitals may differ from general hospitals in the types of case that they see, their size and the fact that they are not able to share inputs over different outputs. A model with an appropriate specification that includes size and product mix should take care of most of this. However, it is possible that there is an additional element of case mix which will not be dealt with by use of HRGs, namely the severity, or differential complexity, of cases within HRGs. In the absence of a severity index, it will be useful to examine the impact that status as a specialist trust has on costs. Secondly, teaching trusts may also be affected by the same factor, and will also be affected by the fact that they produce two other types of output, education and research, that are not reflected in output measures such as FCEs. The ideal approach for the second factor would be explicitly to include variables representing education and research and ensure that total costs also covered these activities, but that was not possible with the data available. As a result, we will simply examine the impact that status as a teaching trust has on costs.
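For reference, the case mix quantities used in this chapter can be collected in one place. Writing O_i for a trust's activity (FCEs) in HRG i, c_i for its HRG-specific unit cost and \bar{c}_i for the national average unit cost, the verbal definitions above and in footnote 53 amount to the following (the notation is ours, introduced only as a summary):

\[
\mathrm{RCI} = \frac{\sum_i O_i c_i}{\sum_i O_i \bar{c}_i}, \qquad
\mathrm{CMAO} = k \sum_i O_i \bar{c}_i, \qquad
\mathrm{CMAC} = \frac{\sum_i O_i c_i}{\mathrm{CMAO}}, \qquad
\mathrm{CI} = \frac{\sum_i O_i}{\mathrm{CMAO}},
\]

where k is a single national scaling constant chosen so that the national mean of CMAO equals the national mean total number of FCEs.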

DATA AND METHODS
Data on 173 trusts in England for 2004/5, compiled from a number of official sources, were provided by the Department of Health. Although a number of other variables were examined in the exploratory stage of this project, we will only describe the variables used in the final econometric analyses. Table 13.1 describes the variables used.

Table 13.1 Variables Used

VARIABLE         MEANING AND DEFINITION
AC               Average cost. The sum of activity in each HRG, measured in FCEs, multiplied by the trust's HRG-specific unit costs, divided by total activity measured in FCEs.
CMAO             Case mix adjusted output. The sum of activity in each HRG, measured in FCEs, multiplied by national average HRG-specific unit costs, adjusted so that its national mean is equal to the national mean total number of FCEs.
CMAC             Case mix adjusted average cost. The sum of activity in each HRG, measured in FCEs, multiplied by the trust's HRG-specific unit costs, divided by total activity measured as CMAO.
Beds             The total number of beds.
ACLE             Average cost of labour employed. “Calculated by dividing the paybill figure (net of agency spend) by the total number of staff FTEs.”
Acute Teaching   Acute teaching trust. Equal to 1 if an ATT, 0 otherwise.
All specialist   Specialist trust. Equal to 1 if an ST, 0 otherwise.
MFF              The market forces factor.
London           Equal to 1 if the trust is based in London, 0 otherwise.
CI               Complexity index.
HRG A – HRG T    Number of FCEs in each HRG chapter.

Average cost was measured in two ways: as total cost divided by the number of FCEs (AC) and as total cost divided by weighted number of FCEs (CMAC), as described earlier. The second of these was adjusted so that the mean of the two variables was the same. Output was measured in three ways: a single variable representing the total number of FCEs, a single variable representing the weighted number of FCEs (CMAO) and 18 variables representing the number of FCEs in each HRG chapter. It is recognised that cases are not homogeneous within HRG chapters, but using a lower level of aggregation would increase the number of independent variables, thereby reducing the degrees of freedom available and as a result adversely affecting precision and identification. This is not an entirely eradicable problem - even at the level of individual HRGs there is some variability, so even if we had a data set large enough to incorporate them all, there would remain some doubt about this. It should also be noted that the problem of non-homogeneity affects all of the means of taking account of case mix used here, not simply the multiple-output model.
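As an illustration of how the cost and output variables described above could be constructed from HRG-level data, the following minimal Python sketch builds AC, CMAO, CMAC and the RCI for a toy data set. The column names (trust, hrg, fces, unit_cost, nat_unit_cost) are hypothetical stand-ins, and the rescaling of CMAO follows the verbal definition in Table 13.1:

import pandas as pd

# Toy HRG-level data: one row per trust x HRG chapter (hypothetical column names).
df = pd.DataFrame({
    "trust":         ["A", "A", "B", "B"],
    "hrg":           ["C", "D", "C", "D"],
    "fces":          [1200, 800, 500, 1500],
    "unit_cost":     [950.0, 1400.0, 1100.0, 1250.0],   # trust-specific unit costs
    "nat_unit_cost": [1000.0, 1300.0, 1000.0, 1300.0],  # national average unit costs
})

df["own_cost"] = df["fces"] * df["unit_cost"]      # total cost at trust-specific unit costs
df["nat_cost"] = df["fces"] * df["nat_unit_cost"]  # total cost at national average unit costs

trust = df.groupby("trust").agg(
    fces=("fces", "sum"), own_cost=("own_cost", "sum"), nat_cost=("nat_cost", "sum")
)

# Rescale case mix adjusted output so that its mean equals the mean number of FCEs.
trust["cmao"] = trust["nat_cost"] * trust["fces"].mean() / trust["nat_cost"].mean()

trust["ac"]   = trust["own_cost"] / trust["fces"]      # average cost (AC)
trust["cmac"] = trust["own_cost"] / trust["cmao"]      # case mix adjusted average cost (CMAC)
trust["rci"]  = trust["own_cost"] / trust["nat_cost"]  # reference cost index (RCI)

print(trust[["ac", "cmao", "cmac", "rci"]])

With the report's 173-trust data set, the same steps at HRG level would yield the variables used in the regressions below.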


Capital was represented by the number of beds, and the input price for labour was represented by the Average Cost of Labour Employed (ACLE), calculated by dividing the paybill figure (net of agency spend) by the total number of staff FTEs. These data were not ideal, in particular on input prices. We did not have variables representing input prices other than labour. Moreover, the labour input variable was a single indicator rather than being disaggregated into labour types.

The general modelling approach used was to explore the data using simple graphical and other methods to determine an appropriate specification before estimation of the econometric models. Most of this is not reported here, but some key elements are included. Following this, a model was specified which contained standard elements such as the functional form and the capital and input price variables, and also elements whose precise form was explored, namely the cost and output variables, as modified by case mix, and a locality variable.

The diagnostic tests used to judge the regression models were as follows. R2 is a measure of how well the model fits the data; it is directly interpretable as the proportion of the variation between trusts in the dependent variable that is explained by the variation between trusts in the independent variables. RESET is a test for misspecification of the model, but does not tell us exactly how the equation is misspecified. A model that is misspecified should never be used, as the results may be misleading, so we will regard failing the RESET test as fatal for any model. The VIF (Variance Inflation Factor) is a test of multicollinearity, which means that the independent variables are so correlated that it is not possible to estimate the coefficients of the model with an acceptable degree of precision. There is no “significance test” applicable to the VIF, but a rule of thumb is that there may be a problem if any single variable has a VIF over 10, or if the mean VIF is very high. Multicollinearity in any case has other obvious symptoms, such as a high R2 along with insignificant coefficients. For the stochastic frontier models, the most important test statistic is the Likelihood Ratio test, which tests the significance of the efficiency element of the residuals.
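To illustrate how these diagnostics can be computed, the sketch below fits a log-log (Cobb-Douglas style) cost equation with the statsmodels library and reports R2, the RESET test, VIFs and a Breusch-Pagan test, with robust standard errors. The data frame and its column names (log_cmac, log_cmao, log_beds, log_acle, teaching, specialist, log_mff) are hypothetical stand-ins for the variables in Table 13.1, not the report's actual data set:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import linear_reset, het_breuschpagan
from statsmodels.stats.outliers_influence import variance_inflation_factor

def diagnose(df: pd.DataFrame) -> None:
    """Fit a Cobb-Douglas style cost equation and print the diagnostics discussed in the text."""
    formula = "log_cmac ~ log_cmao + log_beds + log_acle + teaching + specialist + log_mff"
    ols = smf.ols(formula, data=df).fit()                 # plain OLS, used for the specification tests
    robust = ols.get_robustcov_results(cov_type="HC1")    # heteroscedasticity-robust standard errors

    print("R-squared:", round(ols.rsquared, 4))
    print("RESET p-value:", linear_reset(ols, power=3, use_f=True).pvalue)

    lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, ols.model.exog)
    print("Breusch-Pagan p-value:", lm_pvalue)

    exog = ols.model.exog  # design matrix, constant in column 0
    for i, name in enumerate(ols.model.exog_names):
        if name != "Intercept":
            print("VIF", name, round(variance_inflation_factor(exog, i), 2))

    print(robust.summary())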

RESULTS

Exploratory Analyses
Simple exploratory data analyses are valuable in illustrating some of the issues, and are anyway an essential pre-requisite to modelling. The top half of Figure 13.1 is a scatter diagram showing average costs compared to activity for all trusts in the data set, and the bottom half shows the same using case mix adjusted costs and output.


Figure 13.1 Average Cost by Activity, by Hospital Type & Effect of Case Mix Adjustment
[Two scatter plots. Top panel: average cost against activity (total FCEs). Bottom panel: case mix adjusted average cost against case mix adjusted output. In both panels, points are identified by trust type: acute specialist, acute teaching, children's, orthopaedic, multiservice, small acute, medium acute and large acute.]

Taking all of the data points together, Figure 13.1 seems to demonstrate a typical and theoretically justified pattern: average costs decrease with increased activity over a relatively short range of lower activity levels, followed by a relatively large range of higher activity levels at which minimum efficient levels of cost are achieved, with no evidence of rising average cost at very high activity levels. This is much less obvious in the case mix adjusted figure than in the unadjusted one, but the pattern does still exist.
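A plot of this kind is straightforward to reproduce. The sketch below, assuming a trust-level data frame with hypothetical columns fces, avg_cost and trust_type, draws a panel in the style of the top half of Figure 13.1:

import pandas as pd
import matplotlib.pyplot as plt

def plot_cost_by_activity(trusts: pd.DataFrame) -> None:
    """Scatter of average cost against activity, one colour per trust type."""
    fig, ax = plt.subplots()
    for trust_type, group in trusts.groupby("trust_type"):
        ax.scatter(group["fces"], group["avg_cost"], label=trust_type, s=20)
    ax.set_xlabel("Activity (total FCEs)")
    ax.set_ylabel("Average cost")
    ax.legend(fontsize="small")
    fig.tight_layout()
    plt.show()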


Closer examination shows that the classification of trust type is clearly important in understanding the cost/activity relationship. The sub-groups of small, medium and large acute trusts, presumably based on number of beds rather than activity, form homogeneous groups that are in a reasonable relation to each other and could be argued to be simply one group, all with a similar average cost whatever the level of output. Multi-service trusts are similar to medium acute trusts. Teaching trusts lie in the activity range of large acute trusts, but have higher average costs and may have decreasing average costs over the range. Specialist trusts – children’s services, orthopaedic and acute – are all within the range that would be identified as decreasing average cost. Acute specialist trusts need further unpicking, which is as might be expected for a group that is unlikely to be homogeneous. There are three outlier low cost trusts, which are outliers with respect both to this group and to trusts as a whole, and three outlier high cost trusts with respect to the group but not to all trusts. The three high cost trusts form a well-defined group, as they are all of the cardiothoracic centres (Liverpool Cardiothoracic Centre, Papworth and Royal Brompton and Harefield). The low cost trusts are less homogeneous, since they comprise two of the oncology centres (Clatterbridge Centre for Oncology and Christie Hospital) plus the Royal National Hospital for Rheumatic Diseases; the other oncology centre (The Royal Marsden) is not an outlier in this sense. The remaining trusts seem to form a reasonable group, at least with respect to the cost-activity relationship, despite their dissimilarities in terms of specialty – two women’s hospitals (Liverpool and Birmingham), Neurology and Neurosurgery (Walton Centre), Plastics (Queen Victoria Hospital) and Eyes (Moorfields).

Exploratory analyses also examined the issue of location, dividing the trusts between London and elsewhere 55. London hospitals do in general have higher costs, but at least some of this is mediated by the type of hospitals and their size. For small, medium and large acute and multi-service trusts, London costs appear very similar to elsewhere, and there is a (very) weak suggestion that relatively small activity levels may also contribute to higher average cost (e.g. in orthopaedics). The most obvious higher costs are in acute teaching trusts.

55 This division was suggested by simply looking at the mean values of cost and activity variables, which were different for London. However, this later proved to be a justifiable division.

Functional Form and Standard Errors
The Cobb-Douglas form was used throughout, with numeric variables transformed to their natural logarithm. Although this was used for reasons of theory, estimates using untransformed data were also made. These consistently failed a specification test, RESET, which in this case almost certainly implies non-linearity. However, the results in terms of direction, significance and importance of variables are very similar to those for the transformed data. All specifications – transformed and untransformed – produced models which failed a heteroscedasticity test (Breusch-Pagan), so for this reason and others robust standard errors were used throughout.

Location and the MFF
In order to explore the role of local market factors, we estimated a model without any variables representing these, examined the results and compared them with a model which does include such variables. The example that is used is a model that relates case-mix adjusted costs to case-mix adjusted outputs (the CMAC/CMAO model – see section below); however, the analysis was repeated for models which used other ways of adjusting for case mix and similar results were found. Table 13.2 summarises the models that were tested. Model 1 is without any locality variables; Model 2 includes the MFF; and Model 3 includes a dummy variable representing London trusts. The rationale for this choice of models will be explained as the equations are discussed.

In Model 1, all of the variables have significant coefficients of the expected sign and there is no evidence of serious multicollinearity. However, it has a small R2 and fails the RESET test. The residuals were examined for evidence of misspecification due to locality factors; Table 13.3 shows the mean values of residuals by region, ordered by size. An analysis of variance showed that the residuals are not independent of region (F=7.66, p=0.0000), and multiple comparisons, using the Bonferroni procedure, found that in every comparison London was significantly different from every other region, and no other regions were significantly different from one another. We therefore conclude that a cost equation that does not take account of locality is misspecified, and it is possible that this is adequately accounted for by London alone. We therefore test both a London dummy variable and the MFF itself, since the latter is a more sensitive indicator of locality.

Table 13.2 Comparison of Models: MFF and Locality

                    MODEL 1                  MODEL 2                  MODEL 3
                    Coef.       Std. Err.    Coef.       Std. Err.    Coef.       Std. Err.
Log CMAO            -.1150952   .0431518     -.1072181   .0348466     -.0893737   .0344087
Log beds             .1265931   .0418644      .1406608   .0344519      .112387    .0337735
Log ACLE             .9458754   .2110521      .451768    .1857608      .4224008   .1810328
Acute Teaching       .079374    .0215742      .0362736   .0177888      .0611085   .0166101
All specialist       .1288375   .048239       .158451    .0466346      .1626754   .0446397
Log MFF              –          –             .7644362   .1031156      –          –
London               –          –             –          –             .1423852   .0194669
Constant            -1.921171   2.096644      2.865654   1.849724      3.122573   1.806275
R2                   0.3652                   0.5315                   0.5324
F                   12.05                    34.78                    33.16
RESET F              4.19                     1.71                     3.67
Highest VIF          9.13                     9.17                     9.17
Mean VIF             4.35                     3.91                     3.19
Coefficients/test statistics marked in grey are not significant at the 5% level.

Table 13.3 Residuals by Location of Trust

GOVERNMENT OFFICE LOCATION       RESIDUAL
London                            .0944611
South East                        .01105019
Eastern                          -.00162644
Yorkshire and The Humber         -.01052291
West Midlands                    -.01601709
North West                       -.02406351
South West                       -.04949088
North East                       -.05638014
East Midlands                    -.08230883
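The regional check reported above (a one-way ANOVA on the residuals followed by Bonferroni-adjusted pairwise comparisons) can be reproduced along the following lines; resid and region are hypothetical column names for the model residuals and the Government Office region of each trust:

from itertools import combinations
import pandas as pd
from scipy import stats

def residuals_by_region(df: pd.DataFrame, alpha: float = 0.05) -> None:
    """One-way ANOVA of residuals across regions, then Bonferroni-corrected pairwise t-tests."""
    groups = {region: g["resid"].to_numpy() for region, g in df.groupby("region")}
    f_stat, p_value = stats.f_oneway(*groups.values())
    print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

    pairs = list(combinations(groups, 2))
    threshold = alpha / len(pairs)  # Bonferroni correction for the number of comparisons
    for a, b in pairs:
        t_stat, p = stats.ttest_ind(groups[a], groups[b])
        if p < threshold:
            print(f"{a} vs {b}: significantly different (p = {p:.4g})")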


Model 2 includes the MFF 56. Again, all of the variables have significant coefficients of the expected sign and there is no evidence of serious multicollinearity. R2 is improved, although it is still not very high, and the equation passes the RESET test. Inclusion of the MFF has little impact on the output, beds and specialist variable coefficients. However, it has a large impact on the labour cost and teaching variable coefficients, in each case reducing them by half, presumably because these variables have some locality element which is attributed to them in an equation without a locality variable. As might be expected, an ANOVA on the residuals by locality showed no relationship (F=1.09, p=0.3734).

56 In this specification we used the overall MFF. However, the staff MFF performed in general almost as well in most specifications and even slightly better in some. We concluded that as locality variables it does not matter which is used in these aggregate analyses.

Model 3 includes the London dummy. Again, all of the variables have significant coefficients of the expected sign and there is no evidence of serious multicollinearity. R2 is very slightly, though not significantly, higher than that for the MFF model. However, it fails the RESET test. Again, inclusion of the London dummy has little impact on most variable coefficients, but, as with the MFF, it has a large impact on the labour cost variable coefficient. Again, as expected, an ANOVA on the residuals by locality showed no relationship (F=1.52, p=0.1540).

The conclusion is that locality is indeed an important factor independently of others, and that, although this factor is mainly associated with London, it is best modelled using the MFF.

Alternative Case Mix Models
Table 13.4 shows the results of five model specifications:

COLUMN   MODEL        DEFINITION
1        AC/FCE       Average cost with FCE output
2        AC/CMAO      Average cost with case mix adjusted output
3        AC/CI        Average cost with FCE output and the complexity index
4        CMAC/FCE     Case mix adjusted average cost with FCE output
5        CMAC/CMAO    Case mix adjusted average cost with case mix adjusted output


Table 13.4 Comparison of Models: Case Mix Adjustments
Dependent variable: average cost (AC/FCE, AC/CMAO, AC/CI); case mix adjusted average cost (CMAC/FCE, CMAC/CMAO).

              AC/FCE                AC/CMAO               AC/CI                 CMAC/FCE              CMAC/CMAO
              Coef.      Std. Err.  Coef.      Std. Err.  Coef.      Std. Err.  Coef.      Std. Err.  Coef.      Std. Err.
Log FCE      -.3579167   .0932868   –          –         -.1340971   .038277   -.1303758   .0280579   –          –
Log CMAO      –          –          .053688    .1184661   –          –          –          –         -.1072181   .0348466
Log beds      .4252819   .0693386   .0647644   .1300577   .1557058   .0368476   .152608    .0272836   .1406608   .0344519
Log ACLE     1.019823    .386154   1.019179    .5145002   .3245556   .208945    .3790456   .1725385   .451768    .1857608
Teaching      .0860224   .032531    .0814889   .0389505   .0394652   .0211458   .0305062   .0175603   .0362736   .0177888
Specialist    .3316106   .1038937   .4241802   .1219469   .0059288   .0850311   .1238219   .0437988   .158451    .0466346
Log MFF       .6687251   .2278594   .6984798   .2652039   .7120382   .1392624   .7727067   .1028163   .7644362   .1031156
Log CI        –          –          –          –          .6336524   .0571468   –          –          –          –
Constant    -1.996558   3.755253  -4.171262   5.146454   3.522588   2.040498   3.776347   1.721952   2.865654   1.849724
R2            0.5148                0.3716                0.8252                0.5605                0.5315
F            35.84                 15.90                103.50                 36.20                 34.78
RESET F       0.62                  5.10                  1.68                  0.91                  1.71
Highest VIF   7.76                  9.17                  9.53                  7.76                  9.17
Mean VIF      3.67                  3.91                  3.94                  3.67                  3.91
Coefficients/test statistics marked in grey are not significant at the 5% level.


For the AC/FCE model, all of the variables have significant coefficients of the expected sign, there is no evidence of serious multicollinearity, R2 is reasonable although not very high, and the equation passes the RESET test. The AC/CMAO model is not good: neither output nor beds is significant, R2 is low and it fails the RESET test. The values of the other variables’ coefficients are, however, very similar to those of the unadjusted model. The AC/CI model has a much higher R2 than all of the other models; however, the labour cost, teaching and specialist variables are not significant and it has very different coefficients for the output and beds variables. The CMAC/FCE model has very similar coefficients and diagnostic test characteristics to the AC/CI model, although the labour cost and specialist variables are now significant and the R2 is much lower. Finally, the CMAC/CMAO model has very similar coefficients and diagnostic test characteristics to the CMAC/FCE model, although the teaching variable is now significant and the R2 is slightly lower.

Models 3, 4 and 5 are clearly superior to models 1 and 2. It is difficult to judge which is the best amongst models 3, 4 and 5, though the AC/CI model does have a much better R2. Of interest is the fact that the MFF variable is very stable over all specifications.

Table 13.5 HRG Model

                 COEFFICIENT    STANDARD ERROR
LOG HRG A         -.0772203      .0334785
LOG HRG B         -.0286903      .0081345
LOG HRG C          .0304006      .0137737
LOG HRG D         -.0245819      .0301633
LOG HRG E          .1289251      .0209727
LOG HRG F         -.1187834      .0227806
LOG HRG G          .0492154      .0247693
LOG HRG H          .1162219      .0263667
LOG HRG J         -.0853013      .0240902
LOG HRG K         -.1267313      .0305854
LOG HRG L          .0532271      .0208218
LOG HRG M         -.0063734      .0157945
LOG HRG N         -.0134486      .0099146
LOG HRG P         -.0170612      .0125821
LOG HRG Q          .0548759      .02104
LOG HRG R          .0112436      .0226986
LOG HRG S         -.0492033      .0323005
LOG HRG T          .0052503      .0174017
Log beds           .146021       .0331002
Log ACLE           .6104632      .2627662
Teaching           .1025451      .0296067
Specialist         .2824676      .1247137
Log MFF            .8846816      .1443518
Constant           .7124924     2.665216
R2                 0.8485
F                 41.25
RESET F            1.32
Highest VIF       43.26
Mean VIF          14.42
Coefficients/test statistics marked in grey are not significant at the 5% level.


Table 13.5 shows the model with separate HRG outputs as the independent variables. All of the non-output variables are significant and of the expected sign, the R2 is high and the equation passes the RESET test. However, the output variables are not all significant and some have an unexpected sign. Moreover, there is clear evidence of multicollinearity amongst the output variables. There is some similarity with the single output models in the values of the coefficients of the beds and MFF variables. However, this specification is clearly not usable, and more work would be needed to find a reasonable specification using this approach.

Stochastic Cost Frontier Analysis
Two stochastic cost frontiers were estimated, the CMAC/CMAO model and the HRG-specific model. The results are in Tables 13.6 and 13.7. These are estimated using Maximum Likelihood rather than Ordinary Least Squares, so the values of the coefficients are very slightly different – though only at decimal places that are not meaningful – and the standard errors are also slightly different, sometimes having an impact on the significance of variables. The important test statistic is the Likelihood Ratio test of the value marked sigma_u. In each case it is insignificant, which means that there is no evidence of relative inefficiency between trusts.

Table 13.6 CMAC/CMAO Stochastic Frontier Model

                 COEFFICIENT    STANDARD ERROR
Log CMAO          -.1072181      .0255112
Log beds           .1406607      .0288601
Log ACLE           .4517631      .1937424
Teaching           .0362738      .0219086
Specialist         .1584511      .0310364
Log MFF            .764437       .0975717
Constant          2.865187      1.97165
/lnsig2v          -4.965883      .1087269
/lnsig2u         -14.68507     741.4645
Wald chi2        196.24
sigma_v            .0834973      .0045392
sigma_u            .0006474      .2400146
sigma2             .0069722      .0007754
lambda             .0077536      .2407301
Coefficients/test statistics marked in grey are not significant at the 5% level.

Table 13.7 HRG Stochastic Frontier Model

                 COEFFICIENT    STANDARD ERROR
LOG HRG A         -.0772204      .0284542
LOG HRG B         -.0286903      .0060677
LOG HRG C          .0304006      .0126374
LOG HRG D         -.0245819      .0247619
LOG HRG E          .1289252      .0198511
LOG HRG F         -.1187834      .0187296
LOG HRG G          .0492154      .0254202
LOG HRG H          .1162219      .0229058
LOG HRG J         -.0853013      .0209651
LOG HRG K         -.1267313      .0230684
LOG HRG L          .0532271      .0177139
LOG HRG M         -.0063734      .0162876
LOG HRG N         -.0134486      .0102072
LOG HRG P         -.0170612      .0119406
LOG HRG Q          .0548759      .0176214
LOG HRG R          .0112436      .0199699
LOG HRG S         -.0492033      .0260104
LOG HRG T          .0052502      .0167524
Log beds           .1460208      .0333007
Log ACLE           .6104601      .2377725
Teaching           .1025452      .0275496
Specialist         .2824677      .1096459
Log MFF            .884682       .1299135
Constant           .7112425     2.462936
/lnsig2v          -4.71852       .1303026
/lnsig2u         -12.86858     702.0761
Wald chi2        968.92
sigma_v            .0944901      .0061562
sigma_u            .0016055      .5636074
sigma2             .008931       .0015002
lambda             .0169917      .5671069
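For reference, the sigma and lambda terms reported in Tables 13.6 and 13.7 come from a frontier specification of the following general form. The half-normal distribution for the inefficiency term is the standard default in this literature and is assumed here rather than stated explicitly in the text:

\[
\ln C_i = \beta_0 + \sum_k \beta_k x_{ik} + v_i + u_i, \qquad
v_i \sim N(0, \sigma_v^2), \quad u_i \sim N^{+}(0, \sigma_u^2),
\]
\[
\sigma^2 = \sigma_v^2 + \sigma_u^2, \qquad \lambda = \frac{\sigma_u}{\sigma_v}.
\]

On this reading, the insignificant sigma_u and near-zero lambda in both tables are what underlie the statement that there is no evidence of relative inefficiency between trusts.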


DISCUSSION AND CONCLUSIONS OF ECONOMETRIC APPROACH
As an exploratory study of feasibility, the analyses reported here successfully demonstrate the problems with conducting proper econometric studies in this area and what would be required for a definitive study. Unfortunately, some of the data used were not quite appropriate, particularly the input price and capital measures, so great care should be exercised in interpreting the results. The conclusions from this study are therefore mainly about how feasible it would be to conduct a definitive econometric study in this area.

• Better data should be used to resolve the data deficiencies.

• Taking account of case mix is very important, but it is unclear how this should be done. From an economic and econometric point of view, the appropriate method is to estimate a multiple output model, but the analyses reported here demonstrate that this approach is totally compromised by the reduction in degrees of freedom and the loss of precision caused by multicollinearity. Amongst the single output methods, the CMAC/CMAO model is theoretically the best, but it was, arguably, out-performed by a more ad hoc model that took account of case mix by inclusion of a case mix index that has no real economic meaning in the context of cost functions; moreover, other models with less justifiable combinations of unadjusted and adjusted variables performed equally well.

• The Cobb-Douglas specification appears to work well, but should really be replaced by a flexible functional form such as the translog. An earlier analysis of NHS data using the translog (Scott and Parkin, 1995) found this to be promising but compromised by problems with NHS data; it is likely that NHS data will have improved sufficiently to make this feasible. However, this was not tested in the MFF study because of the potential overload of variables compared with sample size in the multiple output models. This would remain a problem for the future, as the sample size of NHS trusts is unlikely to increase significantly; panel data methods might be used, but these do require additional assumptions about behaviour over time.

• The analyses show that locality is clearly a factor that influences costs, though the way in which it does so could be explained in two very different ways. It could be that the MFF is acting as a locality indicator, suggesting that it is to a great extent performing its function in the resource allocation formula. Alternatively, it could be that the greater income available via the MFF is simply spent, thereby increasing costs. However, there is no evidence from the analyses reported here that this leads to differential cost inefficiency, although the next point will discuss the reliability of that finding. If that is the case, then the test would be whether higher levels of the MFF simply allow trusts to provide more services to their local populations, which is outside the scope of this study. In the analyses reported here, the MFF, the Staff MFF and a dummy for London performed equally well as locality indicators. More detailed exploration of this issue would be justified.

• The finding of no relative inefficiency may be surprising to those who believe that there is considerable variation across trusts in efficiency, but it should be borne in mind that this analysis is at a high level of aggregation, which may dilute variations of efficiency in specific areas. The stochastic frontier method is not completely accepted by many, particularly in the context of cross-sectional data, so the finding of no inefficiency is not demonstrably reliable. For a proper test of this we would require panel data, although again we would then require some assumptions about the behaviour of relative efficiency over time. The method has, however, succeeded in that it did not find evidence of certain types of smoothness in the residuals that are related to trust-specific factors.

• We did not identify a model that we would recommend for development of a Specific Cost Approach to predicting spatial cost differentials.


14. ECONOMETRICS: HYPOTHESIS & EMPIRICALLY DRIVEN

This chapter is the final stage in our enquiry into hospital costs and their relationship with the staff MFF. It uses different data sets and lines of enquiry from those employed in the previous chapter, but the aim here, similarly, is to look for patterns in hospital costs (through multivariate regression models). Once we have isolated these factors, we go on to look at whether the staff MFF itself makes any difference to hospital costs. If the answer is ‘yes’, we then need to consider whether this is because the staff MFF over-compensates for spatial differences.

The regression models are specified on the basis of a range of hypotheses about how individual variables will contribute to costs. The more we are able to explain differences in costs (measured through the R squared statistic, which shows the percentage of variation explained), the more robust is the model. A bottom-line test of whether the model has any general validity is provided by the Reset Test, which gives a marker of how well the model is specified. The Reset Test gives higher approval ratings to models that are parsimonious, i.e. models that use as few variables as possible to provide any given level of explanation.

This chapter is therefore aiming to do two things: to give insight into cost behaviour, and to build a statistical model that has a semblance of rigour while drawing on a wide range of variables. On the whole, we found that the exercise did not produce the desired result. We did not emerge with models that contained high levels of explanatory power. Our best result (R squared = 61%) came from applying empirical parsimony rather than any particular hypotheses. We were nevertheless able to observe something about cost behaviour in relation to the staff MFF that we could connect to findings in earlier chapters.

What Have We Learned? We have found that: (a) the staff MFF is associated with higher hospital costs; (b) most of these higher costs can be linked to specific cost drivers, e.g. teaching status, size, the amount of specialist work (all of which are correlated with the MFF) and bed occupancy rates, although these cost drivers have not been separated into avoidable and unavoidable; and (c) a slight variation in costs can be attributed to the staff MFF alone.

This ties in with earlier work (Chapters 9 and 12), where we found a pattern of higher costs associated with the staff MFF and identified a small component that could be attributed to over-compensation of resources through the staff MFF (in the scale of redistribution from low to high MFF areas). Through benchmarking we inferred that this relatively small component could be regarded as ‘avoidable’.

It should be noted that questions of avoidable and unavoidable costs are being addressed at a geographical or spatial level, where we are seeking patterns. The conclusions do not imply that the financial position in each individual trust results from unavoidable cost pressures. In practice (Chapter 11), we have found that trusts’ individual performances tend to offset each other, so that the net spatial effect is smoothed into line with the MFF.


SUMMARY

Approach
Unit Labour Costs (ULC) and the Reference Cost Index (RCI) are two measures of relative cost behaviour. A single ULC and a single RCI are available for each trust (n=173). In the basic models estimated here, Unit Labour Cost and Reference Cost Index are the dependent variables. In both cases, there are two models:

• Model 1. A simple model not including efficiency measures but including the Staff MFF, cluster dummies, activity (and/or activity squared), age/need variables, local competitors, bed numbers, complexity, rurality, concentration of activity and the percentage of activity that is elective.

• Model 2. A second model in which the staff MFF is excluded initially. It includes all other variables in Model 1 but also brings in efficiency measures such as quality variables, case mix adjusted average length of stay, case mix adjusted day case rates, follow-up outpatient appointments, agency spend as a percentage of the pay bill, bank spend, vacancy rates and the average cost of labour at local wages.

These basic models were developed by further routines, so that in the end we produced 16 models (2 x 8), a summary of which is presented in Table 14.1:

• The starting models included a full specification, as described in Models 1 & 2 above.

• The end Models 1 & 2 included a smaller number of variables, omitting those that performed badly (were insignificant) in the starting model.

• Model 1 was extended by introducing inner and outer London dummy variables and the average cost of labour.

• Model 2 was extended by introducing dummies for inner and outer London and the staff MFF.

• A final Model 1 and Model 2 were adopted by reducing the variables in empirical fashion to achieve a parsimonious specification.

How the Models Performed
No model was fully specified throughout. None of the ULC models passed the Reset Test (the Ramsey test for general specification) and so none was well specified. The impact of the staff MFF was slight once known cost drivers had been taken into account. With the model as defined, the Staff MFF enters both the Unit Labour Cost and the Reference Cost Index Model 1s, showing that trusts with a higher MFF have higher unit costs.
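The exercise summarised in Table 14.1 below amounts to fitting a family of specifications and recording, for each, the R squared and the Reset test result. A minimal sketch of such a loop is given here, with hypothetical column names (ulc, staff_mff, teaching, beds, activity, complexity, inner_london, outer_london, alos, daycase_index, agency_pct) standing in for the report's variables:

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.diagnostic import linear_reset

# Illustrative specifications only; each entry is (label, formula).
SPECS = [
    ("Model 1 start", "ulc ~ staff_mff + teaching + beds + activity + complexity"),
    ("Model 1 + London", "ulc ~ staff_mff + teaching + beds + activity + complexity + inner_london + outer_london"),
    ("Model 2 start", "ulc ~ teaching + beds + activity + complexity + alos + daycase_index + agency_pct"),
]

def summarise(df: pd.DataFrame) -> pd.DataFrame:
    """Fit each specification and collect R-squared and the RESET result, Table 14.1 style."""
    rows = []
    for label, formula in SPECS:
        res = smf.ols(formula, data=df).fit()
        reset_p = linear_reset(res, power=3, use_f=True).pvalue
        rows.append({"model": label,
                     "r_squared": round(res.rsquared, 3),
                     "specified (RESET p > 0.05)": reset_p > 0.05})
    return pd.DataFrame(rows)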


Table 14.1 Summary of Model Performance

                                                    Unit Labour Cost (ULC)       Reference Cost Index (RCI)
                                                    R Squared   Is Model         R Squared   Is Model
                                                                Specified?                   Specified?
Model 1   Start                                      47.0%      No                52.1%
          End                                        47.8%      No                52.6%
          Include Inner/Outer London                 49.8%      No                55.9%
          Parsimonious                               49.6%      No                55.7%
Model 2   Start                                      47.0%      No                58.6%
          End                                        47.8%      No                58.1%
          Include Staff MFF                          52.5%      No                59.1%
          Include Staff MFF + Inner/Outer London     53.9%      No                60.6%
          Parsimonious                               53.7%      No                60.0%

Variables

RCI/ULC without land and buildings. The Reference Cost Index (RCI) and a measure of Unit Labour Cost (ULC) were used as the two dependent variables. The land and buildings elements of the MFF have been removed from the two dependent variables in order to reduce ambiguity of interpretation.

• RCI: this is taken directly from the reference costs website for the year 2005. It is a case-mix adjusted measure of provider efficiency, in that it measures the provider-specific unit costs relative to what would be the case if the provider operated at national average unit costs.

• ULC: this is a derived measure of hospital efficiency, and represents how effectively providers convert inputs (as measured by labour costs) into outputs (as measured by activity, taken from the reference costs database). It is based on the volume and mix of labour and does not have a price effect (as prices are standardised to the national average).
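One plausible formalisation of this ULC measure, offered as an assumption for illustration rather than the exact construction used, is to cost each trust's staffing volume and mix at national average wage rates, divide by its activity, and index the result on the national average:

\[
\mathrm{ULC}_t = \frac{\sum_g L_{g,t}\,\bar{w}_g}{A_t} \Bigg/ \frac{\sum_t \sum_g L_{g,t}\,\bar{w}_g}{\sum_t A_t},
\]

where L_{g,t} is trust t's whole-time-equivalent staffing in group g, \bar{w}_g is the national average cost per WTE for that group, and A_t is activity taken from the reference costs database. Because wages are standardised, variation in such an index reflects the volume and mix of labour rather than local pay.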

The following variables were used at some point in the process as independent variables (more information is available in Annex 14.I to this chapter).

Population Measures

• Age and need indexes are drawn from the resource allocation formula at PCT level. They are generated for PCTs in the resource allocation formula, and then estimated for providers using the purchaser-provider matrix. Hypothesis: that higher age or need variables are associated with higher unit costs.

• Rural percentage/population density has been generated from PCT data, assigned to the provider according to where the trust headquarters are located. Hypothesis: that more rural areas may be associated with higher costs of provision, or lower quality of service.

Activity, Specialisation and Efficiency

• Average length of stay (ALOS) is a case-mix adjusted indexed average length of stay. Hypothesis: that a higher ALOS pushes up both the RCI and the ULC. (This is a complex variable that is affected by the quality of primary and secondary care, the availability of social care, and other factors.)

• Day-case index has been constructed based on elective procedures only. It is a case-mix adjusted index expressing the ratio of actual to predicted day-case rates. Hypothesis: that more efficient providers will be able to undertake more activity than predicted as day-cases, so that a higher index is associated with lower unit costs.

• Elective index measures the proportion of a provider’s activity that has been undertaken as elective procedures. Again, this is case-mix adjusted and constructed in a similar way to the day-case index. Hypothesis: providers that face higher demand fluctuations associated with unplanned or emergency work will experience greater costs; conversely, higher elective rates will be associated with lower costs.

• OP actual/predicted follow-up appointment ratio is a case-mix adjusted measure. Hypothesis: higher follow-up rates will be associated with higher costs.

• Concentration index measures the proportion of activity (as measured by FCEs) undertaken in the 20 most prevalent HRG codes within the trust. Hypothesis: concentration will be associated with lower unit costs.

• Specialist top-up (STU) gives the proportion of provider funding over and above the tariff that pays for specialist activity. Specialist top-up is measured as a proportion of tariff income from all activity within the scope of PbR and also as a proportion of tariff income from admitted patient activity only. Hypothesis: that more complex care is associated with higher unit costs. The reference costs index does not take this into account, and so it must be included as a measure of (unavoidable) complexity.

• Bed occupancy rate measures the percentage of available beds occupied by patients. Hypothesis: that higher occupancy rates are associated with lower costs, since the proportion of unused beds is reduced.

• Activity (FCEs) and activity squared are two measures associated with throughput and size of the hospital. Hypothesis: economies of scale in hospitals with more activity would reduce costs; similarly, more activity for a given level of capacity, resulting in higher throughput, would reduce costs.

• Acute Beds is a measure of capacity and scale. Hypothesis: unit costs reduce with economies of scale.

Other

• Quality targets are described under the headings key targets, clinical focus, patient focus, capacity and capability.

• Trusts within 20 miles is used as a proxy for competition. Hypothesis: competition drives down cost.

• Trust type and location dummies include teaching status, foundation status, specialist hospital and London location. Hypotheses: higher costs are associated with teaching status, specialist hospital status and London location; lower costs are associated with foundation trust status.

• Labour market measures include medical 3-month vacancy rates, non-medical 3-month vacancy rates, the percentage of pay that is paid to agency staff, and the average cost of labour at local wages.

Descriptives and Correlations
Descriptive statistics and a correlation matrix are provided in Annex 14.II. (Inter-correlations played a role in refining the model specification.) The two main dependent variables (ULC and RCI) are highly correlated with each other (more than 50% common variance) and with two of the independent variables (population density and specialist top-up as a percentage of inpatient activity). The Staff MFF is highly correlated with several variables: the age index, population density, trusts within 20 miles, and the average cost of labour employed, both national and local. The competition variable (trusts within 20 miles) was found to be positively associated with cost, contrary to expectation. However, the high correlations between the competition proxies, the costs of labour and the Staff MFF meant that the competition variables were excluded a priori from the models estimated here.

There are some very high correlations between the independent variables, but these are mostly logical consequences of the definitions of the variables: for example, activity with activity squared and with the bed occupancy variables; population density with trusts within 20 miles; and the two specialisation indexes. Slightly less obvious are the correlations of the age index with trusts within 20 miles, and of the average local cost of labour with population density and trusts within 20 miles. Otherwise there are only a few correlations above 0.5.

Performance Against Hypothesis Predictions
Trust Type. The Specialist dummy is eliminated at the 5% level in all models; the Teaching dummy is significant in both models, being associated with higher reference costs; and Foundation Trust status was associated with lower reference costs in Model One.

Economies of Scale. The activity variable is not significant in either of the ULC models, but both the activity and activity squared variables are statistically significant in the RCI models, with negative and positive coefficients respectively, suggesting economies of scale.

Catchment Population. The age index and the need index are never statistically significant. The rural percentage is negative and statistically significant in the RCI Model Two but nowhere else. That is to say, as the rural percentage increases, the RCI falls, associating rurality with lower costs.

Concentration of Activity and Specialisms. The Concentration Index is never statistically significant. The specialist top-up variable, as a proportion of all tariff income, is always statistically significant and positive.
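Returning to the inter-correlation check described under Descriptives and Correlations above, flagging pairs of variables with high correlations is straightforward in practice. A minimal sketch, assuming a data frame of trust-level variables with hypothetical column names, is:

import pandas as pd

def high_correlations(df: pd.DataFrame, cols: list[str], threshold: float = 0.5) -> pd.DataFrame:
    """Return pairs of variables whose absolute Pearson correlation exceeds the threshold."""
    corr = df[cols].corr()
    pairs = (corr.where(lambda c: c.abs() > threshold)
                 .stack()                       # long format: (var1, var2) -> r
                 .reset_index(name="r"))
    pairs = pairs[pairs["level_0"] < pairs["level_1"]]  # keep each pair once, drop the diagonal
    return pairs.sort_values("r", key=lambda s: s.abs(), ascending=False)

# e.g. high_correlations(data, ["staff_mff", "age_index", "pop_density", "trusts_within_20_miles"])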


Efficiency Measures. The efficiency measures are only considered in the Model Twos, and even then most of them are statistically insignificant. Average length of stay, follow-up outpatient appointments, and the medical and non-medical 3-month vacancy rates never enter the models; electives as a proportion of all activity and the OP actual/predicted follow-up ratio enter only the ULC model, and the average of all quality measures enters only the RCI model.

Add-Ons. Adding in the London dummies or the average cost of labour employed had no effect on the coefficients with either dependent variable in Model One, except on obvious confounders like the Staff MFF; it also decreases the significance of the need variable. With ULC Model Two, including the Staff MFF (which is statistically significant) knocks out the ACLE (national) and the percentage of pay to agency; adding in the London dummies decreases the significance of the teaching hospital and the bed occupancy (proportion of acute beds occupied) variables. With RCI Model Two, the Staff MFF is statistically significant when included but affects only the ACLE (local); adding in the London dummies makes the Staff MFF and the rural percentage variable insignificant.

Conclusions. The hypotheses derived from economic theory have been tested and found wanting in the face of real, awkward data: the characteristics of the catchment population have hardly any effect; the Concentration Index is never significant; and very few of the proxies for efficiency measures make an impact. The only hypothesis that was sustained was the importance of the specialist top-up, representing (unavoidable) complexity. Rigorous empirical parsimony, on the other hand, generates improvements in the model specifications with the variables shown in Table 14.2. In other words, in both the ULC and RCI models:

• the Staff MFF retains its statistical significance, with a positive coefficient, in all but one case;
• tariff income as a % of PbR activity is always significant, with a positive coefficient;
• the occupancy rate is always statistically significant, with a negative coefficient;
• one of the London dummies is always significant, with a positive coefficient;

and, in addition, in the RCI models:

• the teaching hospital dummy is significant, with a positive coefficient;
• activity is significant with a negative coefficient and activity squared with a positive coefficient.

In terms of the dependent variables:

• the differentiation between ULC and RCI was not entirely clear;
• neither is very sensitive to the characteristics of the catchment populations nor to apparent incentives for efficiency;
• the only sign of difference between them is that the ULC appears to be insensitive to levels of activity.

The presentation in the text of the chapter gives the unstandardised coefficients and their t values only. Note that the conventional thresholds for t values are as follows: t > 1.645, p < 0.10; t > 1.96, p < 0.05; t > 3.29, p < 0.001. Full results, including standard errors, standardised coefficients and parsimonious models, are given in Annex 14.III.

Reset test t