Pitfalls of Converting Practice Guidelines Into Quality Measures

SPECIAL COMMUNICATION

Pitfalls of Converting Practice Guidelines Into Quality Measures: Lessons Learned From a VA Performance Measure

Louise C. Walter, MD; Natalie P. Davidowitz, BA; Paul A. Heineken, MD; Kenneth E. Covinsky, MD, MPH

There is great demand for accurate measures of quality to determine whether consistent high-quality medical care is being provided across health care systems. The US Department of Veterans Affairs (VA) manages the largest health care system in the United States and is considered a leader in quality measurement.1 To assess quality of care at each of its medical centers, the VA calculates scores on a broad range of clinical performance measures based on standardized chart review by external auditors.2 These measures are generally based on adherence rates to practice guidelines. A recent study concluded that higher adherence rates to guideline-based performance measures were evidence of improved quality of care across the VA health care system.3 At the local VA level, there is considerable pressure to score well on these measures, which are widely publicized.3,4 This pressure was recently illustrated at the San Francisco VA Medical Center (SFVAMC) when clinicians were told that the medical center's 2002 colorectal cancer screening rate of 58% failed to meet the national VA target rate of 65%. Clinicians were strongly encouraged to screen more patients and were told that failure to increase screening rates could

The Department of Veterans Affairs (VA) manages the largest health care system in the United States, and the Institute of Medicine has recommended that many practices of VA quality measurement be applied to the US health care system as a whole. The VA measures quality of care at all of its sites by assessing adherence rates to performance measures, which generally are derived from evidence-based practice guidelines. Higher adherence rates are used as evidence of better quality of care. However, there are problems with converting practice guidelines, intended to offer guidance to clinicians, into performance measures that are meant to identify poor-quality care. We suggest a more balanced perspective on the use of performance measures to define quality by delineating conceptual problems with the conversion of practice guidelines into quality measures. Focusing on colorectal cancer screening, we use a case study at 1 VA facility to illustrate pitfalls that can cause adherence rates to guideline-based performance measures to be poor indicators of the quality of cancer screening. Pitfalls identified included (1) not properly considering illness severity of the sample population audited for adherence to screening, (2) not distinguishing screening from diagnostic procedures when setting achievable target screening rates, and (3) not accounting for patient preferences or clinician judgment when scoring performance measures. For many patients with severe comorbid illnesses or strong preferences against screening, the risks of colorectal cancer screening outweigh the benefits, and the decision to not screen may reflect good quality of care. Performance measures require more thoughtful specification and interpretation to avoid defining high testing rates as good quality of care regardless of who received the test, why it was performed, or whether the patient wanted it.

JAMA. 2004;291:2466-2470

result in financial penalties for the medical center. Intrigued by the interpretation that lower screening rates indicated a quality-of-care problem, we critically assessed the colorectal cancer screening performance measure by reviewing the medical records of all patients previously audited for colorectal cancer screening at SFVAMC in 2002. Our findings illustrate conceptual problems with the conversion of practice guidelines into quality measures, which have potentially important implications for quality measurement in the VA and other health care systems.

The national VA performance measure for colorectal cancer screening was developed from practice guidelines based on trials demonstrating decreased colorectal cancer mortality in patients offered screening.5-8 The adoption of practice guidelines based on strong scientific evidence lends clinical credibility to the idea that higher screening rates could reflect improved quality of care in the VA system. However, performance measures are not the same as practice guidelines (TABLE).9,10 Practice guidelines are intended to offer guidance by providing information to clinicians to help them decide how best to care for patients, allowing for clinical judgment and patient preferences.9 In contrast, performance measures are quantitative tools, such as rates or percentages, used to set standards that, if not met, almost certainly identify poor-quality care.10

As we conducted our in-depth review of the VA's 2002 audit of colorectal cancer screening at SFVAMC, we identified several pitfalls that can occur when converting practice guidelines into performance measures. These include problems selecting the appropriate target population, determining target screening rates, and measuring screening performance. Such problems seem particularly important to understand given the recent recommendation of the Institute of Medicine to apply many VA performance measurement and reporting practices to the US health care system as a whole.3

Author Affiliations: Division of Geriatrics, San Francisco VA Medical Center, and the University of California, San Francisco (Drs Walter and Covinsky and Ms Davidowitz); and San Francisco VA Medical Center (Dr Heineken).
Corresponding Author: Louise C. Walter, MD, VA Medical Center 181G, 4150 Clement St, San Francisco, CA 94121 ([email protected]).

©2004 American Medical Association. All rights reserved.

Target Population: Does Severity of Illness Matter?

Multiple evidence-based practice guidelines strongly recommend screening persons aged 50 years or older for colorectal cancer with fecal occult blood testing annually, sigmoidoscopy every 5 years, or colonoscopy every 10 years.11,12 The VA translated this guideline into a national performance measure for all VA medical centers that computes the percentage of eligible patients receiving timely colorectal cancer screening by any of these methods.13 The target population included all patients aged 52 years or older who were seen at least once during the fiscal year of the audit in 1 of 8 outpatient medical clinics. No upper age limit was designated. Patients were excluded from the eligible target population only if they (1) had a documented diagnosis of cancer of the esophagus, liver, or pancreas; (2) were enrolled in a hospice program; or (3) had a documented life expectancy of less than 6 months. Documentation by the clinician that colorectal cancer screening was not medically indicated for other reasons was not considered an exclusion criterion by the auditors.13

A sample of 229 patient charts was selected by auditors during fiscal year 2002 to calculate the SFVAMC's rate of colorectal cancer screening. However, this sample was not a simple random sample of those meeting inclusion criteria for the colorectal cancer screening performance measure. Since the VA was also auditing adherence to performance measures for chronic diseases, such as chronic obstructive pulmonary disease and ischemic heart disease, the VA decided as a national policy that it would be more efficient and cost-effective to audit patient charts in which multiple performance measures could be evaluated simultaneously. Therefore, auditors searched national VA inpatient and outpatient databases for specific International Classification of Diseases, Ninth Revision (ICD-9) codes to oversample charts of patients with specific comorbid illnesses.13

We found that the minimal exclusion criteria and a sampling strategy that oversamples chronically ill patients resulted in a skewed target population that was elderly and had multiple comorbid illnesses. For example, of the 229 patient charts audited at the SFVAMC during fiscal year 2002 for performance of colorectal cancer screening, 35% were from patients aged 75 years or older and 24% were from patients with a severe comorbid illness (FIGURE 1). Examples of patients in this sample included an 89-year-old woman with severe Alzheimer disease–related dementia (Mini-Mental State Examination score, 7/30); a 94-year-old man with metastatic prostate cancer; and a 76-year-old man with end-stage renal disease who was receiving dialysis, had had a recent leg amputation, and died 2 months prior to the audit date. Sixty-seven percent of all patients had a Charlson comorbidity-age combined risk score of 4 or more, which is associated with greater than 50% 10-year mortality.14 Since cancer screening offers potential benefits only to patients who are likely to live long enough to develop symptoms from an asymptomatic screen-detectable cancer, patients with severe comorbid illnesses associated with a life expectancy of less than 5 to 10 years are unlikely to benefit from screening.15 Furthermore, the risk of complications from colorectal cancer screening increases with increasing comorbidity.15,16 As a result, the sample in which the VA measured performance of colorectal cancer screening is not a population in which failure to screen almost certainly identifies poor-quality care. In fact, for a substantial proportion of patients chosen for the audit, the risks of screening would outweigh the potential benefits.15

Severity of illness is rarely considered when comparing rates of processes of care such as cancer screening.17,18 It is generally assumed that process-based measures are targeted to patients in whom an intervention has been shown to be efficacious or in whom there are strong reasons to extrapolate evidence of efficacy. However, the VA did not properly consider illness severity or prognosis when implementing its colorectal cancer screening performance measure, which was targeted to patients who were much sicker than the healthy volunteers who participated in randomized trials of screening.5-7

Table. Characteristics of Practice Guidelines vs Performance Measures

Definition
  Practice guidelines: Sources of recommendation to be applied prudently based on clinical experience
  Performance measures: Quantitative tools (eg, rates, percentages) that indicate performance related to a specific process or outcome
Intention
  Practice guidelines: Consolidate information to reduce gaps between scientific knowledge and clinical practice
  Performance measures: Measure the quality of medical care
Language
  Practice guidelines: Flexible: acknowledge the "gray zone" of uncertain appropriateness
  Performance measures: Rigid: provide specific criteria for which practices are "right" and "wrong"
Complexity
  Practice guidelines: Acknowledge medical complexity and patient preferences
  Performance measures: Simplistic algorithms that provide clear scoring instructions for processes that can be measured practically
Accountability
  Practice guidelines: Advisory
  Performance measures: Mandatory: assign penalties or rewards based on performance

Figure 1. Severity of Comorbid Illness by Age of Patients Audited for Colorectal Cancer Screening
[Stacked bar chart showing the percentage of patients with mild, moderate, or severe comorbidity in each age group: 52-64 y (n = 83), 65-74 y (n = 65), 75-84 y (n = 68), and ≥85 y (n = 13).]
Severe comorbidity was defined as having at least 1 of the following: severe liver disease, class III or IV congestive heart failure, metastatic cancer, oxygen-dependent chronic obstructive pulmonary disease, severe dementia, or end-stage renal disease requiring dialysis. Moderate comorbidity was defined as having at least 1 of the following: class II congestive heart failure, symptomatic coronary artery disease, stroke with residual deficits, diabetes with end-organ damage, moderate dementia, or symptomatic chronic obstructive pulmonary disease. Mild comorbidity was defined as not having any moderate or severe comorbidity.
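To make the prognostic index mentioned above concrete, the Charlson comorbidity-age combined score adds weighted points for comorbid conditions to points for age (1 point per decade of age over 40 in the commonly cited version). The sketch below is our own illustration using a subset of the 1987 weights; the condition names, the subset shown, and the mapping of the audited patients' diagnoses are assumptions for illustration, and the exact implementation used in the audit may differ:

```python
# Illustrative sketch of a Charlson comorbidity-age combined score.
# Weights below are a subset of the commonly cited Charlson (1987)
# index, shown for illustration only (not the full index).
CHARLSON_WEIGHTS = {
    "myocardial_infarction": 1,
    "congestive_heart_failure": 1,
    "peripheral_vascular_disease": 1,
    "dementia": 1,
    "copd": 1,
    "diabetes_with_end_organ_damage": 2,
    "moderate_or_severe_renal_disease": 2,
    "metastatic_solid_tumor": 6,
}

def age_points(age: int) -> int:
    """One point per decade of age over 40 (50-59 -> 1, ..., >=80 -> 4)."""
    return min(max((age - 40) // 10, 0), 4)

def combined_score(age: int, conditions: list[str]) -> int:
    """Comorbidity-age combined score: condition weights plus age points."""
    return age_points(age) + sum(CHARLSON_WEIGHTS[c] for c in conditions)

# Example: a 76-year-old man receiving dialysis (moderate or severe renal
# disease) with peripheral vascular disease, roughly resembling one of the
# audited patients described above.
score = combined_score(76, ["moderate_or_severe_renal_disease",
                            "peripheral_vascular_disease"])
print(score)  # 3 age points + 2 + 1 = 6
```

A score of 6 is well above the threshold of 4 or more that the article associates with greater than 50% 10-year mortality, illustrating why such patients are unlikely to benefit from screening.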

Target Screening Rates: Are Higher Rates Always Better?

Performance measures often attempt to address the fact that most interventions are not clinically appropriate for all patients in a target population by setting performance goals at less than 100%. This allows for exceptions without the expense and labor required to audit charts for the reasons an intervention was not indicated. However, the target rates that are ultimately set are not derived from evidence-based processes or practice guidelines. At the VA, the Office of Quality and Performance sets target screening rates by (1) evaluating goals and results from other systems, such as Healthy People 201019 and HEDIS (Health Plan Employer Data and Information Set),20 and (2) using a continuous quality improvement model to reset target goals each year to the screening rate of the top 20% of VA networks from the previous year.21 This approach is based on the principle of benchmarking, in which a health care system tries to emulate its top-performing facilities by making their performance the standard with which others are compared.22 Based on these methods, the VA set the "successful" goal for colorectal cancer screening at 65% for fiscal year 2002 and the "exceptional" goal at 71%.13

However, there are several problems with this approach to setting target rates. First, the tests used to screen for colorectal cancer are also frequently used as diagnostic tests to work up anemia, hematochezia, and other conditions. Since auditors do not consider the indication for the test, the target "screening" rate may be driven by high rates of diagnostic testing at some VA medical centers. For example, while auditors found that the SFVAMC had a colorectal cancer screening rate of 58% (132/229) in fiscal year 2002, we found that 42% (55/132) of the "screened" patients were tested for diagnostic rather than screening purposes (FIGURE 2).
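The benchmarking step described here, resetting each year's target to the rate achieved by the top 20% of networks, amounts to taking roughly the 80th percentile of network screening rates. A minimal sketch of this mechanism, using invented network rates (the VA's actual computation is not specified in this article):

```python
# Sketch of benchmark-based target setting: next year's target is the
# rate achieved by the top 20% of networks. Network rates are invented
# for illustration; they are not VA data.
def benchmark_target(rates: list[float], top_fraction: float = 0.20) -> float:
    """Return the lowest rate among the top `top_fraction` of networks."""
    ranked = sorted(rates, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))  # size of the top group
    return ranked[k - 1]

rates = [0.52, 0.55, 0.58, 0.60, 0.61, 0.63, 0.65, 0.66, 0.71, 0.74]
print(benchmark_target(rates))  # top 2 of 10 networks -> 0.71
```

Note that under this rule the target ratchets upward whenever any networks improve, which is the implicit judgment, criticized below, that higher rates always reflect better care.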
Therefore, despite the high illness severity of the audited patients at the SFVAMC, the "screening" rate was as high as it was because these patients often had gastrointestinal problems requiring colonoscopy in the prior 10 years. While the percentage of diagnostic tests would likely decrease if the audited population were younger and healthier, the methods for setting target rates should consider test indication, since tests performed for diagnostic purposes are not informative about the quality of screening.

Second, even if only tests performed for the purpose of screening were used to set target screening rates, the process of resetting rates according to the top 20% of VA networks carries the implicit judgment that higher screening rates reflect better-quality care. However, a more clinically sensible goal would be to optimally target screening to those most likely to benefit rather than to achieve as high a screening rate as possible, since cancer screening should vary according to prognosis and patient preferences.23 In the extreme, if high target goals push clinicians to screen patients with limited life expectancies, performance measures can lead to unnecessary procedures and distract from more important aspects of care.24,25

Measuring Performance: Should Patient Preferences and Clinical Judgment Be Considered?

Process-based performance measures are usually scored as a simple percentage in which the number of persons who receive an intervention (the numerator) is divided by the number of eligible patients (the denominator). Patient preferences and clinical judgment are sometimes addressed by the way these measures are scored when a patient declines an intervention or when a clinician documents that an intervention is "not medically indicated." However, some performance measures ignore these issues despite practice guidelines that recommend that screening decisions consider patient preferences and severity of comorbid disease.11,12 For example, patients who decline or do not adhere to screening, as well as patients whose clinician documents that screening is not medically indicated, are viewed by the national VA performance measure as eligible patients who failed to get screened; ie, they are counted in the denominator but not the numerator.

There are several justifications for this approach. First, while patient preferences against screening "count against" a medical center, this is partially taken into account by setting target goals at less than 100%.13 Second, from a system perspective, performance measures are intended to give an accurate picture of the relative quality of care delivered by a facility, even if they are not accurate for some individuals.26

However, when a performance measure does not validate responsible clinical decision making, it may undermine clinicians' confidence and trust in performance measurement. It also minimizes the importance of tailoring medical recommendations to the needs and preferences of individual patients, an essential component of patient-centered care.27,28 These concerns are especially acute for colorectal cancer screening, a procedure that depends heavily on patient preferences, adherence, and severity of illness. Our experience at the SFVAMC illustrates the importance of these issues. Of the 81 patients who did not receive colorectal cancer screening, 38 (47%) had documentation that they declined screening, 10 (12%) failed to complete or show up for screening tests, and 9 (11%) had notes detailing why screening was not medically indicated (Figure 2). While this clinical detail cannot be captured when performance measures are based on ICD-9 codes from administrative data, VA auditors already review medical charts that contain this information. Sometimes medical charts may lack details about why patients refuse or do not adhere to screening, making it difficult to determine whether a patient's decision against screening reflects problems with patient-clinician communication or the patient's informed preference. However, while a patient's decision against screening may not always equate with good-quality care, it is currently labeled unequivocally as poor-quality care.
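The scoring arithmetic described above can be made concrete with the audit counts reported in this article. The sketch below is our own illustration, not VA code; it uses our recount of 148 tested patients and shows how the score changes if documented refusals and medical contraindications are removed from the denominator rather than counted as failures:

```python
# Sketch of performance-measure scoring using counts from the SFVAMC audit
# reported in this article. Under current VA scoring, patients who refused
# screening or had a documented contraindication remain in the denominator
# and count as unscreened.
def adherence_rate(screened: int, eligible: int) -> float:
    """Adherence = persons receiving the intervention / eligible persons."""
    return screened / eligible

audited = 229        # all audited patients
tested = 148         # authors' recount (VA auditors counted 132)
refused = 38         # documented refusals
not_indicated = 9    # clinician-documented "not medically indicated"

# As scored by the VA measure: every audited patient is in the denominator.
print(round(adherence_rate(tested, audited), 2))       # 148/229 -> 0.65

# Alternative scoring: exclude informed refusals and documented
# contraindications from the denominator.
eligible = audited - refused - not_indicated
print(round(adherence_rate(tested, eligible), 2))      # 148/182 -> 0.81
```

The same chart documentation, counted differently, moves the facility from just reaching the "successful" goal to well beyond the "exceptional" goal, which illustrates how much the scoring convention, not clinical behavior, can drive the measured rate.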
Another, more obvious measurement error can occur when auditors simply miss the documentation of screening in the medical records.29 Since documentation of screening can occur in multiple places in the electronic medical record, and documentation of a colonoscopy may date back as far as 10 years, it is not surprising that auditors missed some documented screening (Figure 2). Two authors (L.C.W. and N.P.D.) spent an average of 20 minutes per chart searching for screening documentation. Extensively reviewing charts is time-consuming, and problems with accuracy may increase as the growing list of performance measures progressively decreases the time available to search for documentation.

Figure 2. Flow Diagram of Patients Audited for Adherence to the Colorectal Cancer Screening Performance Measure in 2002 at the SFVAMC

229 audited patients:
  148 tested* (documentation of testing in 20 of these patients was missed by VA auditors†): 91 had a test for screening‡ and 57 had a test for diagnosis.
  81 not tested (4 of these patients were counted as screened by VA auditors†): 38 refused the test; 10 failed to complete or show up for the test; 9 had clinician documentation that the test was not medically indicated; and 24 had no documented reason, of whom 16 had a Charlson comorbidity-age risk score of 4 or more.

VA indicates Department of Veterans Affairs; SFVAMC, San Francisco VA Medical Center.
*Patients were counted as tested for colorectal cancer if there was chart documentation of fecal occult blood testing within 1 year, sigmoidoscopy within 5 years, or colonoscopy within 10 years.
†National VA auditors missed documentation of colorectal cancer testing in 20 persons and counted 4 persons as screened for whom we could not find documentation of testing. Therefore, we counted 148 patients as "tested" and 81 as "not tested," whereas VA auditors counted 132 as "tested" and 97 as "not tested."
‡A test was categorized as "diagnostic" rather than "screening" if there was documentation in the medical chart that the test was performed to work up a gastrointestinal complaint or to follow up a previously abnormal examination. The indication for a test is routinely documented in all endoscopy reports at the SFVAMC.

Recommendations

Despite the pitfalls that may occur in converting a practice guideline into a performance measure, the potential benefits of performance measures derived from evidence-based guidelines should not be ignored. The quality of medical care can be improved in many areas, and performance measures are tools that can help achieve selected goals.30 Certainly, the national VA colorectal cancer screening performance measure spotlights the importance of colorectal cancer screening, which is often underused in patients who may benefit.


However, pitfalls of converting practice guidelines into performance measures must be recognized and avoided. If one is not careful, performance measures can become scientifically insupportable algorithms that classify high rates of a procedure as good care regardless of who received it, why it was performed, or whether the patient wanted it. In addition, other authors have suggested that such measures may have harmful unintended consequences for patients (adverse outcomes from unnecessary procedures and diversion from beneficial uses of time with their clinician), clinicians (decreased morale and professionalism), and organizations (consumption of scarce resources).24,31,32

Since performance measures are meant to distinguish between good- and poor-quality care rather than to be advisory, they need to be applied to a more narrowly defined population than practice guidelines. The target population for a performance measure should include only those patients for whom good evidence exists that the benefits significantly outweigh the harms and who do not refuse the intervention. These specifications are especially critical when defining the quality of cancer screening, because a patient's health status and preferences appropriately influence the receipt of cancer screening more than they influence some treatment interventions. For example, health status and preferences are more likely to influence receipt of colorectal cancer screening than receipt of a β-blocker after a myocardial infarction.

Therefore, rather than measuring the percentage of patients screened for cancer during a recommended interval, a better measure of the quality of screening would be to assess thoughtful decision making that incorporates prognosis and patient preferences. Veterans Affairs auditors already review medical charts and could easily abstract this information if clinicians documented their discussions and recommendations in a standardized way. If there is concern that clinicians would inappropriately document a screening test as "not medically indicated," a few randomly chosen charts could be reviewed in depth to ensure that the reasoning behind a screening decision was clinically sound. In fact, in-depth chart reviews and even patient interviews should regularly supplement most quantitative measures of quality to ensure that such measures capture meaningful information and that problems with the measures are identified and fixed. While this will require more resources to be invested in an individual performance measure, it is better to measure a few important dimensions of quality well than to measure a limitless number less optimally. In addition, detailed clinical information is required to determine what should be done to improve the quality of care, since simply reporting screening rates conceals the details needed to inform policy changes. After auditing the VA audit, the SFVAMC has focused on encouraging thoughtful screening discussions and documentation, rather than blaming clinicians for not reaching target screening rates.
The VA health care system is rightly viewed as a leader in quality improvement because of its impressive improvement in the care processes and outcomes of the veterans it serves.1,3 In addition, the VA has created an environment in which clinicians and researchers are encouraged to think critically about measures of quality, to identify potential problems with these measures, and to work to implement needed changes.33 The fact that this project was funded by a VA Career Development Award attests to the VA's commitment to moving the science of quality measurement forward. The difficulties in measuring the quality of colorectal cancer screening demonstrate that even well-intentioned performance measures can have pitfalls, and they emphasize that performance measurement is a young technology that requires further evaluation before it can achieve its full role in improving patient care.

Author Contributions: Study concept and design: Walter, Heineken, Covinsky. Acquisition of data: Walter, Davidowitz. Analysis and interpretation of data: Walter. Drafting of the manuscript: Walter, Davidowitz, Heineken, Covinsky. Critical revision of the manuscript for important intellectual content: Walter, Heineken, Covinsky. Obtained funding: Walter. Administrative, technical, or material support: Davidowitz, Heineken.

Funding/Support: Dr Walter is a recipient of the Veterans Affairs Career Development Award in Health Services Research and Development. Dr Covinsky was supported in part by independent investigator award K02HS00006-01 from the Agency for Healthcare Research and Quality and is a Paul Beeson Faculty Scholar in Aging Research.

Role of the Sponsors: The funding sources had no role in the design, conduct, analyses, or decision to publish this study.

REFERENCES
1. Kizer KW, Demakis JG, Feussner JR. Reinventing VA health care. Med Care. 2000;38(suppl I):I7-I16.
2. Kaufman A. Veterans Administration launches external peer review program. QRC Advis. 1996;12:1-3.
3. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs health care system on the quality of care. N Engl J Med. 2003;348:2218-2227.
4. VA Office of Quality and Performance. External Peer Review Program (EPRP) reports. 2002. Available at: http://vaww.oqp.med.va.gov/oqp_services/performance_measurement/eprp.asp. Accessed October 17, 2003.
5. Hardcastle JD, Chamberlain JO, Robinson MH, et al. Randomized controlled trial of faecal-occult-blood screening for colorectal cancer. Lancet. 1996;348:1472-1477.
6. Kronborg O, Fenger C, Olsen J, et al. Randomized study of screening for colorectal cancer with faecal-occult-blood test. Lancet. 1996;348:1467-1471.
7. Mandel JS, Bond JH, Church TR, et al. Reducing mortality from colorectal cancer by screening for fecal occult blood. N Engl J Med. 1993;328:1365-1371.
8. Selby JV, Friedman GD, Quesenberry CP, Weiss NS. A case-control study of screening sigmoidoscopy and mortality from colorectal cancer. N Engl J Med. 1992;326:653-657.
9. Woolf SH. Practice guidelines: a new reality in medicine. Arch Intern Med. 1993;153:2646-2655.
10. Wenger NS, Shekelle PG; ACOVE Investigators. Assessing care of vulnerable elders: ACOVE project overview. Ann Intern Med. 2001;135:642-646.
11. US Preventive Services Task Force. Screening for colorectal cancer: recommendation and rationale. Ann Intern Med. 2002;137:129-131.
12. Winawer SJ, Fletcher RH, Miller L, et al. Colorectal cancer screening: clinical guidelines and rationale. Gastroenterology. 1997;112:594-642.
13. VA Office of Quality and Performance. FY 2002 VHA Performance Measurement System Technical Manual. 2001. Available at: http://vaww.oqp.med.va.gov/oqp_services/performance_measurement/tech_man.asp. Accessed October 17, 2003.
14. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373-383.
15. Walter LC, Covinsky KE. Cancer screening in elderly patients. JAMA. 2001;285:2750-2756.
16. Sox HC. Screening for disease in older people. J Gen Intern Med. 1998;13:424-425.
17. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries. JAMA. 2000;284:1670-1676.
18. Palmer RH. Process-based measures of quality. Ann Intern Med. 1997;127:733-738.
19. US Department of Health and Human Services, Office of Disease Prevention and Health Promotion. Healthy People 2010 Leading Health Indicators. Available at: http://www.healthypeople.gov/LHI/Priorities.htm. Accessed October 17, 2003.
20. HEDIS 2000: Health Plan Employer Data and Information Set. Washington, DC: National Committee for Quality Assurance; 1999.
21. VA Office of Quality and Performance. Performance measurement. 2003. Available at: http://vaww.oqp.med.va.gov/oqp_services/performance_measurement/faqs.asp. Accessed October 17, 2003.
22. Weissman NW, Allison JJ, Kiefe CI, et al. Achievable benchmarks of care: the ABC's of benchmarking. J Eval Clin Pract. 1999;5:269-281.
23. Harris R. What is the right cancer screening rate? Ann Intern Med. 2000;132:732-733.
24. Walter LC, Eng C, Covinsky KE. Screening mammography for frail older women: what are the burdens? J Gen Intern Med. 2001;16:779-784.
25. Eddy DM. Performance measurement: problems and solutions. Health Aff (Millwood). 1998;17:7-25.
26. Derose SF, Petitti DB. Measuring quality of care and performance from a population health care perspective. Annu Rev Public Health. 2003;24:363-384.
27. Hlatky MA. Patient preferences and clinical guidelines. JAMA. 1995;273:1219-1220.
28. Greer AL, Goodwin JS, Freeman JL, Wu ZH. Bringing the patient back in: guidelines, practice variations, and the social context of medical practice. Int J Technol Assess Health Care. 2002;18:747-761.
29. Accuracy of Data Used to Compute VA's Chronic Disease Care and Prevention Indices for FY 2001. Washington, DC: Dept of Veterans Affairs Office of Inspector General; 2003. Report 01-01544-88.
30. Vijan S. Are we overvaluing performance measures? Eff Clin Pract. 2000;3:247-249.
31. Casalino LP. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med. 1999;341:1147-1150.
32. Sheldon T. Promoting health care quality. Qual Health Care. 1998;7(suppl):S45-S50.
33. Kizer KW. The "new VA." Am J Med Qual. 1999;14:3-20.
