
Med Decis Making 2012;32:376-388. Originally published online 10 October 2011. DOI: 10.1177/0272989X11416870. The online version of this article can be found at http://mdm.sagepub.com/content/32/2/376


Bridging Health Technology Assessment (HTA) and Efficient Health Care Decision Making with Multicriteria Decision Analysis (MCDA): Applying the EVIDEM Framework to Medicines Appraisal

Mireille M. Goetghebeur, PhD, Monika Wagner, PhD, Hanane Khoury, PhD, Randy J. Levitt, PhD, Lonny J. Erickson, PhD, Donna Rindress, PhD

Background. Health care decision making is complex and requires efficient and explicit processes to ensure transparency and consistency in the factors considered. Objectives. To pilot an adaptable decision-making framework incorporating multicriteria decision analysis (MCDA) in health technology assessment (HTA) with a pan-Canadian group of policy and clinical decision makers and researchers appraising 10 medicines covering 6 therapeutic areas. Methods. An appraisal group was convened, and participants were asked to express their individual perspectives, independently of the medicines, by assigning weights to each criterion of the MCDA core model: disease severity, size of population, current practice and unmet needs, intervention outcomes (efficacy, safety, patient reported), type of health benefit, economics, and quality of evidence. Participants then assigned performance scores for each medicine using available evidence synthesized in a "by-criterion" HTA report covering each of the MCDA core model criteria. MCDA estimates of perceived value were calculated by combining normalized weights and scores. Feedback on the approach was collected through structured discussion. Results. Relative weights on criteria varied widely, reflecting the diverse perspectives of participants. Scores for each criterion provided a performance measure, highlighting strengths and weaknesses of each medicine. MCDA estimates of perceived value ranged from 0.42 to 0.64 across medicines, providing comprehensive measures incorporating a large spectrum of criteria. Participants reported that the framework provided an efficient approach to systematic consideration, in a pragmatic format, of the multiple elements guiding decisions, including criteria and values (MCDA core model) and evidence (HTA "by-criterion" report). Conclusions. This proof-of-concept study demonstrated the usefulness of incorporating MCDA in HTA to support transparent and systematic appraisal of health care interventions. Further research is needed to advance MCDA-based approaches to more effective health care decision making. Key words: health care decision making; multicriteria decision analysis; health technology assessment. (Med Decis Making 2012;32:376-388)

Decision making in health care requires consideration of a wide range of scientific, medical, economic, social, and ethical elements1 and calls for both objective scientific judgment2,3 and value judgment.3-6 This process demands transparency and must be based on relevant reasons, two conditions recognized as essential for legitimacy within the Accountability for Reasonableness (A4R) framework.7,8

The number of factors involved and the complexity of the decision-making process call for approaches that systematically analyze, structure, and provide access to the relevant evidence involved in a decision, thus reducing the risk of overlooking or ignoring important information.1 Beyond focusing on cost-effectiveness, such approaches need to integrate all factors considered by decision makers in practice,9-13 spanning clinical, economic, social, organizational, ethical, and legal dimensions.14-17 Such systematic and explicit approaches would enhance transparency and consistency in decision making and hold promise for improved decisions.18-20 There have been a number of advances in this direction.10-13,21-25 For example, Mullen24 reviewed initiatives by regional UK Health Authorities that aim to enhance transparency and accountability in prioritization decision making.


The field of health technology assessment (HTA), in particular, views itself as a bridge between evidence and decision making by improving synthesis, communication, and dissemination of information.26,27 In this context, the need for clearly structured HTA reports has been recognized and has sparked development of a format that provides full access to the underlying evidence (i.e., the HTA core model28) and a checklist that aims to improve the transparency and consistency of HTA reports.29

Multicriteria decision analysis (MCDA) is a tool that facilitates the systematic and explicit consideration of multiple factors that may influence decisions.1,30-33 In MCDA, the decision problem (e.g., the choice of intervention) is analyzed to identify all the factors (i.e., criteria) that may affect the decision and thus develop the full set of decision criteria. Decision makers assign weights to each criterion, thereby making their values and objectives explicit to themselves and others.34 Then decision makers score the performance of each health care intervention with respect to each criterion, a step that prompts explicit consideration of the advantages and disadvantages of each option and fosters discussion within the decision-making group.1,30-32

Weights and scores are combined to produce an aggregate measure of each health care intervention. Although MCDA may be perceived as not intuitive and potentially usurping decision-making authority, if kept simple and designed to measure the value of health care interventions, it facilitates an important dialog and forces decision makers to think hard about what they value, why they value it, and in what context they value it.

The Evidence and Value: Impact on DEcisionMaking (EVIDEM) framework was developed to bridge HTA with MCDA.35 It was designed to provide a core MCDA model adaptable to the context of decision makers using a contextual tool, combined with a "by-criterion" HTA report methodology to provide synthesized evidence at the criteria level.36 This approach is intended to facilitate knowledge transfer, to support the deliberative process through systematic consideration of all decision criteria, to prioritize health care interventions, and to enhance communication of the decisions.

The objective of this study was to perform a proof-of-concept test of this adaptable approach with a group representing the type of health care stakeholders often involved in making health care policy decisions, using case studies from several therapeutic areas.

Received 12 August 2010 from BioMedCom Consultants, Inc., Dorval, Quebec, Canada (MMG, MW, HK, RJL, LJE, DR) and Centre Hospitalier Universitaire de Montréal-McGill University Hospital Center (CHUM-MUHC) Technology Assessment Unit, Montreal, Quebec, Canada (LJE). This study was funded by an unrestricted research grant from Pfizer Canada. The authors acknowledge the contributions of the participants in the appraisal group: Jean-François Bussières, CHU Sainte-Justine Research Center; Benoit Cossette, Sherbrooke University Hospital Centre; Doug Coyle, University of Ottawa; Cheri Deal, University of Montréal; Roland Grad, McGill University; Christine Lee, McMaster University; Mitchell Levine, McMaster University; Diane Lowden, McGill University; G. B. John Mancini, University of British Columbia; Paul Oh, University of Toronto; Genevieve Tousignant, McGill University; Wendy Ungar, University of Toronto; and Marie-Claude Vanier, University of Montréal. Participation does not imply agreement with the content of this article. The authors thank Andrew Smith, PhD, for independent assessment of the quality of some pieces of evidence as well as Mark Legault, Frederic Lavoie, David Fortier, and Donna Marchetti from Pfizer Canada for providing access to submission dossiers and useful suggestions and comments. Supplementary material for this article is available on the Medical Decision Making Web site at http://mdm.sagepub.com/supplemental. Revision accepted for publication 4 June 2011. Address correspondence to Mireille M. Goetghebeur, PhD, BioMedCom Consultants, Inc, 1405 Transcanada Highway, Suite 310, Dorval, Québec, Canada, H9P 2V9; telephone: 514-421-1515; fax: 514-421-1551; e-mail: [email protected].

METHODS

Description of the Framework

The EVIDEM framework is available free of charge through a not-for-profit organization (the EVIDEM Collaboration36), which is supporting its collaborative development with researchers and end users from around the world. These multipurpose tools consist of an MCDA module and an HTA module. The decision criteria on which the framework is built were identified based on an extensive analysis of the literature and a detailed analysis of drug coverage decision-making processes from more than 20 jurisdictions around the world.35 The MCDA model was then built by selecting a set of criteria that fulfills the principles of MCDA modeling (i.e., completeness, nonredundancy, operationality, and mutual independence). The MCDA model thus consists of 15 criteria (quantifiable/intrinsic criteria) that are universally operationalizable (i.e., the low and high ends of the scales are universally agreed upon) (Table 1). Although the cost-effectiveness criterion does not fulfill the requirement of nonredundancy, it was included in the framework because of its major role in current decision-making practices.35


Table 1  Decision Criteria Included in the Multicriteria Decision Analysis (MCDA) Model(a)

Criteria, with scoring scale and anchors (0 = low end of scale, 3 = high end of scale):

Disease impact
  D1: Disease severity (0 = Not severe [minor inconvenience]; 3 = Very severe)
  D2: Size of population (0 = Very rare disease; 3 = Common disease)
Context of intervention
  C1: Clinical guidelines (0 = No recommendation; 3 = Strong first-line recommendation)
  C2: Comparator interventions limitations (unmet needs) (0 = No or very minor limitations; 3 = Major limitations)
Intervention outcomes
  I1: Improvement of efficacy/effectiveness (0 = Lower efficacy/effectiveness than comparators presented; 3 = Major improvement in efficacy/effectiveness)
  I2: Improvement of safety and tolerability (0 = Lower safety/tolerability than comparators presented; 3 = Major improvement in safety/tolerability)
  I3: Improvement of patient-reported outcomes (PRO) (0 = Worse PRO/lower convenience/lower adherence than comparators presented; 3 = Major improvement)
Type of benefit
  T1: Public health interest (0 = No risk reduction; 3 = Major risk reduction)
  T2: Type of medical service (cure) (0 = Minor service; 3 = Major service)
Economics
  E1: Budget impact on health plan (0 = Substantial additional expenditures; 3 = Substantial savings for health plans)
  E2: Cost-effectiveness (0 = Not cost-effective; 3 = Highly cost-effective)
  E3: Impact on other spending (0 = Substantial additional other spending; 3 = Substantial savings)
Quality of evidence
  Q1: Adherence to requirements of the decision-making body (0 = Low adherence; 3 = High adherence)
  Q2: Completeness and consistency of reporting evidence (0 = Many gaps/inconsistent; 3 = Complete and consistent)
  Q3: Relevance and validity of evidence (0 = Low relevance/validity; 3 = High relevance/validity)

(a) Additional contextual decision criteria are listed in Web Appendix A, as well as definitions and rationales for the design and inclusion of each decision criterion.

Contextual criteria, which were deemed operationalizable only in a specific context (e.g., defining the priorities of a specific health plan), were not included in the MCDA core model but were subsequently organized into a contextual tool designed to trigger reflection on these criteria and contextualize the framework (see Web Appendix A for detailed definitions and rationales for each criterion). Indeed, some contextual criteria can be integrated into the MCDA model after they have been operationalized for a specific context. The framework also includes detailed protocols for the collection, analysis, assessment, synthesis, and presentation of evidence for each decision criterion (HTA module) to develop HTA reports that are directly integrated into the MCDA model. (The MCDA model with an integrated HTA report is referred to as an MCDA matrix.) Results of the MCDA model can be used to rank health care interventions for reimbursement decisions/prioritization, and the contextual tool provides a means to capture nonquantifiable considerations that may affect the overall appraisal.


[Figure 1  Study plan. Investigators conducted a literature review for each medicine (public domain and proprietary data), developed an HTA report for each medicine, and organized the synthesized data into the MCDA matrix. A panel of decision makers, specialists, general practitioners, nurses, pharmacists, and health economists/epidemiologists expressed its perspective by weighting each MCDA decision criterion, appraised the medicines by scoring them with respect to each MCDA decision criterion, and held a discussion covering feedback on the process, potential applications, and exploration of qualitative considerations. MCDA, multicriteria decision analysis; HTA, health technology assessment.]

In this pilot study, the usefulness of the framework was tested for formulary decision making with a pan-Canadian panel of health care stakeholders.

Study Design

Testing of the decision framework was performed with a group of representative health care stakeholders who appraised the medicines as case studies from several therapeutic areas; it involved the following (Figure 1):

• Development of "by-criterion" HTA reports: Investigators performed an extensive review of the literature to develop HTA reports and synthesized the data to inform 15 decision criteria organized into an MCDA matrix.35 (A simplified sketch of such a matrix structure follows this list.) In addition to publicly available data, proprietary data of the kind normally required by decision makers were used, provided by a manufacturer (Pfizer) for 10 medicines previously submitted to Canadian drug reimbursement advisory bodies (the Common Drug Review and the Conseil du Médicament) between 2005 and 2007. HTA reports for these 10 medicines from 6 therapeutic areas (cardiovascular disease, endocrinology, infectious disease, neurology, ophthalmology, and oncology) were thus developed.

• Appraisal group: First, the relative importance of each criterion of the MCDA core model was elicited from the participants (weighting), independent of the medicines to be appraised. Then, participants appraised the medicines by scoring each criterion based on the synthesized evidence presented in the MCDA matrix. MCDA estimates for each medicine were calculated by combining weights and scores.

• Feedback on the approach was collected during a structured discussion.
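The sketch below (in Python) illustrates one possible way to represent such a "by-criterion" MCDA matrix in code; the structure and the evidence snippets are purely illustrative assumptions and are not the EVIDEM tool's actual format.

```python
# Simplified, hypothetical representation of a "by-criterion" MCDA matrix for
# one medicine: each criterion holds synthesized evidence and a slot for a score.

mcda_matrix = {
    "D1 Disease severity": {"evidence": "synthesized epidemiology and burden data", "score": None},
    "I1 Improvement of efficacy/effectiveness": {"evidence": "summary tables of trials vs comparators", "score": None},
    "E2 Cost-effectiveness": {"evidence": "published and submitted economic evaluations", "score": None},
    # ... remaining criteria of the 15-criterion core model
}

def record_score(matrix, criterion, score):
    """Record a participant's score on the 4-point scale (0-3) defined for each criterion."""
    if not 0 <= score <= 3:
        raise ValueError("scores range from 0 to 3")
    matrix[criterion]["score"] = score

record_score(mcda_matrix, "I1 Improvement of efficacy/effectiveness", 2)
```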

Development of Health Technology Assessment Reports

An extensive analysis of the literature was performed to identify available data for each medicine according to a standard protocol developed for the EVIDEM framework. Databases and sources searched included PubMed, the Centre for Reviews and Dissemination, Cochrane, trial registries, disease association Web sites (for each disease covered), and the Web sites of the Agency for Healthcare Research and Quality (AHRQ), the National Institute for Clinical Excellence (NICE), the Canadian Agency for Drugs and Technologies in Health (CADTH), and the World Health Organization (WHO), supplemented by hand searching of bibliographies. Search terms included names of the medicines and all their comparators, names of the specific diseases covered, clinical, efficacy, safety, quality of life/QoL/HRQoL, epidemiol*/prevalence/incidence, mortality, guidelines, recommendations, clinical practice, treatment patterns, patient reported outcome*/PRO, cost*, econom*, and productivity.


For components that are country specific (e.g., epidemiology, clinical guidelines), data in the Canadian setting were sought first, and when not available, data from other countries were reported. Data from the public domain were supplemented by proprietary data to provide information for each MCDA decision criterion (e.g., data on the budget impact of a medicine on Canadian drug plans). Data thus identified and collected were analyzed and synthesized following EVIDEM methodology35 and HTA best practices recommendations,37 with the aim of providing necessary, concise, and sufficient information to allow participants to reach an opinion on each decision criterion. Standard of care or comparators in available studies were used as a baseline for all decision criteria involving comparators (i.e., C2, comparative intervention limitations; I1, improvement of efficacy/effectiveness; I2, improvement of safety; I3, improvement of PRO; E1, budget impact [cost of intervention]; E2, cost-effectiveness; E3, economic impact on other spending [hospitalization, disability]).

For each of the 15 criteria of the framework, a systematic and standardized approach was used, including a template for reporting and instructions. For example, for the criterion "improvement of clinical efficacy/effectiveness," instructions indicated that priority is given to peer-reviewed published comparative studies (randomized controlled trial [RCT] and observational) and meta-analyses (e.g., Cochrane reviews), whereas other types of studies may be included if evidence for certain outcomes of interest is limited. If head-to-head clinical trials were lacking, data from trials of key comparators were also synthesized and presented in the MCDA matrix. As a first step, comprehensive evidence tables were developed complying with best practices in HTA.37 These comprehensive tables were then used as a basis to synthesize the most critical data for each outcome measure in a format suitable for the MCDA matrix (i.e., easy-to-read, concise summary tables). The standard format of these summary tables includes study information (authors, year, design, number of patients and age, treatment, and duration) and data (results for treatment arm and comparators, difference across study arms, and statistical significance).

For the criteria related to the quality of evidence and to uncertainty (Q1, adherence to requirements of the decision-making body; Q2, completeness and consistency of reporting evidence; Q3, relevance and validity of evidence), a critical analysis of studies and available evidence (e.g., clinical trials, epidemiological studies, economic studies) was performed using semi-quantitative instruments described previously.35

Studies were thus critically appraised on dimensions outlined in the instruments by one investigator, who had to provide structured critical comments on the study and a grade on a 4-point scale from 0 (no data) to 3 (no gaps/no critical issues). In this pilot study, a second investigator reviewed the critical analyses; the 2 investigators discussed their findings and reached consensus. These data were used to inform the criteria of the MCDA core model related to the quality of evidence (Q1, Q2, and Q3).

Appraisal Group

Inclusion of a broad range of stakeholders has been recognized as a key element in legitimizing resource allocation decisions.38 Thus, the appraisal group was designed to include stakeholders from 6 different categories: specialists, general practitioners, nurses, pharmacists, policy decision makers, and health economists/epidemiologists. Although they represent a key group of stakeholders, patients or patient groups were not included in this exploratory pilot study. (Patient representatives were included in subsequent studies applying the framework.39) To be included, specialists, general practitioners, pharmacists, and nurses had to be experienced in their field (specialists were required to be authors of prominent reviews or guidelines), practicing in a clinical setting, and have a professional interest or experience in health care decision making. Policy decision makers and health economists/epidemiologists had to have an active research program relevant to the field of decision making and/or be members of advisory committees. Thirty-nine experts fulfilling these criteria were contacted from across Canada with an invitation letter describing the project and offering identical minimal honoraria and coverage of expenses; 13 agreed to participate: 3 health policy decision makers, 3 clinical specialists, 1 general practitioner, 2 nurses, 2 clinical pharmacists, and 2 health economists/epidemiologists.

During the workshop, participants first weighted the importance of each criterion of the MCDA core model independent of the interventions to be appraised. They were asked to assign weights to reflect their own personal perspective in the context of a decision-making committee with the goal of optimizing health at the societal level. A simple weight elicitation technique on a 5-point scale was selected for this pilot testing, with 1 representing the least and 5 the most important criteria.40


To appraise the medicines, participants were asked to score each criterion on a 4-point scale using the synthesized HTA information provided within the MCDA matrix. Minimum (0) and maximum (3) scores were defined for each criterion. Medicine appraisals were randomly allocated to participants, while ensuring that each medicine would be appraised by each category of participants. Specialists were nonrandomly assigned the medicines associated with their specialty, then randomly assigned to other medicines. Each participant had a minimum of 5 medicines to appraise, and appraisal was performed in random order. Feedback from the appraisal group was collected during discussion periods that were structured by key topics (i.e., weighting process, additional decision criteria, HTA content, scoring process, MCDA estimate, and potential applications in professional environments).

Analyses and MCDA Estimates of Perceived Value

Analyses were performed in Microsoft Excel. Descriptive statistics (means, medians, maxima, minima, and standard deviations [SD]) were calculated for weights and scores and are reported on their original scales. To calculate MCDA estimates of perceived value using a linear model, the weights (Wx) were normalized (i.e., distributed across the 15 criteria to sum up to 1 for each participant), and the scores (Sx) were standardized by dividing them by the maximum score of 3. Applying the MCDA model thus resulted in an MCDA estimate of perceived value (V) on a scale of 0 to 1 for each medicine, calculated as the sum of the combined normalized weights and scores over all decision criteria (n = 15)35:

V = \sum_{x=1}^{n} \left( \frac{W_x}{\sum_{j=1}^{n} W_j} \times \frac{S_x}{3} \right)
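A minimal Python sketch of this calculation follows; the criterion codes are those of the MCDA core model, but the weights (1-5 scale) and scores (0-3 scale) are invented for a single hypothetical participant and medicine, and only a subset of the 15 criteria is shown for brevity.

```python
# Linear MCDA value estimate: V = sum over criteria of
# (weight normalized to sum to 1) * (score / maximum score of 3).

def mcda_estimate(weights, scores, max_score=3.0):
    total_weight = sum(weights.values())
    return sum((weights[c] / total_weight) * (scores[c] / max_score) for c in weights)

# Hypothetical weights (1-5) and scores (0-3) for one participant and one medicine
weights = {"D1": 4, "D2": 3, "I1": 5, "I2": 4, "E2": 4, "Q3": 5}
scores = {"D1": 2, "D2": 1, "I1": 2, "I2": 1, "E2": 1, "Q3": 3}

print(round(mcda_estimate(weights, scores), 2))  # 0.59 for these inputs
```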

For an MCDA estimate close to 1, an intervention would have to approach the ideal—for example, it would have to have such characteristics as curing an endemic severe disease with no alternative interventions; providing massive improvement in efficacy, safety, and quality of life; and producing major health care savings and public health benefits (such as a very inexpensive and effective cure for malaria globally).

RESULTS

Weighting and Perspectives of Participants

It took participants on average 16 (SD 3) minutes to assign weights to the 15 criteria of the MCDA core model. The weights thus collected are shown in Figure 2. Relative weights varied widely among participants, although there appeared to be good consensus on the importance of "I1, improvement of efficacy/effectiveness" (SD 0.7) and "Q3, relevance and validity of evidence" (SD 0.6). The largest variations in weights across the group were observed for the criteria "D2, size of population" (SD 1.2) and "Q1, adherence to requirements of decision-making body" (SD 1.3). Participants found the criteria "Q3, relevance and validity of evidence" and "I1, improvement of efficacy/effectiveness" most important to decisions, with mean weights of 4.7 and 4.5, respectively. The lowest weights were assigned to the criteria "C1, clinical guidelines" (3.0 [SD 1.0]) and "E1, budget impact" (3.0 [SD 0.9]).

During the discussion following the weighting exercise, participants raised the following question: Which perspective should be adopted when assigning weights? At the outset, participants were asked to express their values in the context of a decision-making committee with the objective of optimizing health at the societal level. A point of discussion was whether "weights should be based on what we think the situation is or what we think the situation should be or what we believe to be realistic." It was felt that stakeholders should provide their "own unique perspective" and "their individual feelings." There was also discussion around the issue that value systems might differ depending on the type of patient and type of disease. The weighting exercise was a means to stimulate reflection in a concrete manner and to explore the actual values of committee members.

Participants also pointed out that other aspects, not included in the MCDA core model, could influence the decision. Several of these aspects were identified by participants, such as political and historical context, priorities of funding bodies and societies regarding access and equity (e.g., for orphan diseases), stakeholder pressures, and the capacity of the health care system to make appropriate use of a health care intervention. These aspects may carry sufficient weight to change the results of an appraisal that was based on the scientific and quantitative aspects alone.


Criterion                                                  Mean   SD    Median   Min   Max
D1 Disease severity                                         3.9   1.0    4.0      2     5
D2 Size of population affected by disease                   3.2   1.2    3.0      1     5
C1 Clinical guidelines for intervention                     3.0   1.0    3.0      2     4
C2 Comparative interventions                                3.6   1.0    3.0      2     5
I1 Improvement of efficacy/effectiveness                    4.5   0.7    5.0      3     5
I2 Improvement of safety & tolerability                     4.3   0.8    4.0      3     5
I3 Improvement of patient reported outcomes                 3.4   1.0    4.0      2     5
T1 Public health interest                                   4.2   0.7    4.0      3     5
T2 Type of medical service                                  4.1   1.0    4.0      2     5
E1 Budget impact on health plan                             3.0   0.9    3.0      2     5
E2 Cost-effectiveness of intervention                       4.1   0.7    4.0      3     5
E3 Impact on other spending                                 3.9   0.9    4.0      3     5
Q1 Adherence to requirements of decision making body        3.1   1.3    3.0      1     5
Q2 Completeness and consistency of reporting evidence       4.1   1.0    5.0      2     5
Q3 Relevance and validity of evidence                       4.7   0.6    5.0      3     5

Figure 2  Mean weights of participants assigned to the decision criteria of the multicriteria decision analysis (MCDA) core model. A 5-point weighting scale was used, with 1 for lowest weight and 5 for highest weight.
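As a rough illustration of the normalization step, the snippet below divides the group mean weights from Figure 2 by their sum; note that the study normalized each participant's weights before averaging, so the normalized weights reported in Figure 3 may differ slightly from these values.

```python
# Normalize the group mean weights from Figure 2 so that they sum to 1
# (an approximation: the study normalized per participant, then averaged).

mean_weights = {
    "D1": 3.9, "D2": 3.2, "C1": 3.0, "C2": 3.6, "I1": 4.5,
    "I2": 4.3, "I3": 3.4, "T1": 4.2, "T2": 4.1, "E1": 3.0,
    "E2": 4.1, "E3": 3.9, "Q1": 3.1, "Q2": 4.1, "Q3": 4.7,
}

total = sum(mean_weights.values())  # approximately 57.1
normalized = {c: round(w / total, 2) for c, w in mean_weights.items()}
print(normalized["I1"], normalized["Q3"], normalized["E1"])  # 0.08 0.08 0.05
```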

Health Technology Assessment Reports of Medicines in MCDA Format

To appraise medicines using the framework, a "by-criterion" HTA report in the MCDA matrix format was produced for each medicine and distributed to participants. Development of these reports was guided by a template with instructions that facilitated data analysis and synthesis and ensured standardized presentation for each criterion of the MCDA core model, as described previously.35 Open-access examples of such HTA reports can be found in the collaborative registry section of the EVIDEM Web site.41 Synthesizing information in a standard fashion and providing sufficient information for stakeholders to grasp the issues was challenging because of the great variability in the body of data available for each medicine. Although information was available for most decision criteria, it was limited for some, such as patient-reported outcomes, which were available for only 4 of 10 medicines.

Participants indicated that integrating the data from HTA reports into the MCDA matrix was an efficient approach to the organization and synthesis of data, facilitating knowledge transfer. It was felt that presenting the data in this "evidence-by-criterion" format was most helpful for appraising interventions and that it helped to evaluate data needs. Regarding the extent of information provided, some participants expressed a desire for more detail on the studies synthesized, which was not available in this proof-of-concept study; however, the framework was designed to be Web based, and such features were developed in subsequent studies, for which Web prototypes were designed with hyperlinks to several levels of detail, including source abstracts or full documents (if open access).39,41

Appraisal of Medicines

Using the HTA reports in MCDA format, the participants scored the medicines for each decision criterion. This process required on average 31 (SD 15) minutes per medicine. A wide range of scores was recorded, capturing differences between the medicines appraised (Table 2). Variations in scores among participants were noted: among the 150 medicine-specific scores (10 medicines × 15 criteria), 59 (39%) had an SD between 20% and 30%, and 14 (9%), including scores for "cost-effectiveness" for 5 medicines, had an SD of 30% or greater.


[Table 2  Mean Scores of Participants Assigned to the Decision Criteria of the Multicriteria Decision Analysis Matrix for the 10 Tested Medicines (A-J). Mean scores (standard deviation) are reported for each of the 15 criteria (D1, D2, C1, C2, I1, I2, I3, T1, T2, E1, E2, E3, Q1, Q2, Q3) and each medicine. A 4-point scoring scale was used (0 for lowest score and 3 for highest score).]


[Figure 3  Multicriteria decision analysis (MCDA) core model estimates for the 10 tested medicines based on weights and scores assigned by participants. Estimates were obtained using a linear model combining normalized weights and scores for each decision criterion. Mean normalized weights: D1 Disease severity, 0.07; D2 Size of population affected by disease, 0.06; C1 Clinical guidelines, 0.05; C2 Comparative interventions limitations, 0.06; I1 Improvement of efficacy/effectiveness, 0.08; I2 Improvement of safety & tolerability, 0.08; I3 Improvement of patient reported outcomes, 0.06; T1 Public health interest, 0.07; T2 Type of medical service, 0.07; E1 Budget impact on health plan, 0.05; E2 Cost-effectiveness of intervention, 0.07; E3 Impact on other spending, 0.07; Q1 Adherence to requirements of decision making body, 0.05; Q2 Completeness and consistency of reporting evidence, 0.07; Q3 Relevance and validity of evidence, 0.08. MCDA estimates (0-1 scale): medicine A, 0.64; B, 0.52; C, 0.54; D, 0.51; E, 0.48; F, 0.48; G, 0.55; H, 0.49; I, 0.44; J, 0.42.]

Participants indicated that the process of scoring supported the deliberative process by promoting systematic consideration of a wide range of decision criteria and by making the appraisal process and the reasoning behind it more explicit. It was considered valuable to force people to think through each concept when evaluating a new health care intervention. It was also felt that this process would highlight those facets of the data about which an appraisal committee may have concerns. Participants also noted that it required working through a couple of assessments before becoming comfortable with the tool and fully comprehending the thought processes it triggered. In some cases, participants indicated that they provided a low score because they deemed the data insufficient or not valid. Also, some participants noted that scoring scales may not be linear, an underlying assumption of the linear model selected for this proof-of-concept study. Participants indicated that the approach tested here could increase efficiency and transparency in decision making because data and/or scores from one group or committee could be shared or reused by others, and because the framework would facilitate reevaluation of interventions as new data were generated.

MCDA Estimates of Perceived Value

MCDA estimates of perceived value for the 10 medicines appraised, resulting from multiplication of normalized weights by scores and summation across criteria, ranged from 0.42 to 0.64 on a scale of 0 to 1 (Figure 3). The criteria related to quality of evidence and adherence to requirements (Q1-Q3) together contributed most to the total value of the MCDA estimates, an absolute contribution of 0.13 to 0.17 points across the 10 medicines (mean 0.15). This was followed by the 3 criteria related to comparative intervention outcomes (I1-I3), which together contributed 0.06 to 0.13 points (mean 0.09). The 2 disease-related criteria (D1 and D2) together contributed a mean of 0.08 points (range, 0.06-0.10). The contribution of the economic criteria (E1-E3) varied greatly across the medicines, from 0.04 to 0.14 points, with a mean of 0.08 points.

Participants cautioned that this type of analysis should not be used as a formulaic approach, since consideration of an aggregate number, such as the MCDA estimate of perceived value, needs to be balanced by other, not easily quantifiable considerations of ethical and context-specific aspects. Rather than focusing on the MCDA estimate, it was felt that much of the value of the approach came from organizing data and scoring by criterion.
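The kind of per-cluster contribution reported above can be computed directly from the linear model, as in the sketch below; the normalized weights mirror those shown in Figure 3, while the scores are hypothetical values for a single medicine, not study data.

```python
# Contribution of a cluster of criteria to the MCDA estimate:
# sum over the cluster of (normalized weight) * (score / 3).

normalized_weights = {"Q1": 0.05, "Q2": 0.07, "Q3": 0.08, "I1": 0.08, "I2": 0.08, "I3": 0.06}
scores = {"Q1": 2, "Q2": 2, "Q3": 3, "I1": 2, "I2": 1, "I3": 1}  # hypothetical 0-3 scores

def cluster_contribution(criteria, weights, scores, max_score=3.0):
    return sum(weights[c] * scores[c] / max_score for c in criteria)

print(round(cluster_contribution(["Q1", "Q2", "Q3"], normalized_weights, scores), 2))  # 0.16 (quality of evidence)
print(round(cluster_contribution(["I1", "I2", "I3"], normalized_weights, scores), 2))  # 0.1 (intervention outcomes)
```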


DISCUSSION

Bridging HTA and decision making with MCDA, the framework was designed to support health care decision making primarily by stimulating reflection and exchange and by making more explicit the natural thinking process underlying the appraisal of health care interventions. This proof-of-concept testing of an adaptable framework in the context of formulary decision making, with a diverse panel of potential end users who appraised 10 medicines from different therapeutic fields, demonstrated the feasibility and utility of this approach in supporting and harmonizing knowledge transfer and appraisal of health care interventions.

Decision criteria of the MCDA core model constitute a set of criteria fundamental to appraisal, selected from an extensive analysis of decision-making processes. The model assumed preferential independence and was designed to fulfill the MCDA methodological requirements of completeness, nonredundancy, mutual independence, and operationality.35 These 15 decision criteria can be used as a starting point in defining an MCDA model for a committee advising or making recommendations on reimbursement of health care interventions (medicines, devices, procedures). Because of its redundancy with other criteria, the inclusion of the cost-effectiveness criterion is methodologically problematic; however, cost-effectiveness studies provide critical evidence for other economic criteria (e.g., E3, impact on other spending) to ensure consideration of cost-consequences. It is proposed that the cost-effectiveness criterion might be entirely removed from the MCDA model, as the information it covers is already considered in other criteria, namely: I1, improvement of efficacy/effectiveness; I2, improvement of safety and tolerability; I3, improvement of patient-reported outcomes; E3, impact on other spending; and E1, budget impact on health plan. In the current debate about the limitations of reliance on cost-effectiveness for decision making,10 MCDA approaches open a path to fully exploit the available evidence and move beyond the cost-effectiveness paradigm for decision making, furthering the A4R approach advocated by Daniels and Sabin.7

The pilot study was limited to exploring decision criteria that could be operationalized from a universal perspective (i.e., universal agreement on what constitutes the low and high end of the scoring scale). For example, everything else being equal, it was assumed that an intervention for a severe disease would have more overall value than an intervention for a mild disease.

However, a number of other criteria related to the context of the decision may play a significant role in the appraisal of an intervention and the final decision on its use and reimbursement. These include political and historical context, health care system capacity and barriers, stakeholder pressures, and ethical principles of resource allocation (utility, efficiency, and fairness, including priorities regarding access and equity). These aspects fall under what Lomas and others42 call "context-sensitive" and "colloquial" forms of evidence and need to be incorporated into the process for it to be truly deliberative.42,43 To ensure systematic consideration of all aspects weighing on a decision, a qualitative tool to support the explicit consideration of such context-specific and ethical criteria was developed and integrated into the EVIDEM framework in a subsequent study.39 In settings where specific priorities are defined (e.g., priority based on age of population), some of these context-specific criteria could be operationalized and included in the MCDA model. For example, in a study exploring priority setting in Ghana, "age of target group" and "poverty reduction" were identified as key criteria through discussion with stakeholders and local policy makers.30 Contextual priorities such as "disabled populations" have been integrated into the MCDA model used by a district health board in New Zealand (Sharon Kletchko, MD, personal communication, 2011). End users of the proposed framework need to reflect on which criteria to include, or whether to expand some criteria into subcriteria, to adapt the framework into a model that best fits their goals within their settings.

In this pilot study, a simple weight elicitation method was used in which weights were directly assigned on a scale of 1 to 5 on the basis of conceptual importance. Simple weight elicitation allows the direct expression of individual perspectives and, by doing so, stimulates communication among decision makers.44 Drawbacks of simple methods include the risk of low discriminatory power if high values are assigned predominantly to criteria for which alternatives show little difference. Many other methods exist to elicit weights (e.g., analytical hierarchy process [AHP]; multiattribute utility analysis, including SMART and SMARTs; swing weighting; conjoint analysis) that have varying degrees of complexity.44-46 Bearing in mind that weight elicitation is an imperfect process and that results may vary depending on the method used, developers of MCDA applications need to balance end users' ease of use and methodological rigor.47 An MCDA model adapted from the EVIDEM framework by a district health board uses a hierarchical weighting technique (distributing 100 points across the main criteria and then 100 points across the subcriteria of each main criterion) (Sharon Kletchko, MD, personal communication, 2011).


This remains a simple technique that can easily be implemented by busy committees. Studies are ongoing to explore the use of different weight elicitation techniques for the MCDA model.

In the present study, applying the MCDA module (weights and scores) allowed calculation of an MCDA estimate of perceived value for each medicine. Scoring was performed using a simple linear 4-point scale (0-3); scoring scales may in certain cases not be linear. More complex MCDA models can be developed to address this issue, keeping in mind that one of the framework's objectives was to keep the approach simple, intuitive, and easy to use. The estimate of perceived value generated by the present MCDA model represents a comprehensive measure encompassing 15 decision criteria that go beyond cost-effectiveness. Inclusion of such a wide range of criteria in a single measure provides a broad scale and allows discrimination among medicines, with MCDA estimates ranging from 0.42 to 0.64 for the 10 medicines tested. However, as noted by the participants, the use of an MCDA value estimate should be based on a firm understanding of its meaning and limitations. Because they reflect the values of the group (captured by weights), MCDA value estimates are committee specific. These estimates are also context specific, since comparator treatments and other factors often differ by setting. For the proposed MCDA core model, an estimate close to 1 would represent an ideal intervention. This interpretation is directly linked to the anchoring of the scoring scales, which were defined from a societal perspective in this MCDA model. Consistent application of an MCDA model by a decision-making committee will produce MCDA value estimates that build a frame of reference for the ranking of interventions. Although this will be helpful to ensure consistent appraisal and facilitate prioritization, ranking should not be used as a formulaic approach but as a basis for deliberation that explicitly addresses context-specific and ethical issues.32,48

Finding, assessing, and synthesizing data on health care interventions and developing knowledge transfer tools are central both to supporting health care decision making and to promoting knowledge translation.49 A specific feature of the present decision-making framework is the use of HTA principles (HTA module) to consistently synthesize evidence for each criterion of the MCDA model.

This requires a detailed systematic methodology to provide appraisers access to evidence that is relevant and necessary to score the intervention on each given criterion. To avoid double counting, careful attention was given to preclude consideration of the same evidence in multiple criteria. For example, when compiling evidence on the criterion "current interventions limitations," the cost aspect needs to be excluded because it is covered by the economic criteria E1, E2, and E3. The methodology draws on existing approaches, such as the Cochrane reviews for clinical data, in which synthesized evidence is reported along with an evaluation of the quality of this evidence, and on other tools, such as CONSORT,50 CHEC,51 STROBE,52 and GRADE.53 The methodology also applies in a pragmatic manner best practices for undertaking and reporting HTAs as described by the European Collaboration for Assessment of Health Interventions.37 Study participants who used these "by-criterion" HTA reports, directly incorporated into the MCDA model, to appraise multiple medicines found that they conveyed a clear picture of the available data to score each criterion, which would be very helpful in committee deliberations. This "evidence-by-criterion" format was also identified as a useful tool to reveal data gaps and to support strategic research planning.

It is important to keep in mind that MCDA approaches are tools to support systematic and consistent decision making and to force reflection on the drivers of decisions. Hence, beyond calculation of MCDA value estimates, the process of weighting and scoring supports group deliberation by stimulating reflection and discussion. The process of assigning weights forces decision makers to reflect on what they value and why they value it. In this study, weighting raised questions among the group, such as which perspective should be adopted in health care decision making and whether the same set of weights and values should be used for all types of interventions. Weighting can also set the stage for discussions on the mission and specific priorities of the health plan being served. With respect to appraising interventions, "packaging" a large quantity of complex data in an evidence-by-criterion format helps clarify concepts and ensures that consideration and discussion are not obstructed by large amounts of undigested information. The process of scoring itself helps decision makers think through each issue separately. At the committee level, variations in weights reveal differences in personal values among participants, highlighting the importance of the composition of decision-making committees,38 whereas variations in scores among committee members could help identify those aspects that may benefit from further discussion.


Most important, weighting and scoring make the multiple considerations affecting decisions explicit and create a clear link between HTA and decision, as discussed by Drummond and others.27 Such MCDA-based approaches, adaptable to specific contexts, provide a means to reveal the different perspectives of decision makers (individual weights and scores), facilitating discussion and consensus seeking on recommendations and decisions.

This proof-of-concept study should be considered in light of its limitations. Although intended to represent all relevant stakeholders, the appraisal group was limited to 13 experts, and testing by more end users is necessary to assess and further develop the framework. As well, even though the framework was designed to be applicable to any type of health care intervention, it was tested with a limited set of medicines; further testing with other types of interventions is under way. The MCDA model and its mechanics (criteria included, weighting method, scoring scales, and calculation of estimates) are also under further development and testing. Methodology for the synthesis of evidence and the development of Web-based HTA reports providing access to evidence at several levels of detail is also undergoing further development.

An explicit set of decision criteria combined with focused HTA reports can help promote collaboration among stakeholders by facilitating communication of data needs and decision rationales and contribute to the 360-degree transparency envisioned by Dhalla and Laupacis.18 MCDA is proposed as a practical avenue to address several of the elements identified by Sibbald and others54 to ensure successful health care decision making and priority setting. This adaptable MCDA-based approach to health technology assessment and appraisal is intended not as a formulaic approach but to stimulate thought and support the balancing act of health care decision making.55 Further research is needed to advance the integration of MCDA approaches with HTA for more effective and transparent knowledge transfer and health care decision making.

REFERENCES

1. Baltussen R, Niessen L. Priority setting of health interventions: the need for multi-criteria decision analysis. Cost Eff Resour Alloc. 2006;4:14.
2. Menon D, Stafinski T, Stuart G. Access to drugs for cancer: does where you live matter? Can J Public Health. 2005;96(6):454-8.
3. PausJenssen AM, Singer PA, Detsky AS. Ontario's formulary committee: how recommendations are made. Pharmacoeconomics. 2003;21(4):285-94.
4. Sinclair S, Hagen NA, Chambers C, Manns B, Simon A, Browman GP. Accounting for reasonableness: exploring the personal internal framework affecting decisions about cancer drug funding. Health Policy. 2008;86(2-3):381-90.
5. Eddy DM. Clinical decision making: from theory to practice. Anatomy of a decision. JAMA. 1990;263(3):441-3.
6. Tunis SR. Reflections on science, judgment, and value in evidence-based decision making: a conversation with David Eddy. Health Aff (Millwood). 2007;26(4):w500-15.
7. Daniels N, Sabin J. Limits to health care: fair procedures, democratic deliberation, and the legitimacy problem for insurers. Philos Public Aff. 1997;26(4):303-50.
8. Daniels N. Decisions about access to health care and accountability for reasonableness. J Urban Health. 1999;76(2):176-91.
9. Teerawattananon Y, Russell S. A difficult balancing act: policy actors' perspectives on using economic evaluation to inform health-care coverage decisions under the universal health insurance coverage scheme in Thailand. Value Health. 2008;2(suppl 1):S52-60.
10. Schlander M. The use of cost-effectiveness by the National Institute for Health and Clinical Excellence (NICE): no(t yet an) exemplar of a deliberative process. J Med Ethics. 2008;34(7):534-9.
11. Camidge DR, Oliver JJ, Skinner C, et al. The impact of prognosis without treatment on doctors' and patients' resource allocation decisions and its relevance to new drug recommendation processes. Br J Clin Pharmacol. 2008;65(2):224-9.
12. Wilson EC, Peacock SJ, Ruta D. Priority setting in practice: what is the best way to compare costs and benefits? Health Econ. 2009;18(4):467-78.
13. Tappenden P, Brazier J, Ratcliffe J, Chilcott J. A stated preference binary choice experiment to explore NICE decision making. Pharmacoeconomics. 2007;25(8):685-93.
14. International Network of Agencies for Health Technology Assessment (INAHTA). HTA Resources. 2010. http://www.inahta.org/HTA/. Accessed 8 March 2010.
15. Velasco GM, Gerhardus A, Rottingen JA, Busse R. Developing health technology assessment to address health care system needs. Health Policy. 2010;94(3):196-202.
16. Johri M, Lehoux P. The great escape? Prospects for regulating access to technology through health technology assessment. Int J Technol Assess Health Care. 2003;19(1):179-93.
17. Lehoux P, Williams-Jones B. Mapping the integration of social and ethical issues in health technology assessment. Int J Technol Assess Health Care. 2007;23(1):9-16.
18. Dhalla I, Laupacis A. Moving from opacity to transparency in pharmaceutical policy. CMAJ. 2008;178(4):428-31.
19. Bryan S, Williams I, McIver S. Seeing the NICE side of cost-effectiveness analysis: a qualitative investigation of the use of CEA in NICE technology appraisals. Health Econ. 2007;16(2):179-93.
20. Vuorenkoski L, Toiviainen H, Hemminki E. Decision-making in priority setting for medicines: a review of empirical studies. Health Policy. 2008;86(1):1-9.
21. Browman GP, Manns B, Hagen N, Chambers CR, Simon A, Sinclair S. 6-STEPPPs: a modular tool to facilitate clinician participation in fair decisions for funding new cancer drugs. J Oncol Pract. 2008;4(1):2-7.
22. Tierney M, Manns B. Optimizing the use of prescription drugs in Canada through the Common Drug Review. CMAJ. 2008;178(4):432-5.
23. Martin DK, Pater JL, Singer PA. Priority-setting decisions for new cancer drugs: a qualitative case study. Lancet. 2001;358(9294):1676-81.
24. Mullen PM. Quantifying priorities in healthcare: transparency or illusion? Health Serv Manage Res. 2004;17(1):47-58.
25. Wilson E, Sussex J, Macleod C, Fordham R. Prioritizing health technologies in a Primary Care Trust. J Health Serv Res Policy. 2007;12(2):80-5.
26. Battista RN, Hodge MJ. The evolving paradigm of health technology assessment: reflections for the millennium. CMAJ. 1999;160(10):1464-7.
27. Drummond MF, Schwartz JS, Jonsson B, et al. Key principles for the improved conduct of health technology assessments for resource allocation decisions. Int J Technol Assess Health Care. 2008;24(3):244-58.
28. Lampe K, Makela M, eds. EUnetHTA work package 4 team. HTA core model for medical and surgical interventions. Copenhagen, Denmark: EUnetHTA; 2007.
29. Hailey D. Toward transparency in health technology assessment: a checklist for HTA reports. Int J Technol Assess Health Care. 2003;19(1):1-7.
30. Baltussen R, Stolk E, Chisholm D, Aikins M. Towards a multi-criteria approach for priority setting: an application to Ghana. Health Econ. 2006;15(7):689-96.
31. Baltussen R, ten Asbroek AH, Koolman X, Shrestha N, Bhattarai P, Niessen LW. Priority setting using multiple criteria: should a lung health programme be implemented in Nepal? Health Policy Plan. 2007;22(3):178-85.
32. Baltussen R, Youngkong S, Paolucci F, Niessen L. Multi-criteria decision analysis to prioritize health interventions: capitalizing on first experiences. Health Policy. 2010;96(3):262-4.
33. Nobre FF, Trotta LT, Gomes LF. Multi-criteria decision making: an approach to setting priorities in health care. Stat Med. 1999;18(23):3345-54.
34. Peacock S, Mitton C, Bate A, McCoy B, Donaldson C. Overcoming barriers to priority setting using interdisciplinary methods. Health Policy. 2009;92(2-3):124-32.
35. Goetghebeur M, Wagner M, Khoury H, Levitt R, Erickson LJ, Rindress D. Evidence and value: impact on DEcisionMaking: the EVIDEM framework and potential applications. BMC Health Serv Res. 2008;8(1):270.
36. EVIDEM Collaboration. EVIDEM Collaboration. 2010. Available from: http://www.evidem.org
37. Busse R, Orvain J, Velasco M, Perleth M. Best practice in undertaking and reporting health technology assessments. Working group 4 report. Int J Technol Assess Health Care. 2002;18(2):361-422.
38. Gruskin S, Daniels N. Process is the point: justice and human rights: priority setting and fair deliberative process. Am J Public Health. 2008;98(9):1573-7.
39. Goetghebeur MM, Wagner M, Khoury H, Rindress D, Gregoire JP, Deal C. Combining multicriteria decision analysis, ethics and health technology assessment: applying the EVIDEM decision-making framework to growth hormone for Turner syndrome patients. Cost Eff Resour Alloc. 2010;8(1):4.
40. van Til JA, Dolan JG, Stiggelbout AM, Groothuis K, Ijzerman MJ. The use of multi-criteria decision analysis weight elicitation techniques in patients with mild cognitive impairment: a pilot study. Patient. 2008;1(2):127-35.
41. EVIDEM Collaboration. Open access prototypes of the Collaborative registry. 2010. Available from: http://www.evidem.org/evidem-collaborative.php
42. Lomas J, Culyer T, McCutcheon C, McAuley L, Law S. Conceptualizing and combining evidence for health system guidance. 2005. Available from: http://www.chsrf.ca/kte_docs/Conceptualizing%20and%20combining%20evidence.pdf
43. Culyer AJ. Equity of what in healthcare? Why the traditional answers don't help policy, and what to do in the future. Healthc Pap. 2007;8 Spec No:12-26.
44. Dolan JG. Multi-criteria clinical decision support: a primer on the use of multiple-criteria decision-making methods to promote evidence-based, patient-centered healthcare. Patient. 2010;3(4):229-48.
45. Clemen RT, Reilly T. Making hard decisions with DecisionTools. Mason, OH: South-Western Cengage Learning; 2001.
46. Mussen F, Salek S, Walker S. A quantitative approach to benefit-risk assessment of medicines, part 1: the development of a new model using multi-criteria decision analysis. Pharmacoepidemiol Drug Saf. 2007;16(Suppl 1):S42-6.
47. Ryan M, Scott DA, Reeves C, et al. Eliciting public preferences for healthcare: a systematic review of techniques. Health Technol Assess. 2001;5(5):1-186.
48. Felli JC, Noel RA, Cavazzoni PA. A multiattribute model for evaluating the benefit-risk profiles of treatment alternatives. Med Decis Making. 2009;29(1):104-15.
49. Straus SE, Tetroe JM, Graham ID. Knowledge translation is the use of knowledge in health care decision making. J Clin Epidemiol. 2011;64(1):6-10.
50. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285(15):1987-91.
51. Evers S, Goossens M, de Vet H, van Tulder M, Ament A. Criteria list for assessment of methodological quality of economic evaluations: consensus on health economic criteria. Int J Technol Assess Health Care. 2005;21(2):240-5.
52. The STROBE group. The STROBE statement: checklist of essential items. 2005. Available from: http://www.strobe-statement.org
53. Atkins D, Briss PA, Eccles M, et al. Systems for grading the quality of evidence and the strength of recommendations II: pilot study of a new system. BMC Health Serv Res. 2005;5(1):25.
54. Sibbald SL, Singer PA, Upshur R, Martin DK. Priority setting: what constitutes success? A conceptual framework for successful priority setting. BMC Health Serv Res. 2009;9:43.
55. Lomas J. Decision support: a new approach to making the best healthcare management and policy choices. Healthc Q. 2007;10(3):16-8.
