OBSTETRICS

Measuring Quality in Maternal-Newborn Care: Developing a Clinical Dashboard

Ann E. Sprague, RN, PhD,1 Sandra I. Dunn, RN, PhD,1 Deshayne B. Fell, MSc,1 JoAnn Harrold, MD, FRCPC,2 Mark C. Walker, MD, FRCSC,1,3,4 Sherrie Kelly, MSc,1 Graeme N. Smith, MD, PhD, FRCSC5

1 Better Outcomes Registry & Network (BORN) Ontario, Ottawa ON
2 Department of Pediatrics, Children’s Hospital of Eastern Ontario, Ottawa ON
3 Departments of Obstetrics and Gynecology, and Epidemiology, University of Ottawa, Ottawa ON
4 Department of Obstetrics and Gynecology, The Ottawa Hospital and the Ottawa Hospital Research Institute, Ottawa ON
5 Department of Obstetrics and Gynecology, Kingston General Hospital, Queen’s University, Kingston ON

Abstract


Pregnancy, birth, and the early newborn period are times of high use of health care services. As well as opportunities for providing quality care, there are potential missed opportunities for health promotion, safety issues, and increased costs for the individual and the system when quality is not well defined or measured. There has been a need to identify key performance indicators (KPIs) to measure quality care within the provincial maternal-newborn system. We also wanted to provide automated audit and feedback about these KPIs to support quality improvement initiatives in a large Canadian province with approximately 140 000 births per year. We therefore worked to develop a maternal-newborn dashboard to increase awareness about selected KPIs and to inform and support hospitals and care providers about areas for quality improvement.


We mapped maternal-newborn data elements to a quality domain framework, sought feedback via survey for the relevance and feasibility of change, and examined current data and the literature to assist in setting provincial benchmarks. Six clinical performance indicators of maternal-newborn quality care were identified and evidence-informed benchmarks were set. A maternal-newborn dashboard with “drill down” capacity for detailed analysis to enhance audit and feedback is now available for implementation. While audit and feedback does not guarantee individuals or institutions will make practice changes and move towards quality improvement, it is an important first step. Practice change and quality improvement will not occur without an awareness of the issues.

Key Words: Quality improvement, performance measures, dashboard, audit and feedback, obstetrics, maternal-newborn care

Competing interests: None declared.


Received on March 27, 2012. Accepted on September 12, 2012.

J Obstet Gynaecol Can 2013;35(1):29–38

JANUARY JOGC JANVIER 2013 l 29


GLOSSARY

Quality improvement: Better patient experience and outcomes achieved through changing provider behaviour and organization through use of a systematic change method and strategies.1

Performance measurement: The use of both process and outcomes measures to understand organizational performance and effect positive change to improve care.2,3

Key performance indicator: A quantifiable measure that is tied to organizational goals and is used to evaluate performance over a designated time period. It is used to determine whether the practice, hospital, or other accountable organization is meeting predefined targets. Appropriate benchmarks are necessary to determine how performance compares against desired goals and objectives and against others.4

Dashboard: A performance measurement system that provides data on structure, process, and outcome variables and incorporates the following functions:
a. reporting on a selection of performance indicators (feedback);
b. comparing performance to established ideal levels (benchmarking); and
c. providing alerts when performance is sub-optimal to trigger action (warning or signal).
Based on the principle of a dashboard in a vehicle, dashboards provide a visual display of how various components or systems within an organization are functioning.5

Audit and feedback:
a. A quality improvement process that seeks to improve patient care and outcomes through systematic review of care against explicit criteria and the implementation of change. Aspects of the structure, processes, and outcomes of care are selected and systematically evaluated against explicit criteria. Where indicated, changes are implemented at an individual, team, or service level and further monitoring is used to confirm improvement in health care delivery.6
b. The provision of any summary of clinical performance over a specified period of time. The summary may include data on processes of care (e.g., number of diagnostic tests ordered), clinical endpoints (e.g., blood pressure readings), and clinical practice recommendations (e.g., proportion of patients managed in line with a recommendation).7

ABBREVIATIONS
BIS   BORN Information System
BORN  Better Outcomes Registry & Network
MND   maternal-newborn dashboard
QI    quality improvement

INTRODUCTION

Quality is receiving increasing attention as an attribute of health care that can both improve care and provide a mechanism for accountability. In 1990, the Institute of Medicine completed an extensive exercise to define quality in health care, and stated that it was “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.”8 Specific attributes of quality health care have been defined so that this concept can be measured. Commonly used quality domains include accessibility, effectiveness, safety, person-centredness, equitability, and appropriateness of resources.9,10

Governments in Canada and the United States have legislated hospital governing bodies to be accountable for quality.11,12 While many hospitals and other health care agencies had quality mandates and committees predating this legislation to meet accreditation standards or certification programs, there are now specific requirements. These are occasionally tied to funding, compensation, and the ability to provide specific services, as well as a requirement for public reporting on some health care issues.

Pregnancy, birth, and the early newborn period are times of high use of health care services. Although these times provide opportunities for providing quality care, they are also potentially times of missed opportunities for health promotion; furthermore, there may be safety issues and increased costs for the individual and the system when quality is not well defined or measured. Almost all women have multiple contacts with the health care system during these times, including consultation with a variety of care providers, diagnostic testing, and a hospital admission. Most newborns also spend time in hospital, and a small proportion of them require intensive care.

The objective of this project was to identify key performance indicators to measure quality within the maternal-newborn system.
We also wanted to provide an automated mechanism for audit and feedback of data about these key performance indicators to support quality improvement initiatives in a large Canadian province with approximately 140 000 births per year.

ASSESSMENT OF THE PROBLEM

BORN Ontario has been in place since 2009 and has responsibility for measuring the performance of provincial maternity care services and supporting continuous quality improvement in this setting. Through our routine surveillance reports,13 we have found considerable variation in some maternal-newborn clinical practices and patient outcomes across the province, indicating optimal care is not always delivered. While there is clearly a need for quality improvement, and while best practice evidence is available for many pregnancy and birth-related care issues, deciding how to proceed and how to plan for large-scale improvement strategies is challenging.

In Ontario, there are 109 hospitals of varying sizes and levels of care that provide maternal-newborn services, approximately 80 midwifery practices, and about 140 000 babies born annually. All hospitals and midwifery practices contribute pregnancy, birth, and early newborn data to the BORN Information System, but we have not yet developed an optimal mechanism for disseminating data feedback to engage stakeholders in QI initiatives. This report of a Phase 1 project outlines our process to develop a readily accessible, online maternal-newborn dashboard to initiate or augment data feedback to hospitals and care providers regarding specific areas for QI.

Quality improvement is a multiphase process requiring the identification of goals for an entire team to work towards, a data collection and feedback process, targeted evidence-informed strategies for implementation and maintenance, and ongoing evaluation.14,15 To initiate a QI process, data are required to identify practice gaps and establish baseline performance and outcomes. As the process evolves, data are then used to provide regular feedback and to identify areas in which improvement has occurred or further improvement is required.

In Ontario, we have the means to collect, display, and provide feedback on data. When a woman is admitted to hospital to give birth, data are collected from her medical record, from clinical forms, and from the patient herself. After the birth, these data are entered into the BIS, either through a secure website by hospital staff or uploaded directly from hospitals that have electronic record capability. Each site has access to its own data, and the BORN Science Team reports on outcomes aggregated to regional and provincial levels at regular intervals.

Data quality is an important concern, especially because the data will be used to populate the dashboard and will be used by hospitals and the province to assess the quality of care provided. BORN has an ongoing program of data verifications, automated quality checks, and formal training sessions for individuals collecting and entering data to ensure that a high level of data quality is maintained.16 BORN also routinely reports on the proportion of missing data, and provides cautionary notes when this may have an impact on the accuracy of the results.
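One routine check described above, reporting the proportion of missing data per field and flagging fields where missingness could affect accuracy, can be sketched as follows. This is an illustrative sketch only, not BORN's actual tooling; the field names, records, and caution threshold are hypothetical.

```python
# Illustrative sketch of a missing-data report for registry records.
# Field names and the 5% caution threshold are hypothetical.

def missing_report(records, fields, caution_threshold=0.05):
    """Return {field: (proportion_missing, needs_caution)} for a record set."""
    n = len(records)
    report = {}
    for field in fields:
        # Treat absent keys, None, and empty strings as missing values.
        missing = sum(1 for r in records if r.get(field) in (None, ""))
        proportion = missing / n
        report[field] = (proportion, proportion > caution_threshold)
    return report

records = [
    {"birth_weight": 3400, "gbs_swab": "yes"},
    {"birth_weight": None, "gbs_swab": "no"},
    {"birth_weight": 2900, "gbs_swab": ""},
    {"birth_weight": 3100, "gbs_swab": "yes"},
]
report = missing_report(records, ["birth_weight", "gbs_swab"])
# Both fields are 25% missing here, so both would carry a cautionary note.
```

A report like this would accompany each aggregate result so that readers can judge whether missingness undermines the estimate.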

Because of the large amount of data collected in the BIS, we needed to make evidence-informed decisions about the clinical practices that were most in need of improvement and were most likely to be amenable to change. To guide our process in developing system-wide maternal-newborn quality care indicators, we used Janakiraman and Eker’s17 criteria for a good measure:
a. easy to define and observe,
b. important to patients and health care providers,
c. “ripe” for improvement (amenable to change), and
d. obtainable from existing or easily collected data.

DATA/INFORMATION FEEDBACK

Ultimately, the purpose of developing quality indicators is to promote change, to decrease practice variation in the system, and to encourage best practices. Deciding how to provide feedback on quality indicators led us to investigate various data feedback mechanisms such as scorecards, dashboards, and standard paper-based reports.

A scorecard (performance measurement system) provides periodic “snapshots” of performance associated with an organization’s strategic objectives and plans. It measures activity at a summary level within pre-established time frames (i.e., monthly or quarterly) against predefined targets to assess whether performance is within acceptable ranges. The scorecard indicators help managers to communicate strategy and users to focus on the highest priority tasks.18

In contrast, a dashboard (performance monitoring system) measures processes in real time, just as an automobile dashboard permits verification of important information (e.g., current speed, fuel level) at a glance. A dashboard is linked directly to systems that capture events as they happen, and it warns users through alerts or exception notifications when performance against any number of metrics deviates from the norms.18 Dashboards provide an instantaneous picture of performance status, generally through a system of colour codes (typically green, yellow, and red) that alerts users to areas needing improvement before they become problems.19 Because the BIS is capable of real-time measurement of practice, we chose a dashboard concept.

APPROACH TO INDICATOR SELECTION

The Maternal Newborn Outcomes Committee of BORN established a dashboard subcommittee to direct this project. Membership was interprofessional and included representatives of obstetrics, neonatology, midwifery, nursing, pediatrics, and epidemiology from throughout the province. The members of the subcommittee were chosen


for their clinical expertise and because they were responsible for quality improvement within their practice settings, which covered all levels of care.

The group’s first task was to map the existing BIS data elements for maternal-newborn care to a quality domain framework to determine which single or combined data elements were potential indicators of quality within the antepartum, intrapartum, postpartum, and newborn time periods (Table 1). An a priori decision to include breastfeeding as a stand-alone area in the maternal-child continuum was made because of the importance of this practice to infant/child health.

The domains of quality we used were chosen following a literature and Internet review. A number of quality reports were reviewed (Ontario Health Quality Council,20 the Ontario Ministry of Health and Long-Term Care,20 the Canadian Institute for Health Information,10 the Institute of Medicine,9 the World Health Organization,21 the National Health Service High-Level Performance Framework from the United Kingdom,22 and Australia’s National Health Performance Committee’s Framework23). From these, we selected the six quality domains of health care common to all reports: accessibility, effectiveness, safety, person-centredness, equity, and appropriateness of resources. Other domains measuring system impact, population health, or individual practitioner expertise (e.g., capacity, competence, efficiency, integration, sustainability) were excluded as these would be more appropriate for a scorecard; additionally, few data elements or indicators within the BORN Ontario system were reflective of these domains.

To determine the data available for indicators to populate the domains of quality for the maternal-newborn population, and to meet Janakiraman and Eker’s17 criteria for a good indicator (being easy to define and observe, having existing or easily collected data), we used the BIS, which captures 100% of hospital births in Ontario.
A small group of members of the dashboard subcommittee reviewed BIS data elements seeking high quality data that reflected current practice issues, and slotted them into the quality domain framework. In addition, data elements that reflected quality care “hot topics,” or our own clinical practice experiences, were also included.

To seek feedback on the potential indicators of quality, to establish face validity, and to reduce the list to a manageable number for hospitals, the dashboard subcommittee undertook a decision-making process that included survey and deliberation.24 Based on Hermann and Palmer’s framework for selecting core quality measures,25 a web-administered survey was developed for the purpose of scoring each of the proposed indicators on a scale from 1 (strongly disagree) to 5 (strongly agree), first for its clinical relevance to maternal-newborn care and then for its amenability to change when required. A third open-ended question was included to identify other indicators that could or should be considered.

Seven members of the dashboard subcommittee were asked to complete the survey, and six completed surveys were returned. Respondents represented different professional groups (2 obstetricians, 1 nurse with an obstetric background, 1 nurse with a neonatal background, 1 midwife, and 1 neonatologist/administrator), and they had a sense of what would be meaningful, feasible, and actionable at different levels of care settings in the province. A high quality indicator was defined as any indicator selected by at least five of six respondents to be both highly clinically relevant and amenable to change (rated as either “agree” or “strongly agree”).

Ten indicators met these criteria for clinical relevance, seven for being amenable to change, and five for both categories (clinically relevant and amenable to change). These indicators covered the antenatal, intrapartum, and newborn components of the framework, and represented two of the six quality domains (effectiveness and appropriateness of resources). To broaden the clinical scope of the indicators to include breastfeeding and the domain of safety, two additional indicators were recommended by the committee chairs for further debate. While neither of these indicators was rated as highly as those from the survey results, both were relevant to clinical practice and of interest throughout the province. Finally, seven indicators were recommended for further investigation of scientific validity through systematic literature reviews and examination of existing provincial data (Figure 1).
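The selection rule described above, retaining an indicator only when at least five of six respondents rate it “agree” (4) or “strongly agree” (5) on both clinical relevance and amenability to change, can be sketched as follows. This is a hypothetical illustration of the rule, not the committee's actual tooling; the example scores are invented.

```python
# Hypothetical sketch of the survey-based selection rule.
# Scores are 1-5 Likert ratings, one per respondent; 4 or 5 counts as agreement.

def retained(relevance, amenability, min_agree=5):
    """True if at least `min_agree` respondents agree on BOTH criteria."""
    def agrees(scores):
        return sum(s >= 4 for s in scores)
    return agrees(relevance) >= min_agree and agrees(amenability) >= min_agree

# Six respondents: 6 agree on relevance and 5 on amenability, so retained.
keep = retained([5, 4, 5, 4, 4, 5], [4, 4, 5, 5, 4, 3])
# Relevant, but only 3 respondents rate it amenable to change, so dropped.
drop = retained([5, 4, 5, 4, 4, 5], [4, 4, 2, 5, 3, 3])
```

Requiring agreement on both criteria is what reduced the candidate list to the five survey-selected indicators before the committee chairs added two more.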
When there was concern about a particular indicator not being included even though it was clinically important (e.g., choosing repeat Caesarean section before 39 weeks over Caesarean section in women in spontaneous labour at term), we discussed alternate solutions for ensuring that the indicator would not be sidelined. In the case of Caesarean section, the BORN Information System was also developing a standardized report on the Robson classification26 for Caesarean section that any hospital could generate or that BORN could generate at a provincial level. This report provides an overview of the Robson categories and documents the proportion of Caesarean sections attributable to each category. Although this indicator was not included on the dashboard (likely because management of labour is much more difficult to change than the timing of a Caesarean section for a woman at low risk), it has not been ignored.

To validate the seven potential indicators as being appropriate for use throughout the province, we first

Table 1. Initial draft of BORN Ontario maternal-newborn dashboard indicator framework (potential quality indicators)

The framework crossed six quality domains with the maternal-newborn time periods (antepartum, intrapartum, postpartum, newborn, breastfeeding, and the perinatal period).

Domain definitions:
● Accessible: right care at right time by right provider in right setting
● Effective: care based on best evidence
● Safe: avoiding / preventing errors or injury
● Person-centred: respectful and responsive to individual patient preferences and needs
● Equitable: access to care stratified by demographic indicators
● Appropriately resourced: sufficient providers, funding, equipment, supplies

Potential quality indicators:
● Percentage of women with first trimester visit
● Percentage of women who had prenatal screening
● Percentage of women with Group B streptococcus swab at 35 to 37 weeks’ gestation
● Proportion of low-risk women delivering at term having electronic fetal monitoring
● Length of time (in minutes) in second stage for nulliparous women with and without epidural anaesthesia
● Percentage of Caesarean section (in defined population)
● Percentage of repeat Caesarean section in low-risk women at less than 39 weeks’ gestation
● Rate of induction at < 41 weeks with no indication
● Percentage of women delivered by someone other than a physician or midwife
● Surgical site infection: percentage of women having a Caesarean section who get antibiotics before incision
● Proportion of women with postpartum hemorrhage
● Proportion of women receiving blood transfusions
● Proportion of women with 3rd and 4th degree tears
● Percentage of women with length of stay > 48 hours for vaginal birth
● Term newborns admitted to the NICU or needing transfer to a higher level of care
● Proportion of babies that did not receive newborn screening as per guidelines
● Proportion of unsatisfactory samples (newborn screen)
● Percentage of babies receiving ventilatory support with T-piece during neonatal resuscitation
● Percentage of babies receiving oxygen during resuscitation
● Proportion of babies (less than 28 weeks) resuscitated in a plastic bag for thermoregulation
● Percentage of cases of birth depression (5-minute Apgar < 3 and arterial cord pH < 7.0)
● Proportion of term babies receiving supplementation among mothers who intended to breastfeed
● Proportion of term babies exclusively breastfeeding at discharge in women who intended to breastfeed
● Maternal experiences: satisfaction with care received
● Access to care stratified by demographic indicators


Figure 1. Seven indicators recommended for further review for the BORN Ontario dashboard
● Proportion of women delivering at term with a Group B streptococcus swab completed at 35 to 37 weeks’ gestation
● Proportion of unsatisfactory newborn screening samples
● Proportion of Caesarean sections in full-term women in spontaneous labour
● Proportion of women induced for an indication of post-dates who were at less than 41 weeks’ gestation at delivery
● Proportion of repeat Caesarean sections in low-risk women < 39 weeks’ gestation
● Proportion of women having a vaginal birth with third and fourth degree tears
● Proportion of term babies receiving supplementation born to mothers who intended to breastfeed

extracted data from the BIS for fiscal year 2009–2010 to assess historical and current performance on these indicators across Ontario’s 14 health regions (Local Health Integration Networks). Simultaneously, evidence summaries on each of the potential indicators were developed in collaboration with the Knowledge to Action Research Centre at the Ottawa Hospital Research Institute.27–31 This group, which has expertise in the review and synthesis of literature to support evidence-informed health care decision-making, assisted with determining the level of scientific evidence to support each indicator. For example, the evidence summary on early term repeat Caesarean section (i.e., before 39 weeks’ gestation) in a defined population determined that as a result of this practice there were indeed objective risks to babies that could be reduced by delaying delivery.

Following review of the data and evidence summaries, the committee removed one indicator and refined some of the others, leaving six (Table 2). In five of the six, the potential for improvement in rates was obvious. The remaining indicator (rate of screening for Group B Streptococcus) is currently satisfactory throughout all health regions of the province; however, the committee felt it was important at the outset to have the dashboard reflect not only performance areas requiring improvement, but also areas in which performance was good.

ESTABLISHING PERFORMANCE BENCHMARKS

To set benchmarks for performance, we used peer-reviewed literature, current clinical practice within Ontario, and the clinical expertise of our committee members. The literature provided no recommended benchmarks for our chosen indicators, with the exception of the rate of episiotomy. An episiotomy rate of < 15% in spontaneous vaginal births was recommended in a 2005 systematic review,32 and in a study from Alberta rates of 13% at regional centres and much lower at rural centres were achieved.33 For rates of breastfeeding, we assessed the current rates of supplementation occurring in hospitals in Ontario designated as “baby-friendly” according to the Baby-Friendly Initiative,34 because we believed that they would be modelling best practice.

Where no indicator benchmarks existed, we examined current practice by analyzing data for the 2009–2010 fiscal year. We identified data corresponding to percentiles for the province as a whole, and for the individual health regions and hospitals. After reviewing these data, the committee voted on what they believed to be the most appropriate benchmark for Ontario. The committee members felt in many cases that the performance targets should be much better than the current standard. For example, the 75th percentile for early term repeat Caesarean section in low-risk women (i.e., with no medical or obstetrical indication for early delivery) was approximately 53%. The committee members felt strongly that if hospitals were asked to improve only to this level, the potential risk to babies from elective early term delivery would not be meaningfully reduced.35 They felt that hospitals should be encouraged to do better, and thus set a much lower benchmark (target rate ≤ 10%, with a warning for rates between 11% and 15% and an alert for rates > 15%). Indicator definitions and benchmarks are shown in Table 2.

DASHBOARD IMPLEMENTATION

Implementation of the MND presents another set of challenges, related primarily to the stability of indicator estimates over time (particularly in smaller hospitals), but also to supporting institutions having difficulty meeting performance standards. From a technology perspective, the MND has the capability to be “live” a few months after data are entered into the BIS. Once there are sufficient data in the system to populate the dashboard, it will appear on the landing page for each hospital when authorized staff log into the online system (a mock-up of the MND with dummy data is shown in Figure 2). Hospitals must have acknowledged their monthly data for submission into the system in order to populate their dashboard for that time period.

We anticipate relatively stable indicator estimates over time for hospitals with large delivery volumes; however, for smaller centres, the estimates will likely have significant variability from month to month. To give a concrete example, in a centre with only 120 births per year there would be few Caesarean sections. If there were two elective repeat Caesarean sections in women at low risk in one month and one was performed before 39 weeks, the


Table 2. Final BORN Ontario definitions and benchmarks

1. Proportion of newborn screening samples that are unsatisfactory for testing†
Benchmark: 3%
Definition: The number of newborn screening samples that were unsatisfactory for testing, expressed as a percentage of the total number of newborn screening samples submitted to Newborn Screening Ontario (NSO) from a given hospital.

2. Rate of episiotomy in spontaneous vaginal births
Target (green): < 13% | Warning (yellow): 13 to 17% | Alert (red): > 17%
Definition: The number of women who had spontaneous vaginal births with episiotomy, expressed as a percentage of the total number of women who had spontaneous vaginal births at a given hospital.

3. Rate of formula supplementation in term infants whose mothers intended to breastfeed
Target (green): < 20% | Warning (yellow): 20 to 25% | Alert (red): > 25%
Definition: The number of term live babies receiving formula supplementation, expressed as a percentage of the total number of term babies whose mothers intended to breastfeed (in a given place and time).

4. Rate of repeat Caesarean section in low-risk women* not in labour at term, with no medical or obstetrical complications, prior to 39 weeks’ gestation
Target (green): < 11% | Warning (yellow): 11 to 15% | Alert (red): > 15%
Definition: The number of women with a Caesarean section performed before 39 weeks’ gestation, expressed as a percentage of the total number of low-risk women who had a repeat Caesarean section at term (in a given place and time).

5. Proportion of women delivering at term who had GBS screening at 35 to 37 weeks’ gestation
Target (green): > 94% | Warning (yellow): 90 to 94% | Alert (red): < 90%
Definition: The number of women having an unplanned Caesarean section in labour who deliver at term and have GBS screening at 35 to 37 weeks’ gestation, expressed as a percentage of the total number of labouring women delivering at term (in a given place and time).

6. Proportion of women induced with an indication of post-dates who are at less than 41 weeks’ gestation at delivery
Benchmark: 10%
Definition: The number of women who were at less than 41 weeks of gestation at delivery, expressed as a percentage of the total number of women who had labour induction with an indication for induction of “post-dates pregnancy” (in a given place and time).

GBS: Group B Streptococcus

*Repeat Caesarean section in low-risk women is defined as a Caesarean section performed before the onset of labour, and in the absence of medical or obstetrical indications for delivery among women with a history of one or more previous Caesarean sections. For this analysis, the definition included women with a singleton live birth, between 37 and 42 weeks of gestational age, with no maternal medical problems, no obstetrical complications, and none of the following indications for the Caesarean section: cord prolapse, fetal anomaly, intrauterine growth restriction/small for gestational age, large for gestational age, nonreassuring fetal status, placenta previa, placental abruption, pre-eclampsia, and preterm rupture of membranes.

†Samples coded as unsatisfactory due only to collection at less than 24 hours of age (i.e., there are no other reasons for the sample to be deemed unsatisfactory) will not be considered unsatisfactory for this analysis, since sample collection at less than 24 hours of age is recommended in cases of early discharge, transfer, or transfusion.
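The traffic-light bands in Table 2 can be sketched as a simple classification function. This is a hypothetical illustration using the published thresholds, not BORN's implementation; note that most indicators improve as rates fall, while GBS screening improves as rates rise.

```python
# Hypothetical sketch of the dashboard's traffic-light logic using Table 2
# thresholds. `green_bound` and `alert_bound` delimit the yellow band.

def status(rate, green_bound, alert_bound, higher_is_better=False):
    """Map an indicator rate (%) to 'green', 'yellow', or 'red'."""
    if higher_is_better:
        if rate > green_bound:
            return "green"
        return "yellow" if rate >= alert_bound else "red"
    if rate < green_bound:
        return "green"
    return "yellow" if rate <= alert_bound else "red"

# Episiotomy: green < 13, yellow 13 to 17, red > 17
s1 = status(12.0, 13, 17)                           # "green"
# Early-term repeat Caesarean: green < 11, yellow 11 to 15, red > 15
s2 = status(50.0, 11, 15)                           # "red"
# GBS screening: green > 94, yellow 90 to 94, red < 90
s3 = status(92.0, 94, 90, higher_is_better=True)    # "yellow"
```

Keeping the band boundaries as parameters mirrors the committee's decision to set each indicator's benchmark individually rather than apply one fixed scale.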

resultant estimate of 50% would automatically receive an alert (red) on the dashboard; however, given the very small denominator, that estimate would be highly imprecise and potentially misleading. To address this issue, exact binomial 95% confidence intervals will be provided with each estimate, accompanied by documentation on the correct interpretation of dashboard estimates.36 Additional strategies, such as aggregation of monthly observations (e.g., using a quarterly rather than a monthly estimate to obtain a more reliable reflection of clinical practice) and data smoothing (i.e., 3-month moving averages), will also be built into the MND.36 Notwithstanding the concern about unstable estimates for smaller hospital sites, some would argue that certain practices, such as elective repeat Caesarean section for low-risk women not in labour at early term gestation (i.e., 37 to 38 weeks), should rarely happen, and that the case should be reviewed when it does.37–39

Within each hospital, authorized staff will be able to use the BORN MND to generate a standard report listing the individual chart number(s) for cases requiring review, to support practice audits. This capacity for "drill down" will be available for all of the MND indicators. The MND will provide the capability for hospitals to compare themselves not only with other hospitals with the same level of care designation, but also with other hospitals with a similar volume of births. This is important because a Level 1 hospital with 700 births per year is likely different from a Level 1 hospital with fewer than 100 births per year.

JANUARY JOGC JANVIER 2013 l 35
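The two statistical safeguards described above, exact binomial confidence intervals and 3-month moving averages, can be sketched in a few lines of Python. This is an illustrative, self-contained implementation of the standard Clopper-Pearson interval (via bisection on the binomial CDF), not BORN's actual code.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def exact_binomial_ci(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a proportion of
    k events in n cases, found by bisection on the binomial CDF."""
    def bisect(keep_low):
        lo, hi = 0.0, 1.0
        for _ in range(60):  # 2**-60 interval width: ample precision
            mid = (lo + hi) / 2
            if keep_low(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower bound: largest p with P(X >= k | p) <= alpha/2
    lower = 0.0 if k == 0 else bisect(lambda p: 1 - binom_cdf(k - 1, n, p) <= alpha / 2)
    # upper bound: smallest p with P(X <= k | p) <= alpha/2
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, n, p) > alpha / 2)
    return lower, upper

def moving_average(monthly_rates, window=3):
    """3-month moving average used for data smoothing."""
    return [
        sum(monthly_rates[i - window + 1: i + 1]) / window
        for i in range(window - 1, len(monthly_rates))
    ]
```

For the 1-event-in-2-cases example above, `exact_binomial_ci(1, 2)` gives roughly (0.013, 0.987): an interval spanning nearly the whole range, which makes the instability of the 50% point estimate explicit to the dashboard reader.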


Figure 2. BORN Ontario Maternal Newborn Dashboard landing page mock-up (dated November 1, 2012)

Maternal Newborn Dashboard — Home Page
Hospital1, July 1, 2012, to September 30, 2012

For each key performance indicator, the mock-up shows the hospital's rate (%) and status, the status range (%), and comparator rates (%) for other same level of care hospitals, other similar birth volume hospitals, and Ontario:

Proportion of newborn screening samples that are unsatisfactory for testing
Rate: 5.0 (red). Target (green): < 2.0; Warning (yellow): 2.0–3.0; Alert (red): > 3.0.
Comparators: same level of care 4.5; similar birth volume 4.3; Ontario 4.5.

Rate of episiotomy in women having a spontaneous vaginal birth
Rate: 14.1 (yellow). Target (green): < 13.0; Warning (yellow): 13.0–17.0; Alert (red): > 17.0.
Comparators: same level of care 14.0; similar birth volume 15.2; Ontario 13.9.

Rate of formula supplementation at discharge in term infants whose mothers intended to breastfeed
Rate: 15.0 (green). Target (green): < 20.0; Warning (yellow): 20.0–25.0; Alert (red): > 25.0.
Comparators: same level of care 20.0; similar birth volume 19.0; Ontario 15.0.

Proportion of women with a Caesarean section performed prior to 39 weeks' gestation among low-risk women having a repeat Caesarean section at term
Rate: 16.0 (red). Target (green): < 11.0; Warning (yellow): 11.0–15.0; Alert (red): > 15.0.
Comparators: same level of care 14.6; similar birth volume 35.0; Ontario 37.0.

Proportion of women delivering at term who had Group B Streptococcus screening at 35 to 37 weeks' gestation
Rate: 97.0 (green). Target (green): > 94.0; Warning (yellow): 90.0–94.0; Alert (red): < 90.0.
Comparators: same level of care 91.0; similar birth volume 89.0; Ontario 91.0.

Proportion of women induced with an indication of post-dates who are less than 41 weeks' gestation at delivery
Rate: 5.0 (yellow). Target (green): < 5.0; Warning (yellow): 5.0–10.0; Alert (red): > 10.0.
Comparators: same level of care 4.5; similar birth volume 4.1; Ontario 4.5.

Data source: BORN Ontario, 2012–2013

Notes:
1. Rates and status are based on the three prior months of data that have been acknowledged for submission, allowing a one-month lag.
2. A grey status indicates that data have not been acknowledged for submission for the three-month reporting period. Please ensure data have been acknowledged for submission for all three months in the reporting period (see the acknowledgement summary).
3. Comparator data are presented as the rate from a minimum of three hospitals that have acknowledged their data for the three-month reporting period, within a given comparator category. The comparator rates for other same level of care hospitals and other similar birth volume hospitals exclude the reporting hospital, whereas rates for Ontario include the reporting hospital.
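The traffic-light status shown in the mock-up is a simple threshold rule. A Python sketch follows, assuming the warning band is inclusive at both ends (which matches the mock-up values, e.g., a post-dates induction rate of exactly 5.0 falling in the 5.0–10.0 warning band); the function name and signature are ours, not BORN's.

```python
def kpi_status(rate, warn_low, warn_high, higher_is_better=False):
    """Map a KPI rate (%) onto the dashboard's traffic-light status.

    For most indicators a lower rate is better; for indicators such as
    GBS screening coverage a higher rate is better, so the ordering
    flips.  The warning band [warn_low, warn_high] is inclusive."""
    if higher_is_better:
        if rate > warn_high:
            return "green"
        return "yellow" if rate >= warn_low else "red"
    if rate < warn_low:
        return "green"
    return "yellow" if rate <= warn_high else "red"
```

Applied to the mock-up rows, `kpi_status(14.1, 13.0, 17.0)` returns "yellow" (episiotomy) and `kpi_status(97.0, 90.0, 94.0, higher_is_better=True)` returns "green" (GBS screening).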

Data quality is also important because the individual data elements may be aggregated to make up the indicators populating the dashboard. For example, the indicator of repeat Caesarean section in low-risk women not in labour before 39 weeks' gestation relies on good data quality for maternal health problems, obstetrical and labour complications (as these are exclusionary characteristics), type of delivery, gestational age, and type of labour. In the BORN legacy data collection system, our practice was to add footnotes to figures and tables to alert the reader when more than 10% but fewer than 30% of records for a particular variable had missing information. If ≥ 30% of the data making up an indicator were missing, we excluded that hospital's data, and a regional coordinator followed up with the hospital. In the new BIS, we have included a number of automated measures to improve data quality, and we have new upload specifications for hospitals exporting data from an electronic health record system. However, we will be unable to assess these outcomes until after the first 6 to 12 months of data entry. Until then, data tables will include the number of actual cases, the missing data, the site-specific rates for the indicator, comparator rates (for hospitals with the same level of care and a similar number of births, and for the province), confidence intervals, and the number of comparator sites that have acknowledged and validated their data. This information will help users to judge the reliability of the data and to interpret their site-specific rates. In addition, quarterly rates will not be displayed until the data for the quarter are in the system and have been acknowledged and validated. Rates will be calculated based on the three most recent months of acknowledged data, after a one-month lag to allow hospitals to enter and validate their data.

The strategy for implementation of the MND will follow the process outlined by the Royal College of Obstetricians and Gynaecologists,40 which recommends that each hospital/unit have a clearly defined mechanism, and named individuals who are responsible, for dealing with issues as they arise. Further, they stress the need for the entire maternity team, from the front lines to the relevant leaders, to take an active part in monitoring clinical practice when indicators have a yellow or red designation. The Royal College of Obstetricians and Gynaecologists also believes that the performance information should be shared with consumer representatives who have a vested interest in maternity care within hospital settings.
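The legacy missing-data rules described above (footnote an indicator when more than 10% but fewer than 30% of records are missing; exclude at 30% or more) amount to a small three-way decision rule, sketched here in Python with a hypothetical function name.

```python
def missing_data_action(n_missing, n_total):
    """Legacy BORN rule of thumb for a variable feeding an indicator:
    footnote when more than 10% but fewer than 30% of records have
    missing information; exclude the hospital's data (and trigger
    regional follow-up) at 30% or more."""
    fraction = n_missing / n_total
    if fraction >= 0.30:
        return "exclude and follow up"
    if fraction > 0.10:
        return "footnote"
    return "report as-is"
```

In practice such a rule would run per variable, per hospital, per reporting period, before any indicator rate is displayed.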


BORN Ontario can make available a live MND, ready for viewing, and can report performance at the health region or provincial level. We will have the capability to identify groups or institutions that are doing well and those that may be struggling. We can then work with the BORN staff located throughout the province, our partners in the health regions and regional perinatal networks (where they exist), and the Provincial Council for Maternal Child Health to assist with the implementation of QI strategies; ultimately, however, the responsibility for quality rests with the institutions. After implementation, we intend to design and test the effectiveness of selected knowledge translation strategies to determine whether certain combinations of interventions will improve uptake and use of the MND, and ultimately improve outcomes on the performance indicators. We will undertake a mixed-methods study to determine what helps hospitals and care providers adapt to change and what barriers to change exist. We hope to learn how an MND can drive strategic planning and decision making.

LIMITATIONS

Despite our diligence during the process of indicator selection, there are limitations to our approach. Small-volume hospitals will require more time than larger-volume hospitals to collect sufficient data on outcomes such as Caesarean section in low-risk women prior to 39 weeks to obtain reliable estimates of their practice patterns. The provision of 95% confidence intervals for point estimates will help to reduce misinterpretation of highly variable estimates. In addition, having appropriate comparator groups (i.e., hospitals with the same level of care and of similar size) will allow more appropriate conclusions. We also acknowledge that because this large-scale process is a fairly new concept for Canadian provinces, and because there is limited information about benchmarks in the maternal-child care literature, much of our initial benchmarking work was based on consensus within our provincial committee.

CONCLUSION

When the BIS is implemented in 2012, our initial goal is to have all hospitals and midwifery practices enter data into the new system. As we develop the MND interface in the BIS our goal will be to implement the dashboard and its accompanying standard reports over the ensuing six to 12 months. For the future, we anticipate that the performance indicators and the benchmarks will evolve over time, with some retired and others added as issues resolve or as new evidence, new clinical needs, and/or new data sources are added into the BIS. We also expect hospitals will want to

add hospital-specific indicators to meet their own needs. Finally, we have the ability to move the dashboard concept beyond hospitals to midwifery practices, laboratories, and follow-up centres in the BORN system. While audit and feedback do not guarantee that individuals or institutions will make practice changes and move towards quality improvement, they are an important first step. We can be sure that practice change and quality improvement will not occur without an awareness of the issues.

ACKNOWLEDGEMENTS

We wish to thank the members of the BORN Ontario Maternal Newborn Outcomes Committee Dashboard Subcommittee for their work in helping to define the dashboard indicators and benchmarks.

REFERENCES

1. Ovretveit J. Does improving quality save money? A review of the evidence of which improvements to quality reduce costs to health service providers. London: Health Foundation; 2009.
2. Adair CSECA, Birdsell J. Performance measurement in healthcare: Part 1-Concepts and trends from a state of the science review. Healthc Policy 2006;4:1–20.
3. Nadzam D, Nelson M. The benefits of continuous performance measurement. Nurs Clin North Am 2012;32:543–9.
4. Gruber S. Is it a metric or a key performance indicator (KPI)? HealthcareAnalytics.info. Available at: http://healthcareanalytics.info/2012/02/is-it-a-metric-or-a-key-performance-indicator-kpi. Accessed February 15, 2012.
5. Franceschini F, Galetto M, Maisano D. Management by measurement. London: Springer; 2007.
6. National Institute for Clinical Excellence. Principles for best practice in clinical audit. Abingdon, UK: Radcliffe Medical Press; 2002.
7. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and health care outcomes. Cochrane Database Syst Rev 2012;(6):CD000259. doi: 10.1002/14651858.CD000259.pub3.
8. Institute of Medicine. Medicare: a strategy for quality assurance, vol 1. Washington, DC: National Academies Press; 1990.
9. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
10. Canadian Institute for Health Information. Highlights of 2008–2009 selected indicators describing the birthing process in Canada. CIHI; 2009:1–7. Available at: https://secure.cihi.ca/free_products/childbirth_highlights_2010_05_18_e.pdf. Accessed October 28, 2012.
11. Excellent Care for All Act, SO 2010, c 14.
12. Kroch E, Vaughn T, Koepke M, Roman S, Foster D, Sinha S, et al. Hospital board and quality dashboards. J Patient Saf 2006;2:10–9.
13. BORN Ontario. Perinatal Health Reports 2009–2010. Ottawa: BORN Ontario; 2011. Available at: http://www.bornontario.ca/reports/lhin-regional-reports. Accessed February 15, 2012.
14. Main EK, Bingham D. Quality improvement in maternity care: promising approaches from the medical and public health perspectives. Curr Opin Obstet Gynecol 2008;20:574–80.



15. Plsek P. Quality improvement methods in clinical medicine. Pediatrics 1999;103:203–14.
16. Dunn S, Bottomley J, Ali A, Walker M. 2008 Niday Perinatal Database quality audit: report of a quality assurance project. Chronic Dis Inj Can 2011;32:32–42.
17. Janakiraman V, Ecker J. Quality in obstetric care: measuring what matters. Obstet Gynecol 2010;116:728–32.
18. Eckerson W. Dashboard or scorecard: which should you use? Dashboard insight–turning data into knowledge; 2007. Available at: http://www.dashboardinsight.com/articles/digital-dashboards/fundamentals/dashboard-or-scorecard-which-should-you-use.aspx. Accessed October 9, 2012.
19. Wyatt J. Scorecards, dashboards, and KPIs: keys to integrated performance measurement. Healthc Financ Manage 2004;58(2):76–80.
20. Ontario Health Quality Council. OHQC reporting framework: the attributes of a high-performing health system. Toronto: OHQC; 2010. Available at: http://www.ohqc.ca/pdfs/ohqc_attributes_handout_-_english.pdf. Accessed October 9, 2012.
21. World Health Organization. The world health report 2000. Health systems: improving performance. Geneva: WHO; 2000.
22. Department of Health. The NHS performance assessment framework. London: The Stationery Office; 1999.
23. National Health Performance Committee. National report on health sector performance indicators 2003. AIHW Cat. No. HW178. Canberra: Australian Institute of Health and Welfare; 2004.
24. Sunstein CR. Infotopia: how many minds produce knowledge. New York: Oxford University Press; 2006.
25. Hermann RC, Palmer RH. Common ground: a framework for selecting core quality measures for mental health and substance abuse care. Psychiatr Serv 2002;53:281–7.
26. Robson MS. Classification of caesarean sections. Fetal Matern Med Rev 2001;12:23–39.
27. Thielman J, Konnyu K, Grimshaw J, Moher D. What is the evidence supporting universal versus risk-based screening for group B streptococcal infection in newborns? Evidence summary no. 14. Ottawa: Ottawa Hospital Research Institute; 2012. Available at: http://www.ohri.ca/kta. Accessed November 9, 2012.
28. Konnyu K, Grimshaw J, Moher D. What are the drivers of in-hospital formula supplementation in healthy neonates and what is the effectiveness of hospital-based interventions designed to reduce formula supplementation? Evidence summary no. 1. Ottawa: Ottawa Hospital Research Institute; 2010. Available at: http://www.ohri.ca/kta. Accessed November 9, 2012.
29. Konnyu K, Grimshaw J, Moher D. What are the maternal and newborn outcomes associated with episiotomy during spontaneous vaginal delivery? Evidence summary no. 13. Ottawa: Ottawa Hospital Research Institute; 2011. Available at: http://www.ohri.ca/kta. Accessed November 9, 2012.
30. Konnyu K, Grimshaw J, Moher D. What is known about the maternal and newborn risks of elective induction of women at term? Evidence summary no. 10. Ottawa: Ottawa Hospital Research Institute; 2011. Available at: http://www.ohri.ca/kta. Accessed November 9, 2012.
31. Khangura S, Grimshaw J, Moher D. What is known about the timing of elective repeat cesarean section? Evidence summary no. 1. Ottawa: Ottawa Hospital Research Institute; 2010. Available at: http://www.ohri.ca/kta. Accessed November 9, 2012.
32. Viswanathan M, Hartmann K, Thorp J, Lux L, Swinson T, Lohr KN, et al. The use of episiotomy in obstetrical care: a systematic review. Evidence report/technology assessment no. 112. Rockville, MD: Agency for Healthcare Research and Quality; 2005.
33. Hargrove A, Penner K, Williamson T, Ross S. Family physician and obstetrician episiotomy rates in low-risk obstetrics in southern Alberta. Can Fam Physician 2011;57:450–6.
34. World Health Organization; UNICEF; Wellstart International. Baby-friendly hospital initiative: revised, updated and expanded for integrated care. Section 2: Strengthening and sustaining the baby-friendly hospital initiative: a course for decision-makers. Geneva: WHO/UNICEF; 2009.
35. Tita ATN, Landon MB, Spong CY, Lai Y, Leveno KJ, Varner MW, et al. Timing of elective repeat Cesarean delivery at term and maternal perioperative outcomes. N Engl J Med 2009;360:111–20. doi: 10.1056/NEJMoa0803267.
36. Rudolph B. Statistical approaches for small numbers: addressing reliability and disclosure risk. NAHDO-CDC Cooperative Agreement Project, CDC Assessment Initiative. National Association of Health Data Organizations; 2004:1–22.
37. Main E, Oshiro B, Chagolla B, Bingham D, Dang-Kilduff L, et al. Elimination of non-medically indicated (elective) deliveries before 39 weeks gestational age: a California toolkit to transform maternity care. San Francisco: March of Dimes; California Maternal Quality Care Collaborative; California Department of Public Health, Maternal, Child & Adolescent Health Division; 2011.
38. Oshiro BT, Henry E, Wilson J, Branch DW, Varner MW; for the Women and Newborn Clinical Integration Program. Decreasing elective deliveries before 39 weeks gestation in an integrated health care system. Obstet Gynecol 2009;113:804–11.
39. Macones GA. Elective delivery before 39 weeks: reason for caution. Am J Obstet Gynecol 2010;202:208.
40. Arulkumaran S, Chandraharan E, Mahmood T, Louca O, Mannion C. Maternity dashboard: clinical performance and governance score card. Good Practice No. 7. London: Royal College of Obstetricians and Gynaecologists; 2008:1–8.