Feature Received 7.11.05 | Revisions Received 8.10.05 | Accepted 8.11.05

Laboratory Quality Control Requirements Should be Based on Risk Management Principles Donald M. Powers, PhD (Powers Consulting Group, Pittsford, NY) DOI: 10.1309/GBU6UH7Q3TFVPLJ7

 Industry experience has demonstrated the value of a formal risk management process in the context of a quality system. It supports systematic, informed decision making, provides greater insight into patient safety risks, fosters quality by design and continuous improvement, and leads to greater predictability of results.

 It is time to question why manufacturers and laboratories are following such divergent paths to the same end.

Proposed initiatives by the Centers for Medicare and Medicaid Services (CMS) would allow clinical laboratories to reduce quality control testing based on the extent of internal monitoring performed by a commercial laboratory analytical system.1 To qualify for reduced testing, the proposal requires a laboratory to demonstrate that these internal safeguards are equivalent to traditional statistical quality control methods. While unnecessary testing should be eliminated, equivalence to traditional statistical quality control (QC) is not the appropriate benchmark. Quality control is only a tool to detect process changes with predetermined statistical power, typically 95% for clinical laboratories. Robust analytical systems designed to avoid or prevent incorrect results are superior to a quality control process that catches errors after they occur. Criteria for the appropriate level of QC should be based on the risks that incorrect results would present to patients.

The current debate over how to demonstrate Equivalent QC (EQC) highlights the different approaches to quality taken by manufacturers and laboratories. United States manufacturers moved beyond statistical quality control in the 1970s to focus on total quality concepts, following the example of Japanese industry. By the 1990s, quality management systems and risk management had taken hold in the United States as the preferred approach. Risk as used here is the combination of the severity of harm and the probability of that harm occurring. Revamped FDA regulations in 1996 gave in vitro diagnostic (IVD) and other medical device manufacturers the responsibility to decide the appropriate amount of quality control testing based on risk assessment. The United States Quality System Regulation (QSR)2—and its European regulatory counterpart, the IVD Medical Device Directive3—require manufacturers to integrate a formal risk management process into the quality management system.
Although the QSR only mentions risk analysis under design controls, the preamble clearly describes an expectation that risk management is a key management responsibility. Australia, Canada, Japan, and the Global Harmonization Task Force have also embraced or are embracing risk management as part of the quality system.4 While the trend in the medical device manufacturing industry has been away from prescriptive regulation, clinical laboratory regulations in the 1990s5 prescribed the number of QC tests that must be performed daily regardless of the clinical significance of an erroneous result or the likelihood of

 Risk management should offer clinical laboratories the same advantages it offers manufacturers.

occurrence, thus removing an incentive to seek inherently safer IVD medical devices. The revised CLIA regulations retained the prescriptive requirements.6

Risk Management in Industry

The risk management process for medical device manufacturers is described in an international standard, ISO 14971.7 The stepwise process is shown in Figure 1. A brief description of the main elements will reveal its applicability to clinical laboratories.

Hazard Analysis

The risk management process starts with systematic identification of known and foreseeable hazards based on the anticipated uses of the device. This includes not only the analytical performance required for medical use, but also any foreseeable use errors associated with the device or its results. Any failures to meet explicit and implied claims are also evaluated as potential hazards. In the case of a laboratory method, the main hazard to patients is an incorrect result, which in the hands of a physician could lead to a dangerous misdiagnosis or treatment decision. Another hazard might be a delay in availability of a critical result, which for certain analytes could lead to a harmful delay in treatment.

Figure 1_Life-cycle risk management.


Risk Analysis

Among several possible risk models applicable to in vitro diagnostic medical devices, the one shown in Figure 2 depicts a sequence of events that starts with a failure in a manufacturer’s quality system that results in a defective device.8 In this example, a laboratory using the device generates an incorrect result and reports it to a physician, who relies on it and reaches the wrong diagnosis. This creates a hazardous situation for the patient, who may be harmed by inappropriate treatment.

Once a list of potential hazards is constructed, possible harm scenarios can be identified. Error grid analysis was developed by Clarke and colleagues9 to classify incorrect glucose results based on the degree of error and the physiological status of the patient. Parkes and colleagues10 have published an improved procedure for developing an error grid based on the consensus of a large number of medical practitioners. Their approach can be applied to many clinical analytes. An error grid overlaying hypothetical method comparison data is shown in Figure 3. Each of the zones is characterized by the potential of a result in that zone to cause harm to a patient, with severity increasing from Zone A (clinically acceptable performance) to Zone E (dangerously incorrect treatment). An error grid provides a logical basis for ranking the severity of harm on a scale of 1 (Zone A) to 5 (Zone E).

In a typical risk analysis, each hazard is analyzed to estimate the probability that it will occur in actual use. When failure data are available from experience, semi-quantitative estimates of failure rates can be projected (eg, 1/1,000, 1/10,000, 1/100,000, etc). When failure data are not available, perhaps when a new assay, technology, or application is being evaluated, expert judgment is relied upon to rank the likelihood on a qualitative scale, using terms such as frequent, occasional, rare, and theoretical.
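The zone-to-severity ranking described above can be sketched in code. This is an illustrative sketch only: the percent-error cutoffs below are hypothetical, whereas a real grid (such as the Parkes consensus grid) defines zones as regions in the (reference, measured) plane rather than by simple relative error.

```python
# Sketch: ranking the severity of a glucose error by error-grid zone.
# Zone boundaries here are ILLUSTRATIVE, not the published grid.

def severity_from_zone(zone: str) -> int:
    """Map an error-grid zone (A-E) to a severity rank of 1-5."""
    ranks = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 5}
    return ranks[zone.upper()]

def classify_error(reference: float, measured: float) -> str:
    """Toy zone assignment based on relative error (hypothetical cutoffs)."""
    rel_err = abs(measured - reference) / reference
    if rel_err <= 0.05:
        return "A"   # clinically acceptable performance
    elif rel_err <= 0.15:
        return "B"
    elif rel_err <= 0.30:
        return "C"
    elif rel_err <= 0.50:
        return "D"
    else:
        return "E"   # dangerously incorrect treatment likely

zone = classify_error(reference=100.0, measured=112.0)
print(zone, severity_from_zone(zone))  # B 2
```

A laboratory adopting this approach would replace the toy cutoffs with a consensus grid for the analyte in question.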
Since the purpose of a preliminary hazard analysis is to identify the risks early so they can be systematically eliminated during the design phase of a product or process, or at least reduced to an acceptable level, knowing the relative risk is good enough. Subsequent experience will be used to verify and refine the risk estimates.

The probabilities identified in a hazard analysis usually apply to the occurrence of the hazard, even though not all hazards develop into hazardous situations and not all hazardous situations result in harm. Risk management decisions are based on the probability of harm. If the incorrect result can be detected first by the laboratory or the physician, harm can be avoided. Therefore, the probability that an incorrect result will be detected by laboratory QC may be factored into the probability of harm. The way physicians use test results may also reduce the hazard from incorrect results. It is reasonable to assume that improbable results would be challenged and implausible results disregarded by physicians.

Risk Evaluation

The next step is to evaluate the risks against acceptability criteria that have been predetermined in a risk management plan. Each manufacturer must decide the acceptable degree of risk, weighing the benefits to patients and taking into account the current values of society. A risk acceptability chart can be constructed to distinguish between acceptable and unacceptable combinations of probability and severity. An example of a typical chart is given in Figure 4. Risk acceptability decisions are made initially for each individual risk before mitigation, again for each residual risk after

LABMEDICINE  Volume 36 Number 10  October 2005

Figure 2_IVD risk model.

Figure 3_Error grid analysis.

mitigation, and finally for the overall residual risk after all risk controls have been implemented.

Risk Controls

Individual risks that fall in the “broadly acceptable” zone are considered negligible and require no further attention. Risks that fall in the unacceptable zone must be reduced to an acceptable level. Risks that fall in between may also require mitigation. Although not required by the standard, many manufacturers establish a policy of reducing all risks—even those in the acceptable zone—to a level as low as “practicable,” which means taking into account both technical and economic feasibility.
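A risk acceptability chart of the kind shown in Figure 4 can be expressed as a simple lookup. The zone boundaries below are hypothetical; in practice each manufacturer or laboratory defines its own boundaries in the risk management plan.

```python
# Sketch of a risk acceptability chart: combinations of severity and
# probability map to an acceptability zone. Boundaries are illustrative.

SEVERITY = ["negligible", "minor", "serious", "critical", "catastrophic"]
PROBABILITY = ["theoretical", "rare", "occasional", "frequent"]

def evaluate_risk(severity: str, probability: str) -> str:
    """Return the acceptability zone for one severity/probability pair."""
    s = SEVERITY.index(severity)        # 0 (least severe) .. 4 (worst)
    p = PROBABILITY.index(probability)  # 0 (least likely) .. 3 (most likely)
    score = s + p
    if score <= 2:
        return "acceptable"             # broadly acceptable zone
    elif score <= 4:
        return "reduce if practicable"  # the in-between zone
    else:
        return "unacceptable"           # must be mitigated

print(evaluate_risk("catastrophic", "frequent"))  # unacceptable
print(evaluate_risk("negligible", "rare"))        # acceptable
```

Each risk would be evaluated this way before mitigation, after mitigation, and once more for the overall residual risk.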

Figure 4_Risk acceptability.

“Built-in” monitoring systems that inspired the EQC term are examples of risk control measures. The need for monitoring would be identified during a risk analysis when the instrument or assay is being designed, where potential failure modes are identified through a Failure Modes and Effects Analysis (FMEA). Systems to monitor the critical instrument parameters associated with potential failure causes (incubator temperature, reagent absorbance, detector voltage stability, cutoff point, sample level, etc) would be incorporated into the assay design to avoid or prevent reporting incorrect results. These internal monitoring systems are directed at the potential failure causes to prevent incorrect results from being produced. As a result of the manufacturer’s risk assessments, risk controls may also be implemented in the manufacturing process to prevent the production of defective units.

According to the ISO 14971 standard, risk controls must be implemented in the following priority order. First, an attempt must be made to eliminate significant risks from the device. Second, if inherently safe design is not practicable, protective measures must be incorporated in the device itself (eg, monitoring sensors, fail-safe alarms, fault-tolerant systems) or in the manufacturing process (eg, bar-coded materials, acceptance testing, in-process monitoring). Third, if hazards cannot be avoided or prevented, the user must be provided with the information to understand and address residual risks. This is called “information for safety” in ISO 14971, which includes instructions for use, warnings and precautions, contraindications and limitations, performance characteristics, and quality control recommendations.

Risk Monitoring

Risk management according to ISO 14971 is a product “life-cycle” process, which means it continues as long as the product is being produced and is still in active use. For some workhorse laboratory analyzers, this could be a decade or more.
During this time, manufacturers monitor experience with the device to look out for the occurrence of previously unidentified hazards or indications that known risks might be considered no longer acceptable. In addition to external sources of user feedback, such as performance evaluations, customer complaints and adverse event reports, internal data sources are also monitored. Any newly identified hazards loop back through the risk assessment process as shown in Figure 1.
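The three-tier control priority described above (inherently safe design, then protective measures, then information for safety) is an ordered decision that can be sketched as follows. The feasibility flags and returned labels are illustrative, not part of the standard's text.

```python
# Sketch of the ISO 14971 risk-control priority order. The inputs model
# whether each higher-priority option is practicable for a given hazard.

def choose_risk_control(can_design_out: bool, can_protect: bool) -> str:
    """Select a risk control option in the standard's priority order."""
    if can_design_out:
        # Highest priority: eliminate the hazard through inherently safe design.
        return "inherently safe design (eliminate the hazard)"
    if can_protect:
        # Next: protective measures in the device or manufacturing process.
        return "protective measures (monitors, alarms, fail-safes)"
    # Last resort: disclose the residual risk to the user.
    return "information for safety (warnings, limitations, QC recommendations)"

print(choose_risk_control(can_design_out=False, can_protect=True))
```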

Application to Clinical Laboratories

Manufacturers are required to implement a risk management process in their quality system to protect patients from

Figure 5_Clinical lab process map.

hazardous results. Unfortunately, this addresses only a fraction of the possible failures shown in Figure 2. Many other opportunities to produce hazardous results are under the control of the laboratory. The same ISO 14971 risk management process can be used to analyze, evaluate, and control these risks. It would also fulfill the JCAHO requirement to define and implement an ongoing, proactive program for identifying risks to patient safety and reducing medical/health care errors.11

One approach that can easily be adapted for use in a clinical laboratory is Hazard Analysis and Critical Control Points—“HACCP.” It applies common risk analysis tools, such as Fault Tree Analysis (FTA) and FMEA, to identify the activities in the testing process that require controls. Then the existing controls are evaluated, their ability to maintain patient risk within a predefined acceptable level is verified, and additional controls are added where the analysis shows they are needed. The following example for a typical analyte will illustrate how it works. The hazards—incorrect results—and any potential harm would be identified as described above, so this example will begin with identifying potential failure modes in the testing process.

Map the Process

To see where failures can occur, the laboratory starts by creating a detailed flowchart of the entire testing process. Most analytical systems are made up of several components—specimens, instrument, reagents, accessories, calibrators, and control materials—and each involves several sub-processes that taken together produce a reportable test result. For example, specimens must be collected, transported, stored, and prepared before being added to the measuring instrument, which must have been properly installed, functionally qualified, calibrated, maintained, and set up for the analysis. The reagents, calibrators, and controls must be acquired, qualified, stored, and prepared specifically for the analysis.
Lastly, results are calculated from the analytical measurements, verified in some manner, and uploaded into the laboratory’s information management system for eventual communication to the physician. The process map for a clinical laboratory can be complex, as the high-level map in Figure 5 illustrates. Every step must be broken down to a level that enables the critical quality attributes of the activity to be analyzed and potential failure modes and suitable controls identified. For complex processes, an FTA can be used to focus the FMEA on significant failures.

Brainstorm Potential Failure Modes and Likely Causes

This step requires the participation of persons knowledgeable about the analytical technology, reagent chemistry, specimens/analytes, personnel, methods, and other components of the system. These subject matter experts meet to discuss the possibilities from their different perspectives, weigh the evidence, and reach consensus on potential failures and their effects. A balanced, cross-functional team can complete an objective risk assessment in a short time.

A table is created to document the team’s decisions. A scribe is designated to record each process failure (hazard), failure cause, effect (harm), severity, existing process controls (to prevent the failure), probability of occurrence (of the failure), detectability (prior to harm), and comments explaining the rationale. A format similar to that shown in Table 1 results in a valuable reference document for use subsequently in troubleshooting, root cause analysis, and laboratory improvement programs. It is a living document that is updated whenever new risk information is obtained.
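One row of the scribe's worksheet can be represented as a small record type; this is a sketch whose field names mirror the columns listed above, with illustrative example values drawn from Table 1.

```python
# Sketch of one row of the risk assessment worksheet (cf. Table 1).
from dataclasses import dataclass, field

@dataclass
class FmeaRow:
    component: str
    failure_mode: str               # the process failure (hazard)
    effect: str                     # the potential harm
    cause: str
    severity: int                   # SEV score
    existing_controls: list = field(default_factory=list)
    occurrence: int = 0             # OCC score (probability of the failure)
    detectability: int = 0          # DET score (detectability prior to harm)
    rationale: str = ""             # comments explaining the scores

row = FmeaRow(
    component="Reagent",
    failure_mode="Stability not meeting claim (negative drift)",
    effect="Incorrect results / misdiagnosis",
    cause="Reagent deterioration due to improper storage",
    severity=8,
    existing_controls=["SOP (validated storage conditions)", "weekly QC"],
    occurrence=6,
    detectability=2,
    rationale="Manufacturer's instructions",
)
print(row.component, row.severity)  # Reagent 8
```

A list of such rows becomes the living document the team updates as new risk information is obtained.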


The team addresses potential failures of the analytical phase, as well as potential failures during the pre- and post-analytic phases. Ideally, manufacturers would disclose significant residual risks from the device in the labeling in a form that laboratories could incorporate directly into their risk assessment, but there are no standards for what risks to communicate and how to convey the information. These decisions are left to the manufacturers, and although historically manufacturers have not published expected failure rates, it would be helpful for them to provide these figures to their customers upon request. Laboratories can also identify potential failure modes and estimate the probability of their occurrence from their own experience with the instrument or with similar instruments. Guidelines are found in CLSI EP18-A,12 which was written for “unit-use” devices but is generally applicable to all IVD medical devices, and in Annex H of the new version of ISO 14971,8 which should be completed in 2006. Laboratories should not hesitate to ask their suppliers for residual risk information, as well as advice and technical assistance for performing a risk assessment involving their device.

Evaluate Risks

Next, the risks need to be evaluated against criteria approved by the laboratory director. To identify the process activities that must be controlled to maintain acceptable risk, the risk assessment team makes decisions based on the severity, occurrence, and detectability values. For simplicity, the 3 scales can be set up as shown in Table 2 so that low numbers are good and values 6 and above must be addressed. Note that the detectability scale has an inverse relationship to the probability of detection.
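The semi-quantitative occurrence scale of Table 2 amounts to mapping a failure rate onto a score. A minimal sketch, using the rate cutoffs from that scale (the function itself is illustrative):

```python
# Sketch: assigning an occurrence (OCC) score from an observed or projected
# failure rate, per the kind of scale shown in Table 2.

def occ_score(failure_rate: float) -> int:
    """Map a failure rate (failures per test) to an OCC score of 2-10."""
    if failure_rate >= 1e-3:
        return 10   # frequent
    elif failure_rate >= 1e-4:
        return 8    # probable
    elif failure_rate >= 1e-5:
        return 6    # occasional
    elif failure_rate >= 1e-6:
        return 4    # remote
    else:
        return 2    # improbable / theoretical

print(occ_score(1 / 10_000))      # 8
print(occ_score(1 / 10_000_000))  # 2
```

Severity and detectability scores would be assigned the same way from their respective scales, by expert judgment where hard data are lacking.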

Identify Essential Control Points

The ratings assigned by the team are used to determine whether a process or subprocess requires an Essential Control Point (ECP) and whether existing controls are adequate or further controls are needed to reduce the risk. An ECP is defined as a process activity where control to maintain risk at an acceptable level can be applied. The following rules dictate the actions.

All processes with SEV ≥6 require an ECP. Process activities with OCC ≥6 require an ECP to ensure control; the ECP must be an effective method of detection, since the OCC score means a significant number of process failures are expected to occur. The ECP is placed either at the point of failure or at a subsequent step. Process activities with DET ≥6 also require an ECP to ensure control; in this case, the ECP must be a process control that prevents failures, since the DET score means the probability that failures will be detected if they occur is not high enough. If both OCC ≥6 and DET ≥6, the process activity lacks adequate controls and corrective action must be initiated, either to reduce the failure rate or to increase the ability to detect a failure, or both. Probability of occurrence and DET will be reanalyzed after the changes are implemented to determine if an ECP is needed.

Additional rules may be established to identify and prioritize improvement opportunities based on combinations of SEV, OCC, and DET. Examples of possible outcomes are given in Table 3. Note that all steps have the same severity score because the medical

Table 1_Risk Assessment

# | Component | Potential Failure Mode | Effect | Failure Cause | SEV | Existing Controls | OCC | DET | Comments/Rationale
1 | Reagent | Stability not meeting claim (negative drift) | Incorrect results/misdiagnosis | Reagent deterioration due to improper storage | 8 | SOP (validated storage conditions), trained personnel; weekly QC | 6 | 2 | Manufacturer’s instructions
2 | Reagent | Large bias shift at lot change | Incorrect results/misdiagnosis | Reagent lot-lot differences | 8 | QC acceptance testing, supplier qualification | 6 | 2 |
3 | Instrument | Increased imprecision at high analyte concentrations | Incorrect results/misdiagnosis | Lamp aging | 6 | Preventive maintenance program / SOP (lamp replacement schedule) | 2 | 8 | Manufacturer’s instructions
4 | Instrument | Sporadic “outlier” readings | Incorrect results/misdiagnosis | Unstable power source in lab | 8 | Voltage regulator, installation qualification | 4 | 10 | Observed with similar instruments
5 | Calibrator | Large bias shift after calibration | Incorrect results/misdiagnosis | Incorrect calibrator value assigned by manufacturer | 8 | Certificate of traceability, post-calibration QC, proficiency testing | 6 | 2 |
6 | Calibrator | Large bias shift after calibration | Incorrect results/misdiagnosis | Calibrator reconstitution error | 8 | Qualified personnel, SOP, training, post-calibration QC, proficiency testing | 4 | 2 |
7 | Sample | Sporadic “outlier” results | Incorrect results/misdiagnosis | Drug interference (known interferent) | 8 | Specimen requisition form; hospital pharmacy drug alert system | 4 | 10 | Observed in method verification study
8 | Sample | Unsuitable sample (hemolyzed) | No result/delayed treatment | Improper specimen preparation | 4 | SOP (sample preparation); training/personnel qualification | 6 | 4 | Requires re-draw

Table 2_Severity, Occurrence, and Detection Scales

Score | Severity of Harm (SEV) | Probability of Occurrence (OCC) | Detectability Prior to Harm (DET)
10 | Catastrophic – patient death | Frequent (≥10^-3) | Almost impossible to detect
8 | Critical – permanent impairment or life-threatening injury | Probable (<10^-3 and ≥10^-4) | Low probability of detection
6 | Serious – injury or impairment requiring medical intervention | Occasional (<10^-4 and ≥10^-5) | Medium probability of detection
4 | Minor – temporary injury or impairment not requiring medical intervention | Remote (<10^-5 and ≥10^-6) | High probability of detection
2 | Negligible – inconvenience or temporary discomfort | Improbable/theoretical (<10^-6) | Almost certain to be detected



consequence of an incorrect result is often the same regardless of its cause. The medical actions and severity of harm may depend on the direction and magnitude of error.

Verify and Monitor ECPs

Once ECPs are identified, objective evidence is required to show that the controls in place are effective. The team reviews the evidence to satisfy itself that the controls will maintain risk at an acceptable level. In addition, because of the importance placed on these controls to prevent incorrect results from being reported, their continued effectiveness over time must be monitored. Any indication that a failure has led to incorrect results being reported must be investigated to determine where the controls failed, and root causes must be addressed to prevent recurrence. The risk assessment is periodically reviewed and updated based on the laboratory’s failure investigations, and confidence in the risk control measures continues to increase over time.

Determining the Appropriate Amount of QC

The CMS initiative to allow reduced frequency of QC for certain analytical systems has sparked a healthy debate over the evidence required to justify a laboratory’s QC strategy. Arbitrary options to implement the revised CLIA rule could not be defended on scientific grounds, and a consensus has emerged that QC alternatives must be validated. Proponents of “Equivalent QC” are looking to manufacturers to prove the effectiveness of their internal monitoring systems, which they are already required to do under design control regulations, and to FDA to certify that systems are “EQC ready” based on review of the manufacturer’s risk assessment and design validation data.

Unfortunately, this solution is too simplistic. The laboratory’s entire QC process must have a solid scientific footing. Furthermore, the new proposals focus on controls immediately related to failures of commercial test systems, which are generally acknowledged to be infrequent compared with pre-analytic and post-analytic failures. The real question the laboratory

must answer is this: “What are the appropriate laboratory controls to minimize patient risk?” The ISO 14971 risk management process offers a systematic way to answer that question. Whenever possible, control measures should be targeted specifically to events that might lead to failures, as identified in the risk assessment. Table 4 illustrates various types of events that are associated with failures leading to incorrect results, along with some possible risk controls.

There are several hurdles that must be overcome before an effective solution can be realized. Cooperation among regulatory authorities (CMS and FDA), manufacturers, and laboratories is needed to overcome them. CMS should harmonize its regulatory philosophy with the quality system approach adopted by FDA 10 years ago. The current CLIA regulations contain elements of a quality system, but they retain overly prescriptive requirements.

The lack of agreement about medical requirements makes it difficult to determine when an incorrect result is a significant hazard. The laboratory community should own the task of defining the performance required for medical utility, perhaps aided by the CLSI consensus process. Forcing each manufacturer to figure out the medical requirements for the assays it manufactures is highly inefficient and promotes inconsistency. Manufacturers need to know (1) what accuracy is needed for diagnosing, classifying, or treating patients and (2) what harm could occur from failure to meet the requirements. Consensus error grids could be constructed for many analytes using the Parkes approach.10

Table 3_Assignment of ECPs

Step | SEV | OCC | DET | ECP
1 | 8 | 2 | 2 | None
2 | 8 | 2 | 7 | Process
3 | 8 | 6 | 6 | Add control
4 | 8 | 4 | 5 | Improve
5 | 8 | 7 | 2 | Detection
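The ECP assignment rules stated in the text reduce to a few threshold checks on the SEV, OCC, and DET scores. A minimal sketch, following the rules as written (the returned labels are illustrative):

```python
# Sketch of the ECP assignment rules: scores of 6 or above on OCC or DET
# trigger an ECP of a specific type; both together trigger corrective action.

def assign_ecp(sev: int, occ: int, det: int) -> str:
    if occ >= 6 and det >= 6:
        # Failures are both likely and hard to detect: controls are inadequate.
        return "corrective action required"
    if occ >= 6:
        # Failures are expected to occur: the ECP must be effective detection.
        return "ECP: detection"
    if det >= 6:
        # Failures would escape detection: the ECP must prevent them.
        return "ECP: prevention"
    if sev >= 6:
        return "ECP required"
    return "no ECP needed"

print(assign_ecp(sev=8, occ=6, det=6))  # corrective action required
print(assign_ecp(sev=8, occ=7, det=2))  # ECP: detection
```

Additional rules prioritizing improvement opportunities (as in Table 3) could be layered on the same score combinations.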

Table 4_Examples of Controls Targeted to Specific Events

Event | Prevention | Detection
Instrument drift | Preventive maintenance program; calibration schedule | QC samples run at predetermined intervals
Reagent deterioration | Storage per manufacturer’s instructions; standard operating procedure; training | QC samples run at predetermined intervals
New lot of reagents, controls, or calibrators | Material handling procedures; material specifications | QC acceptance testing of materials upon receipt
New shipment of materials | Validated shipping conditions (manufacturer) | QC acceptance testing upon receipt
System calibration | Calibrate per manufacturer’s instructions; standard operating procedure; training | QC samples run post-calibration; proficiency testing
New calibrator | Certificate of traceability | QC acceptance testing upon receipt; proficiency testing
Maintenance and repair | Perform per manufacturer’s instructions; personnel qualification, SOPs, training | QC samples run afterward
Method modification | Validation or verification | QC samples run after implementation
Equipment relocation | Requalification | QC samples run afterward
Sample preparation | Prepare per manufacturer’s instructions; personnel qualification, SOPs, training | Plausibility checks
Specimen identification | Bar codes; personnel qualification, SOP, training | Verification protocol; redundant checks
Drug interference | Sample acceptability requirements; drug alert system (pharmacy) | None
Instrument failure | Preventive maintenance per manufacturer’s instructions | Check critical functions periodically
Operator error | Personnel qualification; training on manufacturer’s instructions for use & SOPs | QC samples run after operator-sensitive procedures; verify critical steps
Method imprecision (within manufacturer’s claims) | Validation; improve/replace method if not meeting medical requirements | QC samples run (mean and range charts)
Software update | Software validation (manufacturer); verify critical functions (laboratory) | QC samples run afterward




Laboratories should do a better job of qualifying their suppliers, not only specifying their performance requirements but also insisting on the information they need to control patient risk. Greater value should be placed on a cooperative relationship in the interest of patient safety.

Manufacturers need to communicate the residual risk in a format that is useful to laboratories. The common practice of defaulting to a recommendation to implement traditional QC procedures prevents laboratories from designing a more appropriate and cost-effective control system.

FDA can help level the playing field among manufacturers by providing clear guidance regarding risk management expectations and strictly enforcing compliance. For the system to work, laboratories need to have confidence that manufacturers are doing a good job of designing and validating their devices. It has been almost 10 years since the QSR was enacted, so by now every manufacturer should have implemented design controls and a comprehensive risk management program.

CLSI should broker a consensus on the risk information that manufacturers need to provide users and on the supporting evidence that might be reviewed by the FDA to certify that internal device controls are effective. ISO/TC212 should develop an international standard on the application of the ISO 14971 risk management principles to clinical laboratories as a companion to the successful quality management standard, ISO 15189.13

There is a perception that risk assessments are beyond the capability of most laboratories, but that is not the case. There are plenty of textbooks,14,15 training programs, and other resources to help laboratories get started, and there are examples of laboratories that have successfully applied FMEA methodology.16-18 Nevertheless, a set of practical risk assessment guidelines targeted to the clinical laboratory would be an obvious extension of the “Option 4” proposal.
The side debate over whether the analytical performance of present methods is good enough for medical use should continue.19 If analytes such as calcium and creatinine are not being measured with sufficient precision, then these methods must be producing some percentage of hazardous results in normal use, which would be identified and evaluated in a properly executed risk assessment if medical requirements were defined. Such a finding should stimulate improved methods under present design control regulations. However, no amount of QC testing will improve the poor performance of these assays, and this gap should not distract from the task of identifying and implementing appropriate laboratory controls for the risks we do know about. LM



References

1. Clinical Laboratory Improvement Amendments (CLIA)—Equivalent Quality Control Procedures. Centers for Medicare and Medicaid Services, Brochure #4. Baltimore: 2003. Available at: www.cms.hhs.gov/clia/6606bk.pdf. Accessed January 27, 2005.
2. Quality System Regulation. US Code of Federal Regulations, 21 CFR Part 820.
3. Council Directive 98/79/EC of the European Parliament and of the Council of 27 October 1998 on In Vitro Diagnostic Medical Devices. Official Journal of the European Union L331 (December 7, 1998).
4. Global Harmonization Task Force. Risk Management as an Integral Part of the Quality Management System. Proposed Draft SG3/N15R6.
5. U.S. Department of Health and Human Services. Medicare, Medicaid and CLIA programs: Regulations implementing the Clinical Laboratory Improvement Amendments of 1988 (CLIA). Final rule. Fed Regist. 1992;57:7002-186.
6. CLIA regulations, 42 CFR Part 493. Available at: www.hcfa.gov/medicaid/clia/cliahome.htm. Accessed January 27, 2005.
7. ISO 14971:2000. Medical devices—Application of risk management to medical devices.
8. ISO/DIS 14971:2005. Medical devices—Application of risk management to medical devices.
9. Clarke WL, et al. Evaluating clinical accuracy of systems for self-monitoring of blood glucose. Diabetes Care. 1987;10:622-628.
10. Parkes JL, et al. A new consensus error grid to evaluate the clinical significance of inaccuracies in the measurement of blood glucose. Diabetes Care. 2000;23:1143-1148.
11. Joint Commission on the Accreditation of Healthcare Organizations (JCAHO). Revisions to Joint Commission Standards in Support of Patient Safety and Medical/Health Care Error Reduction. Effective July 1, 2001. Chicago: Joint Commission on Accreditation of Healthcare Organizations.
12. Clinical and Laboratory Standards Institute (formerly NCCLS). Quality Management for Unit-Use Testing; Approved Guideline. NCCLS document EP18-A (ISBN 1-56238-481-3). Wayne, PA: NCCLS; 2002.
13. ISO 15189:2003. Medical laboratories—Particular requirements for quality and competence. Geneva: ISO, 2003.
14. Stamatis DH. Failure Mode and Effect Analysis: FMEA From Theory to Execution. 2nd ed.
15. Center for Chemical Process Safety (CCPS), American Institute of Chemical Engineers. Guidelines for Hazard Evaluation Procedures—With Worked Examples. 2nd ed. 1992.
16. Burgmeier J. Failure mode and effect analysis: An application in reducing risk in blood transfusion. J Quality Improvement. 2002;28:331-339.
17. Woodhouse S. Engineering for safety: Use of failure mode and effects analysis in the laboratory. Lab Med. 2005;36:16-18.
18. Astion M. Interview: Error reduction and risk assessment with Shirley Weber, MHA. Laboratory Errors & Patient Safety. 2004;8-9.
19. Clinical and Laboratory Standards Institute (CLSI). Proceedings from the QC for the Future Workshop; A Report. CLSI document X6-R (ISBN 1-56238-573-9). Wayne, PA: CLSI; 2005.
