Background: In view of increasing attention focused on patient safety and the need to reduce laboratory errors, it is important that clinical laboratories collect statistics on error occurrence rates over the whole testing cycle, including pre-, intra-, and postanalytical phases.
Methods: The present study was conducted in 2006 according to the design we previously used in 1996 to monitor the error rates for laboratory testing in 4 different departments (internal medicine, nephrology, surgery, and intensive care). For 3 months, physicians and nurses were asked to pay careful attention to all test results. Any suspected laboratory error was recorded with associated pertinent clinical information. Every day, a laboratory physician visited the 4 departments and a critical appraisal was made of any suspect results.
Results: Among a total of 51 746 analyses, clinicians notified us of 393 questionable findings, 160 of which were confirmed as laboratory errors. The overall frequency of errors, 3092 ppm, was significantly lower (P <0.05) than in 1996 (4700 ppm). Of the 160 confirmed errors, 61.9% were preanalytical errors, 15% were analytical, and 23.1% were postanalytical.
Conclusions: During the last decade the error rates in our stat laboratory have been reduced significantly. As demonstrated by the distribution pattern, the pre- and postanalytical steps still have the highest error prevalences, but changes have occurred in the types and frequencies of errors in these phases of testing.
The Institute of Medicine report To Err Is Human: Building a Safer Health System (1) and other reports (2)(3)(4)(5) have increased concern over the negative impact of medical errors on public health and patient care. Although the Institute of Medicine report presented few data on errors in laboratory medicine, it had wide-reaching implications for all disciplines. Because data from clinical laboratories are directly involved in the vast majority of all medical diagnoses and treatments (6), there is an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. Many strategies are used to reduce laboratory errors, including internal quality control procedures, external quality assessment programs, certification of education programs, licensing of laboratory professionals, accreditation of clinical laboratories, and the regulation of laboratory services. In the past, laboratory professionals focused their attention on analytical errors and mistakes resulting in adverse events but overlooked errors in the preanalytical and postanalytical steps. In 1996, we demonstrated that pre- and postanalytical processes in our laboratory were more vulnerable to errors than the analytical steps (7). The results of our study and others (7)(8)(9) have led to wide agreement that the total testing process is the best framework for evaluating and reducing error rates in laboratory medicine. They have also led to wide acceptance of the proposed definition of a laboratory error as a defect occurring at any part of the laboratory cycle, from ordering tests to reporting, interpreting, and reacting to results (8)(9), a definition that has been incorporated into ISO Technical Report 22367 (10).
The aim of the present study was to compare our 1996 data on errors with data collected 10 years later in the same clinical context, using the same study design for both investigations.
Materials and Methods
The department of laboratory medicine of the University Hospital of Padova provides ∼8 million tests per year (30% for inpatients and 70% for outpatients) for clinical chemistry, hematology, coagulation, immunology, and molecular biology. The laboratory serves a university hospital that had 2900 beds in 1996, but this number had decreased to 1750 beds in 2006 owing to a policy aimed at improving efficiency, reducing the length of stay, and increasing day surgery and day hospital activities. The teaching hospital is affiliated with a well-established medical school providing specialized disciplines, including a very active organ transplantation program. The stat laboratory, part of the department of laboratory medicine, operates with independent staff on premises near the emergency department; it performed 1.65 million tests in 2006, compared with 1.4 million in 1996. The criteria for using the stat facilities vs the “routine” laboratory have not changed since 1996, and the ratio of routine to stat tests has remained the same. Improvements have been made to the laboratory information system since 1996. In particular, electronic test requests have replaced manually completed forms, electronic reporting has supplanted communication by telefax, and automated validation of laboratory test results based on rules and limits has been implemented.
Blood drawing and sample collection are performed by physicians and nurses from the individual wards. In 1996, specimens were transported manually by ward staff, but now more than 70% of specimens are transported by means of pneumatic tube technology. Approximately 95% of laboratory test requests are made directly by the ward staff using the hospital information system, which allows the immediate printing of bar-code labels for the bedside identification of all patient samples. All specimen tubes and related tests are identified in the stat laboratory through automated check-in at the preanalytical workstation (Stream LAB, Dade Behring). Other analytical systems include 2 hematological analyzers (Advia 120, Siemens Medical Solutions Diagnostics SRL), 2 coagulation analyzers (Sysmex CA 7000, Dade Behring), and 2 blood gas analyzers (Rapidlab 405, Siemens Medical Solutions Diagnostics SRL).
After technical and medical validation, all laboratory results are transmitted to computer terminals located at each nursing ward. The laboratory services, accredited by the Clinical Pathology Accreditation organization, are certified according to the standards defined by ISO 9001:2000.
The study design and methodology used in 2006 were exactly the same as those used in 1996. Briefly, the same 4 departments (internal medicine, nephrology, surgery, and the intensive care unit) were selected for the study, and all stat laboratory data for these departments were strictly monitored for 3 months, from September 15 to December 15, 2006 (the same time span as that used for the previous study). A brief observation period was a prerequisite of the study design because the physicians (both attending and trainees) and nurses of the 4 departments were asked to pay careful attention to all test results. These staff members were provided with a special notebook in which any questionable result was recorded, together with any pertinent clinical information. Every day, a laboratory physician visited the 4 units, and any suspected laboratory result was critically reviewed and discussed. If data were considered clinically questionable, the original request and related specimen(s) were checked and, if deemed necessary, retested, sometimes using alternative methods and instruments. Finally, whenever appropriate, a search was made for any interferences. In addition, a careful investigation of the blood drawing and sample collection procedures was performed. To prevent any bias in the results obtained, laboratory staff were not notified that the study was being conducted. One of the 2 laboratory scientists involved in the study carefully checked the internal quality control data and records. We considered as analytical errors all results released when quality control data exceeded limits established according to the hierarchy of models for setting analytical quality specifications approved at the 1999 Stockholm Consensus Conference (11). In particular, for most laboratory tests, analytical quality specifications were based on biological variation (12).
In addition, we considered as analytical errors any results that exceeded the same limits when repeated in the same specimen and any results released later than the established analytical turnaround time (TAT) limits. The classification of errors was performed according to the same criteria as those used in 1996, with additional criteria proposed in ISO Technical Report 22367, particularly for the definition of latent, cognitive, and preventable errors (10). The statistical significance of differences between the relative frequencies of errors observed in the participating departments was calculated with the χ2 test.
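The between-department comparison described above is a χ2 test of homogeneity on a table of errors vs error-free analyses. A minimal sketch in pure Python follows; the per-department counts below are hypothetical placeholders for illustration, not the study's data:

```python
# Chi-square test of homogeneity of error rates across 4 departments.
# NOTE: the counts below are hypothetical placeholders, not the study's data.
observed = {
    "internal medicine": (45, 14000),   # (errors, error-free analyses)
    "nephrology":        (38, 12000),
    "surgery":           (40, 13000),
    "intensive care":    (37, 12500),
}

totals = [e + ok for e, ok in observed.values()]
overall_rate = sum(e for e, ok in observed.values()) / sum(totals)

# Chi-square statistic: sum over all cells of (observed - expected)^2 / expected
chi2 = 0.0
for (e, ok), n in zip(observed.values(), totals):
    exp_err = n * overall_rate          # expected errors in this department
    exp_ok = n * (1 - overall_rate)     # expected error-free analyses
    chi2 += (e - exp_err) ** 2 / exp_err + (ok - exp_ok) ** 2 / exp_ok

# Critical value for df = (2 - 1) * (4 - 1) = 3 at alpha = 0.05 is 7.815
print(f"chi2 = {chi2:.3f}; significant at 0.05: {chi2 > 7.815}")
```

Departments with similar underlying error rates, as reported in the study, yield a small statistic well below the 0.05 critical value.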
Results
During the 3-month period, 51 746 analyses from 7615 requests and 17 514 blood collection tubes were considered. Among the 51 746 analyses, clinicians notified us of 393 questionable findings, 73 of which were confirmed as laboratory errors; another 87 errors were identified by the laboratory staff (Table 1), for a total of 160 errors and an overall frequency of 0.309% (3092 ppm). For the other questionable results, no evidence of error was found. No statistically significant differences were found in the error frequencies observed in the 4 investigated departments. The comparison between these data and those obtained in 1996 demonstrated a significant decrease in error rates: from 4667 to 3092 ppm (Table 2; odds ratio, 0.66; 95% CI, 0.54–0.82; P <0.05). The distribution of errors was 61.9% preanalytical, 15% analytical, and 23.1% postanalytical. The preanalytical phase had the highest prevalence of errors, the most frequent problems arising from mistakes in tube filling, in particular incorrect blood:anticoagulant ratios for coagulation testing and empty or inadequately filled tubes. Identification errors were noted for 3 patients and 14 related tests (875 ppm); other common errors were due to the wrong type of collection tube being used (13 tests), errors in request procedures (12 tests), empty tubes (11 tests), contradictory demographic data from different information systems leading to delays in result reporting (6 tests), missing tubes (5 tests), samples diluted with an intravenous infusion solution (3 tests), ordered test results not received (2 tests), and other minor problems.
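The headline figures can be rechecked from the counts reported above. This is a minimal sketch: the 1996 rate of 4667 ppm is taken from the comparison in the text, and the preanalytical count of 99 is inferred as 160 − 24 − 37 from the stated phase counts.

```python
# Recompute the headline rates from the counts given in the text.
analyses_2006 = 51746
errors_2006 = 160

rate_2006_ppm = errors_2006 / analyses_2006 * 1e6
print(f"2006 error rate: {rate_2006_ppm:.0f} ppm")   # ~3092 ppm

# Odds ratio of the 2006 vs 1996 error rates (1996 rate: 4667 ppm).
p_2006 = errors_2006 / analyses_2006
p_1996 = 4667e-6
odds_ratio = (p_2006 / (1 - p_2006)) / (p_1996 / (1 - p_1996))
print(f"odds ratio: {odds_ratio:.2f}")               # ~0.66

# Distribution of the 160 confirmed errors across testing phases.
# NOTE: 24 (analytical) and 37 (postanalytical) are stated in the text;
# 99 (preanalytical) is inferred as the remainder.
pre, ana, post = 99, 24, 37
for name, n in [("preanalytical", pre), ("analytical", ana), ("postanalytical", post)]:
    print(f"{name}: {n / 160:.1%}")
```

The three percentages reproduce the 61.9% / 15.0% / 23.1% distribution reported above.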
In the analytical phase, we identified 24 errors, accounting for 15% of the total mistakes; the relative frequency was thus 463 ppm. In particular, some results were released in the presence of unacceptable internal quality control data owing to a problem in the calibration-verification procedure.
In 3 cases unacceptable results were obtained because of random errors due to pipetting difficulties on the analyzer related to problems such as fibrin clots and short sampling. In the postanalytical phase, 37 errors (relative frequency 23.1%, 715 ppm) were observed. A single episode of information system inactivity that delayed the communication of 32 results represented the majority of these errors; in these cases the established procedure for transmitting laboratory results by telefax was activated, but the delivery of some patient results was delayed. In another 3 cases, problems in within-laboratory communication delayed the results, and an excessive TAT was recorded in 2 cases. The preanalytical phase is still the most vulnerable to errors. The type of error, however, has changed, with a decrease (from 20.6% to 1.9%) in the number and relative frequency of specimens inappropriately collected from an intravenous infusion route. Because of improved information procedures, reductions have been achieved in errors in test transcription (21.3%), ward identification (19%), and patient identification (2.6%). However, new types of errors have emerged, particularly those attributable to the staff’s application of new information procedures. The relative frequency of errors in the analytical and postanalytical processes was very similar to that documented in 1996.
Error classification based on ISO/PDTS 22367 (10) is reported in Table 3. In particular, the identification of laboratory errors as internal or external demonstrates that only 18.1% of total errors were internal to the laboratory, emphasizing the role of external causes, particularly in the pre- and postanalytical phases; 32.5% of errors were classified as latent, 10.6% as cognitive, and 73.1% as preventable.
The relationship between errors and patient outcomes is shown in Table 4. Although 75.6% of errors had no effect, 24.4% had a negative impact on patient care. In particular, an identification error caused an inappropriate admission to an intensive care unit; this error was preanalytical, active, noncognitive, and preventable. Two episodes of inappropriate transfusion due to preanalytical errors were recorded. In one case, the preanalytical error was the result of mistaken patient identification (active, noncognitive, and preventable); in the other, the error was due to a specimen being diluted (preanalytical, active, noncognitive, and preventable). Inappropriate laboratory test repetition and further inappropriate investigations were observed in 16.9% and 5.6% of cases, respectively.
Discussion
The healthcare system is increasingly dependent on reliable clinical laboratory services, which, as part of the overall healthcare system, are prone to errors. Although numerous studies have been conducted with a view to enhancing analytical laboratory quality, the literature on errors in laboratory medicine is scarce and presents several limitations (8). Some important achievements have been made, however. In the last 40 years, there has been an impressive decrease in error rates, particularly for analytical errors, and the variability of analytical performance is now frequently less than 1/20th of what it was 40 years ago (13). Moreover, evidence from recent studies demonstrates that a large percentage of laboratory errors occur in the pre- and postanalytical steps (7)(8)(13)(14). These observations are confirmed by the findings in the present study. The total error rates studied with an identical protocol showed a significant decrease from 4667 ppm in 1996 to 3092 ppm in 2006. Errors in the preanalytical (61.9%) and postanalytical (23.1%) processes occurred much more frequently than in the analytical steps (15%). However, the types of error, particularly in the preanalytical phase, appear to have changed over time. The frequency of specimens inappropriately collected from the infusion route, which was very high in 1996 (20.6%), dramatically decreased to 1.9%, a decrease attributable to the corrective measures undertaken and improved training and commitment of phlebotomists. Unsatisfactory compliance with written and widely distributed procedures for sample collection may explain the frequency of tube-filling errors, inappropriate containers, and empty tubes, thus emphasizing the need for even closer interdepartmental cooperation (15). The most unexpected finding was the identification of errors linked to information system procedures.
The introduction of a hospital computerized order entry system has eliminated some errors, such as a wrong patient name or care unit and the performance of unrequested laboratory tests, but it has not eliminated the risk of mismatching patients. These errors derive from poor compliance with written procedures, emphasizing some unsatisfactory aspects of information technology. The clinical workplace is a complex system in which technologies, people, and organizational routines interact dynamically. Accordingly, good design or implementation of computerized procedures is not a technical problem, but rather one of jointly optimizing the combined sociotechnical system (16). The frequency and potential seriousness of the identification errors involving clinical laboratories have recently been described in a report of data collected from 120 institutions in the US (17). Our data confirm that errors in identification can frequently cause inconvenience and harm to patients. The same lesson should be drawn from the high frequency of postanalytical problems, deriving from a single episode of information system inactivity. In this case, the alternative procedure, consisting of the transmission of laboratory results by telefax, was found to be less reliable than expected. Consequently, a relatively high number of results were released with an excessively prolonged TAT. The small number and low frequency of analytical errors were similar to those observed in 1996, the great majority deriving from a single episode in which, despite the evidence of internal quality control data, the analytical run was accepted, thus leading to the delivery of wrong results. Poor compliance with written and widely distributed procedures is still a problem, particularly when errors in execution derive from stressed and overworked staff members. However, current evidence shows that analytical quality is still a major issue.
On reviewing results from numerous clinical laboratories participating in proficiency testing programs, Westgard and Westgard (18) recently demonstrated that estimates on the σ scale for common clinical chemistry assays are not satisfactory. Unsatisfactory analytical performances have also been described in the fields of immunoassay (19)(20), coagulation (21), hematology (22), and molecular biology tests (23). It may well be that currently detected analytical errors are only a portion of the total analytical errors. Neither laboratory professionals nor clinicians are currently aware of all analytical errors. Their lack of awareness may have various causes, including the limited design of quality control procedures, excessively wide reference ranges, and poor communication of suspected laboratory errors by clinicians.
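The σ-scale estimates mentioned above are derived from an assay's allowable total error, bias, and imprecision via the standard sigma-metric formula. As a hedged illustration (the assay numbers below are hypothetical, not taken from the cited surveys):

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma metric: (allowable total error - |bias|) / CV, all in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical example: an assay with a 10% total allowable error,
# 1.5% bias, and 2% analytical CV.
print(sigma_metric(10.0, 1.5, 2.0))  # 4.25
```

A metric of 6 or more is conventionally regarded as world-class performance; values below 3, as in many of the surveyed assays, indicate a process poorly protected against analytical error.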
In conclusion, in 2006, using the same study design as in 1996, we found a significant reduction in error rates in the same stat laboratory, but the distribution of errors among the different phases of the testing cycle remained essentially the same. The findings of the present study confirm the high prevalence of errors in the pre- and postanalytical steps. Moreover, the high frequency of errors attributable to processes external to the laboratory (∼80%), particularly in the preanalytical phase (87%), stresses the need for closer control of all processes and procedures involving patients undergoing laboratory testing. It is clear that most current harmful errors occur in the preanalytical phase of the total testing process, meaning all procedures performed before the sample reaches the laboratory. Issues such as proper patient and specimen identification, appropriateness of test requests, accuracy in blood drawing, specimen handling, and transportation can no longer be considered trivial because they strongly affect total quality. Another important finding of the present study is the very high (73%) percentage of errors classified as preventable. This aspect is even more important when we consider that 24.4% of errors resulted in unnecessary laboratory test repetition, further inappropriate investigations, or episodes of negative clinical outcomes. In the last few decades, a significant decrease in error rates has been achieved in laboratory medicine, but there is much room for improvement, and the need for more effective procedures for quality assessment and control is greater than ever.
Grant/funding support: None declared.
Financial disclosures: None declared.
Acknowledgments: We thank Dr. Francesco Di Vito Nicola and Enrico Babetto for valuable cooperation and the physicians and nurses of the Departments of the University Hospital of Padova for making this study possible.
Nonstandard abbreviation: TAT, turnaround time.
© 2007 The American Association for Clinical Chemistry