BACKGROUND: Although theoretical considerations suggest that point-of-care testing (POCT) might be uniquely vulnerable to error, little information is available on the quality error rate associated with POCT. Such information would help inform risk/benefit analyses when the introduction of POCT is being considered.
METHODS: This study included 1 nonacute and 2 acute hospital sites. The 2 acute sites each had a 24-h central laboratory service. POCT was used for a range of tests, including blood gas/electrolytes, urine pregnancy testing, hemoglobin A1c (Hb A1c), blood glucose, blood ketones, screening for drugs of abuse, and urine dipstick testing. An established Quality Query reporting system was in place to log and investigate all quality errors associated with POCT. We reviewed reports logged over a 14-month period.
RESULTS: Over the reporting period, 225 Quality Query reports were logged against a total of 407 704 POCT tests. Almost two-thirds of reports were logged by clinical users, and the remainder by laboratory staff. The quality error rate ranged from 0% for blood ketone testing to 0.65% for Hb A1c testing. Two-thirds of quality errors occurred in the analytical phase of the testing process. These errors were all assessed as having no or minimal adverse impact on patient outcomes; however, the potential adverse impact was graded higher.
CONCLUSIONS: The quality error rate for POCT is variable and may be considerably higher than that reported previously for central laboratory testing.
In the last decade, there has been a large expansion in use of point-of-care testing (POCT)2 (1). The rationale for using POCT, as opposed to central laboratory testing, is that the more rapid availability of a test result will facilitate early clinical decision making and might therefore contribute to better patient management and outcomes, although evidence for the latter is inconsistent (2, 3). A consideration of the POCT process suggests that it might be uniquely vulnerable to error because the testing is generally undertaken by nonlaboratory clinical staff, whose primary job is the delivery of patient care rather than the analysis of body fluid samples. Even with adequate training, it is possible that the pressures of a busy clinical environment might result in lapses or violations in the performance of POCT. The fact that clinical management decisions may be made immediately on receipt of a POCT result increases the risk to the patient if the result is erroneous. This is in contrast to central laboratory testing, where the time interval between result generation and reporting may allow greater opportunity for error detection (through processes such as delta checking) before clinical management decisions have been implemented.
It is essential that the quality of an analytical result generated by POCT is appropriate for optimum patient management. In recognition of these quality requirements, many regulatory bodies have developed standards for the performance and governance of POCT (4, 5). Typically, these specify requirements for the organization and management of POCT, instrument selection, operator training, instrument calibration and operation, quality control and external quality assurance, reporting/documentation of results, and health and safety considerations. Surprisingly little information is available on the quality error rate associated with POCT (6). This is an important gap in the knowledge base, since such information would help inform risk/benefit appraisals considering the introduction of POCT and would also direct attention to preventive actions to reduce quality error rates.
The aim of this study was to investigate the quality error rate associated with POCT, classified by type and severity, in a UK hospital setting, and to compare this to the quality error rate reported previously for central laboratory testing using an identical study methodology.
This study took place between October 2009 and December 2010 in the Western Health and Social Care Trust (WHSCT), Northern Ireland, UK. The WHSCT comprises 2 acute hospital sites (Altnagelvin Hospital, Londonderry; Erne Hospital, Enniskillen) and 1 nonacute hospital site (Tyrone County Hospital, Omagh). Altnagelvin and Erne hospitals are supported by on-site 24-h laboratory services, whereas the Tyrone County Hospital has a 0900 to 1700 central laboratory on site. Outside this time period, Tyrone County Hospital samples can be analyzed only by using POCT or transfer of the sample to the laboratories at either Altnagelvin or Erne, each of which is approximately 1 h travel time away.
POCT AVAILABLE AT EACH HOSPITAL SITE
Blood gas/electrolytes analyzers (Roche Omni S, Roche Diagnostics), urine pregnancy testing (Clearview HCG, Inverness Medical Innovations Inc.), blood glucose meter testing (Performa, Informa, and Advantage meters, Roche Diagnostics), blood ketone meter testing (Abbott Medisense, Abbott Laboratories), urine screening for drugs of abuse (Nal von Minden-Drug screen), and dipstick urinalysis (Siemens-Multistix, Siemens Healthcare Diagnostics).
Blood gas/electrolytes (Gem Premier, Instrumentation Laboratory Ltd.), blood gas (i-STAT, Abbott Point of Care Inc.), hemoglobin A1c (Hb A1c) (DCA 2000, Siemens Healthcare Diagnostics), urine pregnancy testing (Clearview HCG, Inverness Medical Innovations Inc.), blood glucose meter testing (Performa, Informa, and Advantage meters, Roche Diagnostics), and dipstick urinalysis (Siemens-Multistix, Siemens Healthcare Diagnostics).
Tyrone County Hospital.
Blood glucose meter testing (Performa, Informa, and Advantage meters, Roche Diagnostics), dipstick urinalysis (Siemens-Multistix, Siemens Healthcare Diagnostics), electrolytes, and troponin I (i-STAT instruments, Abbott Point of Care Inc.).
An organizational policy on POCT (based on the recommendations of the UK Medicines and Healthcare products Regulatory Agency) (7, 8) was in operation that addressed the need for POCT in a given clinical area, operator training, instrument operation and maintenance, quality control and quality assurance, and result reporting/documentation. A multidisciplinary committee, comprising senior representatives of laboratory staff, POCT users, hospital finance department, and pharmacy, oversaw the performance of POCT. The introduction of POCT by clinical teams required approval from this committee. All users of POCT were required to undergo specific training that was delivered and coordinated by the central laboratory. A member of laboratory staff acted as the POCT officer, with the full-time role of supporting all POCT at the 3 hospital sites. The duties of the POCT officer included operator training, instrument troubleshooting, supervision of quality control and quality assurance, and prospective audit and quality management reviews. POCT performance was reviewed at a monthly quality meeting attended by senior laboratory staff and was subject to external audit as a component of the clinical biochemistry service by the UK laboratory accreditation body, Clinical Pathology Accreditation UK Ltd. For the entire period described in this study, the clinical biochemistry service had full accreditation status.
All POCT modalities were subject to rigorous internal quality control (IQC) and external quality assurance (EQA). For blood glucose meter testing and dipstick urinalysis, the EQA schemes were organized by the central laboratory; for all other tests, by an external body (UK External Quality Assurance Scheme or Welsh External Quality Assurance Scheme).
This clinical biochemistry laboratory service had in place a Quality Query reporting system for the reporting and logging of defects as described (9, 10). In brief, all laboratory staff were trained in the reporting of any perceived actual or potential quality lapses anywhere in the pathway from test selection to the timely return of a report to the requesting clinician. Any defects, observations, or complaints reported by clinical users were processed in an identical manner. All Quality Query reports were logged and investigated by senior laboratory staff. The defects were classified by cause (using the 3 broad divisions of preanalytical, analytical, and postanalytical phases) and graded by actual (A score) and potential (P score) adverse patient impact severity using a 5-point scale (with 1 the least and 5 the most serious). The A score described the actual effect of the defect on patient care, whereas the P score described the worst-case potential outcome that might arise from such a defect. The Quality Query reporting system was extended to incorporate POCT from early 2009. Underpinning this reporting system was a culture of openness among clinical users and laboratory staff that encouraged the identification of weaknesses in processes and procedures and that focused on improving systems rather than apportioning individual blame.
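The classification scheme described above can be pictured as a simple record with a cause phase and two severity grades. The sketch below is our own illustration of that structure, not code from the actual Quality Query system; the class, field, and function names are assumptions introduced for clarity.

```python
from collections import Counter
from dataclasses import dataclass

# Illustrative model of a Quality Query report as described in the text.
# Field names and the summarize() helper are hypothetical, not the real system.

PHASES = ("preanalytical", "analytical", "postanalytical")


@dataclass
class QualityQuery:
    test_type: str   # e.g., "blood glucose meter"
    phase: str       # one of PHASES (broad cause classification)
    a_score: int     # actual adverse patient impact, 1 (least) to 5 (most serious)
    p_score: int     # worst-case potential impact, graded on the same 1-5 scale

    def __post_init__(self):
        assert self.phase in PHASES
        assert 1 <= self.a_score <= 5 and 1 <= self.p_score <= 5
        # The worst-case potential impact cannot be less than the actual impact.
        assert self.p_score >= self.a_score


def summarize(reports):
    """Tally logged reports by testing phase and by A/P severity grade."""
    return {
        "by_phase": Counter(r.phase for r in reports),
        "a_scores": Counter(r.a_score for r in reports),
        "p_scores": Counter(r.p_score for r in reports),
    }
```

A tally of this kind is all that is needed to produce the phase and severity breakdowns reported in the Results.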
We reviewed all Quality Query reports relating to POCT. We extracted data relating to classification by cause and to A and P scores, and established the total number of POCT analyses performed for each test type from consumable usage (e.g., reagent strip or cartridge use) or from information provided by the POCT equipment, as relevant.
In the review period, 225 Quality Query reports were logged against a total of 407 704 point-of-care tests. The defect rate varied between test types: from 0% for blood ketone meter testing to 0.65% for Hb A1c (Table 1). For the most commonly performed POCT test (blood glucose meter), the defect rate was 0.023%. Two-thirds (65.8%) of all Quality Query reports were logged by users, and the remainder by laboratory staff supervising POCT.
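The defect rates quoted here reduce to a single calculation: reports logged as a percentage of tests performed. The short sketch below is our own illustration (the function name is an assumption) and reproduces the overall figure from the totals given above.

```python
def defect_rate(n_reports: int, n_tests: int) -> float:
    """Quality error rate: logged reports as a percentage of tests performed."""
    return 100.0 * n_reports / n_tests


# Overall rate from the totals reported above: 225 reports against 407 704 tests.
overall = defect_rate(225, 407_704)
print(f"{overall:.3f}%")  # roughly 0.055% across all POCT combined
```

The same calculation, applied per test type with the relevant denominator, yields the range from 0% (blood ketone meter testing) to 0.65% (Hb A1c) shown in Table 1.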
The actual impact of the quality errors on patient care (A score) was considered to be low, with 51.8% graded at 1 (no impact on patient care) and 48.2% graded at 2 (minimal impact on patient care) (Table 2). The potential impact values of the quality error (P scores) were positively skewed toward a much greater impact on patient care, with 14.7% graded at 4 (moderate adverse patient outcome) and 3.6% graded at 5 (significant adverse patient outcome) (Table 2).
Most (97.3%) of the quality errors occurred in the preanalytical and analytical phases (Table 3). The specific cause differed between tests. For glucose meter testing, the single most common quality error (70.8% of total) was the operator being unable to use the instrument because he or she did not have a personal identifying barcode to allow meter use. This resulted in a delay, since the testing had to be allocated to a different staff member. For blood gas analysis, the single most frequent quality error (77.5%) was instrument lockout, generally because of the need for a minor maintenance procedure that the operator felt unable or unwilling to perform. Again, this resulted in a delay in analysis, as the sample had to be sent to the central laboratory for testing or analyzed using an alternative POCT instrument. Breaches of good health and safety practice (most commonly POCT instruments contaminated with blood) accounted for 5.8% of all quality errors. For Hb A1c measurement, which had the highest quality error rate, three-quarters were failures either to analyze EQA samples or to submit the results in a timely manner as required. This also applied to urine pregnancy testing, where failure to perform EQA, submit results, or attain a satisfactory score accounted for 11 of 13 quality errors. Overall, however, there was no relationship between the required frequency of performance of IQC/EQA for the different tests and quality error rates. Failure to stock essential consumables such as instrument cartridges (so that the instrument could not be used when required) constituted 5.8% of quality errors.
A striking feature of the results presented here was the wide variability in quality error rates between test types, ranging from 0% (blood ketone meter testing) to 0.65% (Hb A1c). The reason for the variability is unclear. An important question relates to the robustness of the methodology to detect quality errors in this study. It is particularly difficult to assess the true error rate in any analytical testing program, be it POCT or central laboratory testing, under normal working conditions for 2 reasons. First, someone must recognize that a quality error has occurred (for example, that IQC or EQA has not been carried out on schedule); second, the occurrence must be logged. Failure of either of these steps will result in underestimation of the true quality error rate. It is likely that some, but not all, such quality errors would be identified by laboratory staff responsible for supervising POCT. Failure to log a quality error may happen for a variety of reasons: assessing the error as too trivial to log, feeling concern that it might result in censure of staff for failing to follow standard operating procedures, or simply overlooking it in a busy clinical environment with other, more pressing priorities. For example, it is noteworthy that only 2 quality errors were logged (a defect rate of 0.0025%) for dipstick urinalysis (the second highest volume POCT test), both of which related to unsatisfactory EQA performance and were logged by the POCT officer rather than the clinical user. We speculate that dipstick urine testing may be regarded as a less critical POCT test by clinical users, since patients will generally proceed to a battery of further blood and quantitative urine tests that may be regarded as more definitive. 
This might result in a greater degree of tolerance toward errors in dipstick urinalysis compared with other tests such as blood gas analysis, blood glucose analysis, or pregnancy testing, which are likely to have a more immediate impact on patient management decisions. No quality errors were reported for POCT ketone testing, a low-volume test that was restricted to clinical staff dealing specifically with patients with diabetes. It is unclear whether this related to underreporting or that the test was restricted to a highly specialized staff group.
For all the above reasons, it is likely that the quality error rate measured here for POCT represents an underreporting of the true rate. The methodology represented a pragmatic approach to the detection of quality errors: vigilance in identifying and logging actual or perceived quality failures was encouraged among all staff (clinical or laboratory based). An inevitable limitation of this methodology, however, is the likelihood that some errors were either not identified or not logged and therefore that the quality error rate is underestimated. Whereas a more active and overt approach to identifying errors may well have detected a greater number, it would have had the potential disadvantage of altering the behavior of POCT practitioners (the so-called Hawthorne effect) (11) during the observation period and therefore might not have been representative of standard routine practice. Furthermore, the methodology used here to identify quality errors was identical to that used previously by us to investigate laboratory testing (9, 10) and involved the same laboratory staff and clinical users. We therefore suggest that the data presented here provide valuable information on the relative quality of performance of POCT compared with laboratory testing. Our previous study reported a quality error rate of 0.085% for all requests for laboratory testing, which is considerably lower than the overall quality error rate for most of the POCT testing modalities reported here, the exceptions (Table 1) being ketone meter testing (0%), dipstick urinalysis (0.003%), and glucose meter testing (0.02%). It should be noted that other publications have reported varying quality error rates for laboratory testing of 0.012% to 0.6% (12–15), although those studies used different methodologies and took place in different settings. Nevertheless, the data presented here suggest that in certain situations POCT is associated with a markedly higher quality error rate than laboratory testing.
Kost (16) proposed that errors in the POCT process could be classified as occurring in the preanalytical, analytical, and postanalytical phases. This was subsequently modified by Meier and Jones (17) to include errors associated with test order indication and test frequency. In laboratory testing, the majority of quality errors occur in the preanalytical phase (reported range 31.5%–88.9%) (13). In contrast, we found that for POCT the majority of quality errors (65%) occurred in the analytical phase. These rarely related to instrument failure, however, but rather to the failure or inability of operators to undertake basic instrument preparation or maintenance (e.g., emptying of waste containers), or to forgotten passwords or mislaid individualized operator barcodes. For the most part, these simple procedures had been covered in operator training, but the operator felt unwilling or unable to perform them. For manual tests without automated running of QC, such as Hb A1c or pregnancy testing, failure to analyze internal QC or external QA samples was an important source of error. It is important to note that preanalytical factors relating to sample integrity (e.g., hemolysis, icterus, lipemia) that would be identified in central laboratory testing of serum/plasma may go unrecognized in POCT systems that use a whole blood sample. This may contribute to the lower prevalence of preanalytical quality errors in POCT compared with central laboratory testing. It was beyond the scope of this study to assess whether a particular POCT test was appropriate and necessary for patient management, i.e., errors relating to test order indication and frequency (17).
The actual impact of POCT quality errors on patient care differed from that reported by us previously for central laboratory testing (9). All the quality errors were recorded as having an actual (A) score of either 1 or 2, indicating no change or a minor change in clinical management but without any adverse impact on patient outcome. This assessment was made by senior laboratory staff in consultation with clinical staff where necessary, and is therefore likely to be accurate. The majority of these quality errors related to delays in the performance of POCT, e.g., the particular operator was unable to use the POCT instrument when required so the testing had to be delegated to a different staff member or the sample had to be sent to the central laboratory for analysis. There were no instances of erroneous results being generated by POCT or incorrect documentation of results, and therefore none of the defects reported resulted in any adverse clinical outcomes. Clearly, the potential adverse impact on patient outcomes is much greater, and this was reflected in the higher P scores. However, the P scores were generally lower than those reported previously for central laboratory testing (9). For example, in laboratory testing, two-thirds of quality errors were given a maximum P score of 5, i.e., the error had the potential to result in a clinically significant adverse patient outcome. In contrast, only 3.6% of POCT quality errors were graded as P score of 5, with the great majority graded as P score 2, i.e., the potential for minor clinical management change but no adverse patient outcome. The major reason for this difference was that where POCT equipment was unavailable for use for whatever reason, emergency laboratory testing was available at both acute hospital sites. Any delay in the availability of test results was therefore minimal. 
This does, however, raise the question of whether POCT has any role where there is a rapid-turnaround laboratory service, particularly when its nonavailability is not considered to have an adverse impact.
Many of the quality errors identified here might have been prevented by better training or greater adherence to protocols by operators. Although all POCT operators underwent a comprehensive training program, it appeared that in many instances operators felt unwilling or unable to undertake certain (often very simple) tasks, despite specific training in these topics. This may have been because particular technical problems arose only infrequently in the experience of a given operator, resulting in a lack of familiarity. This finding does, however, point to the need either for enhanced training of operators or for greater support of POCT by laboratory-based staff.
In summary, our study suggests that quality error rates associated with POCT may be considerably higher than those associated with central laboratory testing. This is important information when assessing the potential risks and benefits of introducing POCT. If the potential risk to patient care is increased, then POCT can be justified only when the potential benefits arising from the rapid availability of a test result are high.
2 Nonstandard abbreviations:
- POCT, point-of-care testing;
- WHSCT, Western Health and Social Care Trust;
- Hb A1c, hemoglobin A1c;
- IQC, internal quality control;
- EQA, external quality assurance.
Author Contributions: All authors confirmed they have contributed to the intellectual content of this paper and have met the following 3 requirements: (a) significant contributions to the conception and design, acquisition of data, or analysis and interpretation of data; (b) drafting or revising the article for intellectual content; and (c) final approval of the published article.
Authors' Disclosures or Potential Conflicts of Interest: No authors declared any potential conflicts of interest.
Role of Sponsor: The funding organization played no role in the design of the study, the interpretation of data, or the preparation or review of the manuscript. No patients were enrolled.
- Received for publication February 24, 2011.
- Accepted for publication June 21, 2011.
- © 2011 The American Association for Clinical Chemistry