## Abstract

**BACKGROUND:** QC planning based on risk management concepts can reduce the probability of harming patients due to an undetected out-of-control error condition. It does this by selecting appropriate QC procedures to decrease the number of erroneous results reported. The selection can be easily made by using published nomograms for simple QC rules when the out-of-control condition results in increased systematic error. However, increases in random error also occur frequently and are difficult to detect, which can result in erroneously reported patient results.

**METHODS:** A statistical model was used to construct charts for the 1_{ks} and *X̅*/χ^{2} rules. The charts relate the increase in the number of unacceptable patient results reported due to an increase in random error with the capability of the measurement procedure. They thus allow for QC planning based on the risk of patient harm due to the reporting of erroneous results.

**RESULTS:** The 1_{ks} rules are simple, general-purpose rules. Their ability to deal with increases in within-run imprecision is minimally affected by the possible presence of significant, stable between-run imprecision. *X̅*/χ^{2} rules perform better when the number of controls analyzed in each QC event is increased to improve QC performance.

**CONCLUSIONS:** Using nomograms simplifies the selection of statistical QC procedures to limit the number of erroneous patient results reported due to an increase in analytical random error. The selection largely depends on the presence or absence of stable between-run imprecision.

Statistical QC (SQC)^{2} based on the periodic analysis of stable control materials is one of the main strategies used to detect instabilities in analytical systems, such as increases in systematic or random errors, to prevent reported patient results from containing medically important errors that could affect clinical decision-making (1, 2).

The ability of an SQC procedure to prevent the reporting of erroneous results is determined by several factors, which include the type of error caused by the analytical instability, the SQC rule used, the number of controls analyzed in each QC event, the frequency of QC events, and the laboratory testing mode (3). The selection of the most suitable SQC procedure for a test should be based on a QC performance measure that considers all of these factors and is directly related to the number of erroneous reported results, and thus to the probability of patient harm. One such measure is the maximum expected increase in the number of unacceptable patient results [Max E(N_{UF})] reported during the presence of an undetected out-of-control error condition (4).

Accordingly, the probability of patient harm can be reduced by minimizing the Max E(N_{UF}) value via the selection of appropriate SQC procedures. This value can be easily estimated for simple rules from the σ value that characterizes the analytical process under control. The estimation uses nomograms (5) assuming that out-of-control conditions increase the systematic error. However, increases in random error also occur frequently and are often difficult to detect, leading to erroneously reported results. Out-of-control conditions that increase imprecision are likely to persist in modern laboratories, which analyze samples in a continuous stream rather than in batches and which employ QC events at relatively high frequency rather than episodes of in-depth maintenance of the analytical system.

In this work, we investigate the relationship between Max E(N_{UF}) and the σ value of the analytical process under control when a persistent increase in random error occurs for simple QC rules of the form 1_{ks} (k = 2, 2.5, 3, 3.5, 4) and for an *X̅*/χ^{2} multirule that combines the mean rule and a within-run SD rule (6) for different probabilities of false rejection. The χ^{2} rule uses a χ^{2} test to check whether the distribution of results obtained from repeated analysis of a control material in a QC event (measured by the SD) is greater than that expected from the stable imprecision of the measurement procedure. The *X̅* rule checks whether the mean of these control results is too different from the mean assigned to the control, according to the sampling distribution of the mean, given the stable imprecision of the measurement procedure.

Charts plotted from the results aid the selection of the most appropriate SQC procedure for risk management to limit the number of erroneous results reported due to increased measurement imprecision. The effect of the presence of between-run imprecision on the practical usefulness of these nomograms is also investigated, because the error detection ability of a rule depends on the relative magnitudes of the stable between-run and within-run imprecision components (7).

## Materials and Methods

The statistical model used in this study has been described previously (3). A continuous-mode testing process with bracketed QC is assumed: patient samples are analyzed in a continuous stream, and periodically a set of control samples is analyzed. Their results are evaluated according to the control rule. If the control results are rejected, patient results obtained since the last accepted controls are retained until the problem is corrected; the patient samples are reanalyzed, and the test results are not reported until the next set of control results is accepted. It is assumed that an out-of-control error condition can occur with equal probability anywhere within the stream of samples being analyzed, and that it persists until it is detected.

The inception of an out-of-control condition that increases analytical random error will increase the imprecision of the measurement procedure during stable operation. It will thus increase the number of patient results produced with errors exceeding the maximum allowable total error (TE_{a}).

The term E(N_{UF}) represents the expected increase in the number of unacceptable patient results reported due to an undetected out-of-control error condition that increases analytical imprecision. It depends on the size of the increase in random error (ΔRE) and the power of the QC procedure to detect this condition and to prevent the reporting of the erroneous results. For a continuous-mode process using bracketed QC that encounters a persistent out-of-control error condition, the E(N_{UF}) value can be computed, according to (3), as in Eq. 1:
E(N_{UF}) = ΔPE (1 − P_{1}) [M (ARL_{ED} − 1) + E(N_{0})] (1)

where ΔPE is the increase in the probability of producing unacceptable patient results due to the out-of-control error condition, ARL_{ED} is the average run length to error detection (the average number of batches that are processed before a QC rule rejection occurs), M is the average number of patient samples per batch, P_{1} is the probability of a QC rule rejection at the first QC event after the out-of-control error condition occurs, and E(N_{0}) is the expected number of patient results produced between the time an out-of-control error condition occurs and the next QC event.

For QC rules that test only the control samples in the current analytical run, ARL_{ED} = 1/P_{ED} and P_{1} = P_{ED}, where P_{ED} is the probability of error detection by the QC procedure. On the other hand, assuming that an out-of-control error condition can occur with equal probability anywhere within the stream of samples being analyzed and that it persists until it is detected, E(N_{0}) can be closely approximated by M/2. Therefore, Eq. 1 can be written as:
E(N_{UF}) = ΔPE (1 − P_{ED}) [M (1/P_{ED} − 1) + M/2] (2)

Unless otherwise specified, it is assumed that random error refers only to the within-run imprecision and that the analytical procedure is unbiased; therefore, σ is defined as TE_{a}/s_{a}, where s_{a} is the analytical SD. ΔPE is computed as the probability that the error exceeds TE_{a} in the presence of an out-of-control error condition minus the probability that the error exceeds TE_{a} when the process is in control (8).
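With the substitutions ARL_{ED} = 1/P_{ED}, P_{1} = P_{ED}, and E(N_{0}) ≈ M/2, the expected increase in reported unacceptable results reduces to E(N_{UF}) = ΔPE(1 − P_{ED})[M(1/P_{ED} − 1) + M/2]. A minimal sketch of this computation (the function name is ours, not from the paper):

```python
def e_nuf(delta_pe: float, p_ed: float, m: float) -> float:
    """E(N_UF), Eq. 2: expected increase in unacceptable patient results
    reported for continuous-mode testing with bracketed QC and a
    persistent out-of-control error condition."""
    # Reported erroneous results = the partial batch before the first
    # QC event (reported only if that event accepts, probability
    # 1 - P_ED) plus one full batch of M results for each further
    # accepting QC event before the first rejection.
    return delta_pe * (1.0 - p_ed) * (m * (1.0 / p_ed - 1.0) + m / 2.0)
```

With the rounded probabilities of the worked example given later (ΔPE = 0.045, P_{ED} = 0.378, M = 100), this returns ≈6.0; the unrounded probabilities give the 6.1 quoted in the text.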

For 1_{ks} rules, P_{ED} is computed by integration of the density function of the normal distribution. The *X̅*/χ^{2} rule is a multirule based on the sequential application of a mean rule, *X̅*, and a variance rule, χ^{2}. Its P_{ED} is calculated as P_{ED} = P_{X} + (1 − P_{X})P_{χ2}, where P_{X} and P_{χ2} are, respectively, the probabilities of error detection of the *X̅* and χ^{2} rules when applied separately. P_{X} is computed by integration of the density function of the normal distribution, and P_{χ2} is computed using the χ^{2} cumulative distribution function. The probability of false rejection for the *X̅*/χ^{2} rule is calculated as P_{FR} = P_{FRX}(2 − P_{FRX}), where P_{FRX} is the probability of false rejection for each of the *X̅* and χ^{2} rules when applied separately. Rejection limits are obtained so that P_{FRX} = 1 − (1 − P_{FR})^{1/2} for P_{FR} values of 0.05, 0.01, and 0.002 for the *X̅*/χ^{2} rule. The limits of rejection for the *X̅* rule (expressed as multiples of s_{a}/√n, where n is the number of QC samples analyzed in each QC event) are 2.237, 2.806, and 3.291 for P_{FRX} values of 0.0253, 0.0050, and 0.0010, respectively. The limits of rejection for the χ^{2} rule can be obtained from a χ^{2} distribution table by selecting the critical χ^{2} value for the corresponding P_{FRX}. During stable operation (i.e., ΔRE = 1), when the probability of a false rejection equals that of error detection (i.e., P_{ED} = P_{FR}), the average number of patient samples analyzed between QC rejections (ANP_{FR}) can be estimated as M/P_{FR}.
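The relationship between P_{FR}, P_{FRX}, and the *X̅* rejection limits can be checked numerically; a sketch using Python's standard-library normal distribution (the function name is illustrative):

```python
from statistics import NormalDist

def x_bar_limit(p_fr: float) -> float:
    """Two-sided rejection limit (in multiples of s_a/sqrt(n)) for the
    X-bar component, given the overall false-rejection probability
    P_FR of the X-bar/chi-square multirule."""
    # Split P_FR evenly between the two component rules:
    # P_FR = P_FRX * (2 - P_FRX)  =>  P_FRX = 1 - (1 - P_FR)**0.5
    p_frx = 1.0 - (1.0 - p_fr) ** 0.5
    # Two-sided standard-normal quantile for a rate of P_FRX
    return NormalDist().inv_cdf(1.0 - p_frx / 2.0)

for p_fr in (0.05, 0.01, 0.002):
    p_frx = 1.0 - (1.0 - p_fr) ** 0.5
    print(f"P_FR={p_fr}: P_FRX={p_frx:.4f}, limit={x_bar_limit(p_fr):.3f}")
```

This reproduces the limits quoted above (2.237, 2.806, and ≈3.291) to within rounding.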

A run is rejected by the 1_{ks} rule when at least 1 control observation exceeds a set limit of k analytical SDs, and it is rejected by the *X̅*/χ^{2} rule when it is rejected by either component rule. The *X̅* rule rejects a run when the mean of a set of n control observations exceeds the control limits corresponding to the desired P_{FRX}. The χ^{2} rule rejects a run when the ratio *s*_{r}^{2}(n − 1)/*s*_{w}^{2} exceeds the critical χ^{2} value corresponding to the desired P_{FRX}, where *s*_{r} is the SD of the n control results analyzed in each QC event, and *s*_{w} is the stable within-run SD of the measurement procedure. If no between-run imprecision is assumed, *s*_{w} equals s_{a}, the analytical SD. This χ^{2} rule is equivalent to the *s*_{w}^{2} rule described and evaluated by Linnet (9) and to the *s*_{w,1}^{2} rule evaluated by Parvin (7).
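The rejection logic just described can be sketched as follows (names are ours; the χ^{2} critical value is supplied by the caller rather than computed, and the controls are expressed as deviations from the assigned control mean):

```python
from math import sqrt
from statistics import mean, stdev

def reject_run(controls, s_a, z_lim, chi2_crit):
    """X-bar/chi-square multirule: reject the run if either component
    rule rejects. `controls` holds the n control results as deviations
    from the assigned mean; `s_a` is the stable analytical SD (no
    between-run imprecision assumed, so s_w = s_a)."""
    n = len(controls)
    # X-bar rule: |mean| beyond z_lim multiples of s_a / sqrt(n)
    if abs(mean(controls)) > z_lim * s_a / sqrt(n):
        return True
    # chi-square rule: s_r^2 (n - 1) / s_w^2 beyond the critical value
    s_r = stdev(controls)  # SD of the controls in this QC event
    return s_r ** 2 * (n - 1) / s_a ** 2 > chi2_crit
```

For example, with s_{a} = 2.5, n = 2, and a *X̅* limit of 2.237, a control mean of 5 exceeds 2.237 × 2.5/√2 ≈ 3.95 and the run is rejected by the *X̅* component alone.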

To study the dependence of the E(N_{UF}) value for a given SQC procedure on the magnitude of an increase in analytical random error, ΔRE, the corresponding values of P_{ED} and ΔPE are calculated for each value of ΔRE, and then the E(N_{UF}) values are estimated using Eq. 2.

All calculations assume that 100 patient samples are analyzed between QC events (i.e., M = 100) and that ΔRE is expressed as a multiple of the stable s_{a} of the measurement procedure.

Consider, for example, a measurement procedure with TE_{a} = 10 and s_{a} = 2.5, therefore representing a 4σ process (10/2.5); it is controlled by a 1_{2.5s} QC rule with 2 control samples (of the same or different materials) analyzed in each QC event (n = 2). Suppose that an out-of-control episode occurs that increases the random error, doubling the stable analytical imprecision to 5.0 (ΔRE = 2). The P_{ED} by the SQC procedure and the ΔPE due to the out-of-control error condition can be numerically estimated as 0.378 and 0.045, respectively. Therefore, from Eq. 2, E(N_{UF}) = 6.1.

The P_{ED} value is calculated as 1 − (1 − P_{ED1})^{2}, where P_{ED1} is the probability of detecting an increase in random error when n = 1. P_{ED1} is calculated as 1 − 0.789, which results from integrating the probability density function of the normal distribution, P(*x*), with a mean of zero and an SD of 5 (i.e., s_{a}ΔRE) between *x* = 6.25 and *x* = −6.25 (i.e., between ks_{a} and −ks_{a}).

The ΔPE value is calculated as PE(RE) − PE(0), where PE(RE) and PE(0) respectively represent the probability that the error of a patient result exceeds TE_{a} in the presence or absence of the increase in random error. PE(RE) = 1 − 0.9545, which results from integrating P(*x*) with a mean of zero and an SD of 5 (i.e., s_{a}ΔRE) between *x* = 10 and *x* = −10 (i.e., between TE_{a} and − TE_{a}). Similarly, PE(0) = 1 − 0.999937, which results from integrating P(*x*) with a mean of zero and an SD of 2.5 (i.e., s_{a}) between *x* = 10 and *x* = −10 (i.e., between TE_{a} and −TE_{a}).
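The whole worked example can be reproduced numerically; a self-contained sketch (variable names are ours) that recovers P_{ED} ≈ 0.378, ΔPE ≈ 0.045, and E(N_{UF}) ≈ 6.1:

```python
from statistics import NormalDist

# Worked example: TEa = 10, s_a = 2.5 (a 4-sigma process), 1_{2.5s}
# rule, n = 2 controls per QC event, M = 100, Delta-RE = 2 (SD -> 5.0)
TEA, SA, K, N, M, DRE = 10.0, 2.5, 2.5, 2, 100, 2.0
nd = NormalDist(0.0, SA * DRE)   # result distribution under the error
nd0 = NormalDist(0.0, SA)        # stable (in-control) distribution

# Probability one control exceeds +/- k*s_a under the error condition,
# then detection with n = 2 independent controls
p_ed1 = 1.0 - (nd.cdf(K * SA) - nd.cdf(-K * SA))
p_ed = 1.0 - (1.0 - p_ed1) ** N

# Increase in the probability of unacceptable results (|error| > TEa)
pe_re = 1.0 - (nd.cdf(TEA) - nd.cdf(-TEA))
pe_0 = 1.0 - (nd0.cdf(TEA) - nd0.cdf(-TEA))
d_pe = pe_re - pe_0

# Eq. 2: expected increase in unacceptable results reported
e_nuf = d_pe * (1.0 - p_ed) * (M * (1.0 / p_ed - 1.0) + M / 2.0)
print(round(p_ed, 3), round(d_pe, 3), round(e_nuf, 1))
```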

The data represented in Fig. 1 were obtained by repeating these calculations for different values of ΔRE, where the maximum value achieved by E(N_{UF}) represents the Max E(N_{UF}) value. Data represented in Figs. 2 and 3 were obtained by using this method to estimate the Max E(N_{UF}) values for different QC procedures controlling analytical processes with different σ values.
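The scan over ΔRE that produces the Fig. 1 curve can be sketched as follows (names are ours); for this configuration it locates Max E(N_{UF}) ≈ 7.0 near ΔRE ≈ 2.6, matching the worst case reported in the Results:

```python
from statistics import NormalDist

TEA, SA, K, N, M = 10.0, 2.5, 2.5, 2, 100
nd0 = NormalDist(0.0, SA)  # stable (in-control) distribution

def e_nuf_at(dre: float) -> float:
    """E(N_UF) from Eq. 2 for a given increase in random error dre."""
    nd = NormalDist(0.0, SA * dre)
    p_ed1 = 1.0 - (nd.cdf(K * SA) - nd.cdf(-K * SA))
    p_ed = 1.0 - (1.0 - p_ed1) ** N
    pe_re = 1.0 - (nd.cdf(TEA) - nd.cdf(-TEA))
    pe_0 = 1.0 - (nd0.cdf(TEA) - nd0.cdf(-TEA))
    d_pe = pe_re - pe_0
    return d_pe * (1.0 - p_ed) * (M * (1.0 / p_ed - 1.0) + M / 2.0)

# Scan Delta-RE from 1.05 to 6.00 to locate the worst case
dres = [1.0 + 0.05 * i for i in range(1, 101)]
max_e, argmax = max((e_nuf_at(d), d) for d in dres)
print(round(max_e, 1), round(argmax, 2))
```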

The effects of between-run imprecision on the QC procedures are assessed by computer simulation as previously described by Parvin (7). This investigation aims to evaluate the ability of SQC procedures using the 1_{ks} rule to limit the number of erroneous patient results reported in the presence of stable between-run imprecision. Although in this study the behavior of the 1_{ks} and *X̅*/χ^{2} rules is assessed only for increases of within-run imprecision in the absence of analytical bias, both rules can also detect increases in between-run imprecision and systematic error, an ability that could be affected by the presence of stable between-run imprecision. This is of particular interest in the case of the 1_{ks} rules, which are widely used in QC planning aimed at detecting increases in systematic error. For this purpose Max E(N_{UF}) values are estimated for out-of-control conditions leading to systematic shifts or to increases in within-run or between-run imprecision for QC procedures using the 1_{ks} rule with ratios of between-run to within-run SD (Φ_{S}) of 0 and 1. The effects of an unnoticed between-run component on P_{FR} of the 1_{ks} and *X̅*/χ^{2} rules are assessed by computing P_{FR} for Φ_{S} values of 1 and 2 with the same rejection limits used for Φ_{S} = 0, and comparing them with their original values.

## Results

Fig. 1 shows the increase in E(N_{UF}) reported during an undetected out-of-control error condition as a function of the magnitude of ΔRE. The plot is for a 4σ analytical procedure with 100 patient samples analyzed between QC events and 2 control samples (of the same or different materials) analyzed in each QC event. A 1_{2.5s} QC rule is applied, assuming continuous-mode testing with bracketed QC, as described above. The P_{ED} of the SQC procedure is also plotted as a function of ΔRE in Fig. 1.

E(N_{UF}) is zero for the analytical process during stable operation (ΔRE = 1), and increases with increasing ΔRE until it reaches a maximum, Max E(N_{UF}), before slowly decreasing. In the initial part of the curve, P_{ED} is small and most of the few erroneous results produced are reported. However, later in the curve, where P_{ED} is greatest, only a few of the many erroneous results produced are reported. At the maximum, many of the erroneous results are reported, which corresponds to the worst-case situation for the QC procedure. For this example, Max E(N_{UF}) = 7.0 at ΔRE = 2.6, when P_{ED} = 0.56.

Values for Max E(N_{UF}) were estimated for different σ values (ranging from 2 to 7) for SQC procedures employing the 1_{ks} rule (for k = 2, 2.5, 3, 3.5, 4) and different numbers of samples tested in each QC event (from 1 to 4). The results, plotted in Fig. 2, are for 100 patient samples analyzed between QC events by continuous-mode testing with bracketed QC.

These graphical representations can be used as nomograms for selecting the appropriate QC procedure to obtain a specified Max E(N_{UF}) for a known σ. Consider the example of a laboratory employing continuous-mode testing with bracketed QC and an average of 100 patient samples between QC events. An analytical procedure with σ = 4 could report up to 7 unacceptable patient results, on average, whenever an increase in random error occurs, if the laboratory selects a 1_{3s} QC procedure with n = 3. If the laboratory wants to reduce patient risk, it can use the same SQC procedure; however, by setting M = 29 rather than 100, it will obtain Max E(N_{UF}) = 7 × 29/100 = 2, because Max E(N_{UF}) is proportional to the number of patient samples analyzed between QC events. This strategy decreases the average number of patient samples analyzed between false rejections, ANP_{FR}, from 12340 to 3579, and thus increases the rate of false rejections and the number of controls analyzed per patient sample.

Alternatively, the laboratory can change to a 1_{2.5s} rule with n = 4 and M = 50 to obtain Max E(N_{UF}) = 1, but with a higher frequency and rate of false rejections (P_{FR} increases from 0.008 to 0.049 and ANP_{FR} decreases from 12340 to 1025 patient samples).
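The trade-offs in these examples rest on two simple relations: Max E(N_{UF}) scales proportionally with M, and ANP_{FR} = M/P_{FR}, with P_{FR} = 1 − (1 − p_{1})^{n} for a 1_{ks} rule, where p_{1} is the two-sided tail probability beyond k SDs. A sketch (the function name is ours; the results agree with the quoted ANP_{FR} values to within rounding):

```python
from statistics import NormalDist

def p_fr_1ks(k: float, n: int) -> float:
    """False-rejection probability of a 1_ks rule with n controls."""
    p1 = 2.0 * (1.0 - NormalDist().cdf(k))  # one control beyond +/- k SD
    return 1.0 - (1.0 - p1) ** n

# 1_3s with n = 3: P_FR ~ 0.008, so ANP_FR ~ 12,400 at M = 100
print(100 / p_fr_1ks(3.0, 3))
# 1_2.5s with n = 4 and M = 50: P_FR ~ 0.049, ANP_FR ~ 1,025
print(50 / p_fr_1ks(2.5, 4))
```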

Analogous nomograms are shown in Fig. 3 for QC procedures that use *X̅*/χ^{2} rules with different probabilities of false rejection (P_{FR} = 0.05, 0.01, 0.002) and different numbers of samples tested in each QC event (n = 2, 3, 4, 5, 6, 8). Fig. 3 can be used in the same way as Fig. 2 to select an appropriate QC procedure to limit the number of erroneous results reported in the event of increased random error.

Continuing the example, the laboratory can obtain Max E(N_{UF}) = 1 by using the *X̅*/χ^{2} rule with P_{FR} = 0.05, n = 4, and M = 56 (Fig. 3, P_{FR} = 0.05) instead of the simple 1_{2.5s} rule with n = 4 and M = 50. Given that the ANP_{FR} value is similar for both strategies (1025 vs 1000), using the *X̅*/χ^{2} rule in this case results in a slight improvement, because it allows for more patient samples to be analyzed between QC events. When adjusting for ANP_{FR} (i.e., P_{FR}), this advantage of the *X̅*/χ^{2} rule becomes greater as n increases, but it is minimal for n = 3, and practically nonexistent for n = 2. For example, the Max E(N_{UF}) values for the *X̅*/χ^{2} rule with n = 3 and ANP_{FR} = 10000 (Fig. 3) are only slightly smaller than those for the 1_{3s} rule with n = 3 (Fig. 2), although the ANP_{FR} value is slightly greater (ANP_{FR} = 12340) in the latter case.

The nomograms in Figs. 2 and 3 were constructed assuming zero between-run imprecision and that the increase in random error to be detected by the SQC procedure is entirely due to an increase in within-run imprecision. However, a rule's error detection ability depends on the type of error and the relative magnitudes of the between-run and within-run imprecision components under stable operating conditions (7, 9, 11).

This is exemplified by Fig. 4, which plots Max E(N_{UF}) as a function of σ for the 1_{2.5s} rule with n = 4 when the increase in analytical error to be detected by the QC procedure is due to a systematic shift (ΔSE), or due to an increase in within-run (ΔRE_{w}) or between-run imprecision (ΔRE_{b}). The effect of an unnoticed stable between-run imprecision is assessed for a Φ_{S} of 1 and is represented by the dashed lines in Fig. 4.

The presence of stable between-run imprecision has practically no effect on Max E(N_{UF}) when the increase in random error is due to within-run imprecision (line Φ_{S} = 1, ΔRE_{w} vs line Φ_{S} = 0, ΔRE_{w}). However, Max E(N_{UF}) increases significantly when the out-of-control condition increases the systematic error (Φ_{S} = 1, ΔSE vs Φ_{S} = 0, ΔSE) or between-run imprecision (Φ_{S} = 1, ΔRE_{b} vs Φ_{S} = 0, ΔRE_{b}). The performance of the QC procedure for increases in between-run imprecision varies in parallel with that observed for increases in systematic error. This is because, for rules that test only the control samples in the current analytical run, the increase in between-run error represents a systematic shift in the QC mean (7). Qualitatively identical behavior can be demonstrated for all QC procedures in Fig. 2 to at least Φ_{S} = 1.

The presence of a stable between-run imprecision has little effect on the estimated P_{FR} values for the 1_{ks} QC rules in Fig. 2. In general, P_{FR} values decrease slightly when Φ_{S} increases, allowing rules that are more sensitive (lower k) to be used while maintaining the original value for P_{FR}. However, the improvement achieved using this adjustment is not usually important.

In contrast with 1_{ks} rules, the P_{FR} of the *X̅*/χ^{2} rules increases substantially in the presence of an unnoticed between-run imprecision, owing to an increase in the P_{FR} of the *X̅* component. This effect is greater at lower original P_{FR} (Φ_{S} = 0) and at greater n and Φ_{S}. For example, the P_{FR} of an *X̅*/χ^{2} rule with n = 2 and P_{FR} = 0.05 increases only to 0.07 when Φ_{S} increases from 0 to 1, whereas when n = 8 and P_{FR} = 0.002, this value becomes 0.20 when Φ_{S} increases from 0 to 2.

## Discussion

This study presents a set of nomograms for 1_{ks} and *X̅*/χ^{2} rules that facilitate the selection of the most suitable SQC procedure for limiting the number of erroneous results reported because of an increase in random error. These findings complete those published previously for increases in systematic error (5).

The 1_{ks} rules are widely used in clinical laboratories because they are simple, intuitive, and easy to understand by technicians. They are responsive to increases in systematic and random errors (10, 11), and are more powerful than variance and range rules for detecting increases in random error when the number of controls analyzed by each QC event is low (10). However, comparing random and systematic error nomograms (5) for these rules shows that Max E(N_{UF}) values are greater for increases in random error, especially for the higher capability analytical methods, with the differences often exceeding an order of magnitude. Therefore, out-of-control conditions represent generally greater patient risk when they increase random errors rather than systematic errors. This is not surprising, because the ability of a QC rule to detect medically important random errors is usually less than its ability to detect medically important systematic errors (12, 13). This emphasizes the need for including strategies in QC planning to limit the probability of patient harm due to an increase in the stable analytical imprecision of the measurement procedure.

The probability of patient harm due to increased analytical error depends on the number of erroneous results reported as a consequence of the out-of-control error condition [Max E(N_{UF})], the average number of patient results examined between failures, and the probability that an incorrect result leads to patient harm (14). Because these last 2 factors may vary depending on the test considered, the desired value for Max E(N_{UF}) should be defined individually for each test to reduce the probability of patient harm to an acceptable level given the clinical use of the test and the reliability of the analytical system.

The use of the *X̅*/χ^{2} rule can be an alternative to simple 1_{ks} rules for improving QC performance. It combines a mean rule, *X̅*, sensitive to systematic error, and a variance rule, χ^{2}, sensitive to random error (6). Although no QC rule is best under all error conditions, this type of multirule appears to offer an attractive compromise (11). The use of a range or χ^{2} rule with within-run SD limits has been recommended for optimizing the performance of a QC system for detecting random errors (15), although the χ^{2} rule should be preferred, because it is superior to the range rule when n > 2 (6, 10).

The results of the present study indicate that *X̅*/χ^{2} rules are preferable to 1_{ks} rules when a reduction in the number of erroneous patient results reported due to increased random error is achieved by increasing the number of control samples analyzed in each QC event. In these cases *X̅*/χ^{2} rules are also expected to perform better for increases in systematic error (10).

The usefulness of the nomograms for the 1_{ks} rules (Fig. 2) in QC planning is not compromised by the presence of a stable between-run imprecision at least equal to the stable within-run imprecision of the measurement procedure. In contrast, when using the *X̅*/χ^{2} nomograms in Fig. 3, it is necessary to verify that the imprecision does not include a significant between-run component. Otherwise, the actual P_{FR} could be much greater than the nominal value given by the nomogram, especially for SQC procedures that analyze a high number of controls per run.

The presence of stable between-run imprecision decreases the ability of the rules investigated here to limit the number of erroneous patient results reported due to increased systematic error. In the case of the *X̅*/χ^{2} rule, this occurs through an increase in P_{FR}. Although a reduction in the magnitude of stable between-run imprecision [or even its elimination, as suggested by some authors (15)] may be desirable, it cannot always be achieved. When the between-run imprecision cannot be reduced sufficiently, the ability to detect increasing within-run imprecision can be significantly improved by using QC rules based on this component (7, 15), such as the χ^{2} rule.

Accordingly, *X̅*/χ^{2} rules are also preferable to 1_{ks} rules when a substantial noncorrectable between-run component of variation exists. However, in this case, the differences in performance between these rules for increasing systematic error are expected to be smaller than when a between-run component is absent. These differences tend to disappear as the contribution of the between-run component to the total imprecision increases, because that component puts an upper limit on the achievable power for detecting systematic errors (10) and tends to equalize the performance of these rules.

The use of *X̅*/χ^{2} rules as described in this study requires each QC event to analyze 2 or more replicates of the same control material. This could represent a practical problem for laboratories that use QC procedures with materials at different concentration levels. However, this difficulty can be easily overcome by standardizing the measurements, *x*_{i}, to *u*_{i} = (*x*_{i} − *x̄*_{c})/*s*_{c}, with mean 0 and SD 1, when analyzing control samples (10), where *x̄*_{c} and *s*_{c} represent the mean and SD assigned to the control material, respectively. The *X̅* rule is applied by calculating the mean of the *u*_{i} values and comparing the result with the limits detailed in the Materials and Methods section, but expressed in this case as multiples of 1/√n instead of s_{a}/√n. The χ^{2} rule is applied by calculating *s*_{u}^{2}(n − 1), where *s*_{u} is the SD of the n values of *u*_{i}, and comparing the result with the critical χ^{2} value.
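This standardization can be sketched as follows (names and numeric values are illustrative; the χ^{2} critical value is supplied by the caller):

```python
from math import sqrt
from statistics import mean, stdev

def standardize(results, mean_c, sd_c):
    """Convert control results to u_i = (x_i - xbar_c) / s_c, so that
    controls at different concentration levels share mean 0 and SD 1."""
    return [(x - mean_c) / sd_c for x in results]

def reject_run_u(u, z_lim, chi2_crit):
    """X-bar/chi-square multirule on standardized values: the X-bar
    limit becomes z_lim / sqrt(n) and the chi-square statistic is
    s_u^2 * (n - 1)."""
    n = len(u)
    if abs(mean(u)) > z_lim / sqrt(n):
        return True
    return stdev(u) ** 2 * (n - 1) > chi2_crit

# Two control levels pooled into one QC event, each standardized with
# its own assigned mean and SD (made-up values for illustration):
u = standardize([101.0, 99.5], 100.0, 2.0) + standardize([202.0, 198.0], 200.0, 4.0)
print(reject_run_u(u, 2.237, 9.35))  # chi2 critical value for df = 3
```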

On the other hand, the results obtained suggest that current QC strategies used in laboratories may not be adequate to reduce residual risk to acceptable levels when considering the possibility of increases in random error. These strategies are often selected in traditional approaches to QC planning for their ability to detect increases in systematic, rather than random errors, and without taking into account the laboratory testing mode or the frequency of control analyses. The number of erroneous results reported due to an increase in random error is proportional to the number of patient samples analyzed between QC events (M). This could lead some QC procedures selected in this way to be ineffective in limiting the residual risk if the number of patient samples analyzed between QC events is excessively large relative to the probability of error detection. Note that the value of M = 100 used throughout this study is chosen only as a reference to express the results, and is neither a recommendation nor a limit that should not be exceeded. A laboratory may analyze any number of patient samples between QC events provided that the number of erroneous results reported is sufficiently low to ensure that the level of residual risk is acceptable given the capability of the measurement procedure, the reliability of the analytical system—which determines the number of patient samples analyzed between failures—and the risk (i.e., probability and severity of harm) that an erroneous result represents for the patient.

The selection of appropriate SQC procedures based on the *X̅*/χ^{2} rule to reduce the probability of patient harm to a predefined level is complicated in the presence of between-run imprecision. It requires more complex tools than the simple nomograms presented here to account for the relative importance of the between-run component when random or systematic errors increase.

## Footnotes

^{2} Nonstandard abbreviations: SQC, statistical QC; Max E(N_{UF}), maximum expected increase in the number of unacceptable final patient results; TE_{a}, allowable total error; ΔRE, increase in random error; ΔPE, increase in the probability of producing unacceptable patient results due to the out-of-control error condition; M, average number of patient samples per batch; P_{ED}, probability of error detection; s_{a}, analytical SD; P_{FR}, probability of false rejection; ANP_{FR}, average number of patient samples analyzed between QC rejections; ΔSE, increase in systematic error; Φ_{S}, ratio of between-run to within-run SD; ΔRE_{w}, increase in within-run imprecision; ΔRE_{b}, increase in between-run imprecision.

**Author Contributions:** *All authors confirmed they have contributed to the intellectual content of this paper and have met the following 3 requirements: (a) significant contributions to the conception and design, acquisition of data, or analysis and interpretation of data; (b) drafting or revising the article for intellectual content; and (c) final approval of the published article.*

**Authors' Disclosures or Potential Conflicts of Interest:** *No authors declared any potential conflicts of interest.*

**Role of Sponsor:** No sponsor was declared.

- Received for publication October 4, 2016.
- Accepted for publication January 9, 2017.

- © 2017 American Association for Clinical Chemistry