## Abstract

The formal definition of sensitivity associates the term with the change in the response of a system for a small change of the stimulus causing the response, i.e., the ratio of the response of a system to the stimulus causing it. One interpretation of sensitivity associates the rate of change of the response for a small change of the stimulus as the slope of a calibration plot of response vs stimulus. An alternative interpretation associates sensitivity with the smallest value of the stimulus that can be resolved with a given degree of confidence, i.e., the detection limit. Applications of the first usage to analytical chemistry date at least to the beginning of this century; applications of the second interpretation are of more recent origin. The accompanying paper argues in favor of the second interpretation on the basis that, among other things, the “slope” interpretation conflicts with the formal definition of sensitivity and is meaningless as a descriptor of the performance of a measuring system. In this paper I offer arguments to support my belief that the slope definition of sensitivity is consistent with both formal definitions and accepted usage in analytical chemistry and, more importantly, that it is an invaluable descriptor of one of the most important characteristics of any analytical method. I include information to support my belief that proper use of the slope definition yields much more information than is available in the “detection limit” interpretation.

In the paper by Ekins and Edwards, sensitivity was defined in terms of the imprecision (of the dose) at zero dose (1). The authors expressed hope for an open debate on this subject and suggested that it is “incumbent on the protagonists of the slope definition to justify its utility and, most particularly, to indicate if they regard an improvement in sensitivity (as thus defined) as advantageous—and, if so, on what grounds.” I argue in support of the “slope” definition of sensitivity for several reasons (2). First, this interpretation is consistent with formal definitions of the term. Second, there is historical precedent for the slope definition, which is easily traced at least to the beginning of this century (3). Finally, and most importantly, the slope definition provides much more information about an analytical method than either the “imprecision” definition proposed by Ekins and Edwards (1) or the related “detection limit” definition (4).

Before proceeding, it will be helpful to identify several caveats related to word usage in this paper. The term *response* is used to represent the output of the measuring system, and the terms *variable of interest* and *stimulus* are used to represent the quantities to be determined (e.g., analyte concentration, enzyme activity, dose). In some cases, the term *measurement objective* (5) is used to represent the quantity used to compute concentration, activity, and so on. The measurement objective may be the measured response, R, a mathematical transform of the measured response (e.g., log R), or some other quantity such as time or volume of reagent (as in a titration). The term *measurement error* is used to represent the uncertainty in the measured value of the response, and *quantitative resolution* is used to represent the corresponding uncertainty (or error) in the determined value of the stimulus (e.g., concentration, dose). The terms *imprecision definition* and *detection limit definition* are used to represent the interpretations of sensitivity in the paper by Ekins and Edwards (1) and related papers (4). Finally, the term *slope definition* is used to represent the IUPAC interpretation of sensitivity (2); when *sensitivity* is used without a qualifier, it is used in the context of the slope definition.

Although it is relatively easy to support the slope definition of sensitivity by using literature citations (3)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18), I shall focus initially on the more quantitative aspects of the subject.

## Quantitative Descriptions of Sensitivity

### Mathematical Relationships

Because Ekins and Edwards suggested that the slope definition of sensitivity has caused some confusion (1), it will be helpful to include a mathematical description of the definition and how it is used.

A measurement system is assumed to give an output response, R, for a given value of the input variable of interest, V. In the simplest form of the slope definition, the sensitivity is defined as the ratio of the change in the response (dR ≈ ΔR) for a given change in the variable of interest (dV ≈ ΔV) as follows:

S = dR/dV ≈ ΔR/ΔV (Eq. 1)

The slope definition could also be described in terms of mathematical transforms of the output response and (or) the input variable (e.g., log R, log V, or both). An important feature of the slope definition of sensitivity is that it provides an unambiguous parameter that can be used to compute the quantitative resolution of any measurement process for any value of the input variable.

Two quantities, namely, the uncertainty in the measured response, ε_{r}, and the sensitivity, S, are needed to quantify the quantitative resolution (QR) of a measurement system for the variable of interest. For these quantities, the quantitative resolution is given by:

QR = ΔV = ε_{r}/S (Eq. 2)

Importantly, both the measurement error and the sensitivity may be different for different values of the input variable; accordingly, the quantitative resolution for many methods is expected to be different for different values of the variable of interest. This is one of the important advantages of the slope definition of sensitivity: It permits calculation of quantitative resolution for different values of sensitivity, measurement error, and the variable of interest. This point is illustrated in the next section.
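As a minimal sketch of the Eq. 2 computation in code (Python here; the function name and the example units are illustrative, not from the paper):

```python
def quantitative_resolution(measurement_error, sensitivity):
    """Eq. 2: the uncertainty in the variable of interest (e.g.,
    concentration) implied by a given measurement error and the local
    sensitivity (slope) of the calibration curve."""
    return measurement_error / abs(sensitivity)

# A sensitivity of 1e4 mV·L/mol with a 0.01 mV measurement error
# resolves concentration to about 1e-6 mol/L.
qr = quantitative_resolution(0.01, 1.0e4)
```

Because both arguments may vary along the calibration curve, the same one-line computation yields a different resolution at each value of the variable of interest.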

### Quantitative Applications

Ekins and Edwards (1) state among other criticisms that “the slope definition … is meaningless as an indicator of the performance of a measuring system.” To refute this statement, I present several situations here to illustrate how sensitivity is used with measurement errors to understand much more about the quantitative performance of analytical methodologies than is possible with either the detection limit (4) or imprecision (1) definitions.

*Graphical/mathematical interpretations.*

Fig. 1 includes two hypothetical calibration plots, a and b, with different sensitivities and measurement errors for two hypothetical methods identified as methods A and B, respectively. Method B is more sensitive than method A because method B gives a larger change in response for a given change in concentration (greater calibration slope) than does method A. Combined effects of the sensitivities and measurement errors can be quantified by projections of the error bars onto the concentration axes. These projections show that the concentration error for method B is smaller than that for method A (Δ*C*_{b} < Δ*C*_{a}) even though the measurement error for method B is larger than that for method A, the reason being that method B is more sensitive than method A.

The concept illustrated graphically in Fig. 1 is easily quantified by using numerical values of the sensitivity and measurement error. From Eq. 2 and the sensitivities and measurement errors in Fig. 1, the concentration errors are Δ*C*_{a} = ε_{a}/S_{a} = 0.004 mV/(0.2 × 10^{4} mV L/mol) = 2 × 10^{−6} mol/L, and Δ*C*_{b} = ε_{b}/S_{b} = 0.01 mV/(1 × 10^{4} mV L/mol) = 1 × 10^{−6} mol/L. If method B had the same sensitivity and a measurement error of ε_{b} = 0.04 mV, then the concentration error would be Δ*C*_{b} = 0.04 mV/(1 × 10^{4} mV L/mol) = 4 × 10^{−6} mol/L. For the first situation (ε_{b} = 0.01 mV), one would conclude that method B is more sensitive and has better quantitative resolution than method A. For the second situation (ε_{b} = 0.04 mV), method B would be considered more sensitive but having poorer quantitative resolution than method A. If the behavior illustrated in Fig. 1 extended to the smallest detectable concentrations, then the same conclusions would apply to the detection limits.
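The arithmetic above can be checked directly. In this short sketch, the numerical values are those given in the text; the variable names are mine:

```python
# Fig. 1 values as given in the text.
S_a, eps_a = 0.2e4, 0.004   # sensitivity (mV·L/mol) and error (mV), method A
S_b, eps_b = 1.0e4, 0.01    # sensitivity (mV·L/mol) and error (mV), method B

dC_a = eps_a / S_a          # concentration error, method A: 2e-6 mol/L
dC_b = eps_b / S_b          # concentration error, method B: 1e-6 mol/L

# Second situation: same sensitivity for method B, larger measurement error.
dC_b2 = 0.04 / S_b          # 4e-6 mol/L — poorer resolution than method A
```

Note that each concentration error follows from Eq. 2 with the sensitivity appropriate to its own method, which is why the larger measurement error of method B can still yield the smaller concentration error.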

*Nonlinear calibration plots.*

The same principles can be extended to more complex systems. Fig. 2A illustrates general shapes of calibration plots for three situations, namely, transmittance vs concentration (curve a), reaction rate vs concentration for enzymatic determination of a substrate (curve b), and some process yielding a sigmoid-shaped calibration plot (curve c). Fig. 2B illustrates the sensitivities of these data; negative values mean that the measurement objective (e.g., transmittance, rate) decreases with increasing values of concentration. For the transmittance and rate curves, sensitivities are greatest at the smallest values of concentration and decrease continuously as concentration increases. For the sigmoid plot, sensitivities are smallest at the smallest and largest concentrations and are largest at intermediate concentrations.

Sensitivity information such as that in Fig. 2B taken alone is useful in planning the design of an analytical method that is based on either type of calibration curve. In general, one would like to avoid regions of very low sensitivity unless very small measurement errors can be achieved in those regions. Figs. 2C and D illustrate the combined effects of measurement errors and sensitivities on absolute (2C) and relative (2D) concentration errors. (Note: Some plots are truncated to keep ordinate scales within reasonable ranges.)

Because the absolute and relative errors in Figs. 2C and D are normalized to measurement errors of 1% of the maximum response, one could compute concentration errors for other values of measurement errors simply by multiplying numerical values from the appropriate plot by other values of the measurement error expressed in the same way. Alternatively, one could generate a family of absolute or relative error curves for each type of calibration plot simply by using different values of the measurement error for each plot.
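The construction behind these error profiles can be sketched numerically. Here a hypothetical Beer's-law transmittance curve stands in for curve a of Fig. 2; the constant k and the concentration grid are my assumptions, not the figure's actual data:

```python
import numpy as np

k = 1.0e3                        # hypothetical absorptivity-path product, L/mol
C = np.linspace(1e-5, 3e-3, 500) # concentration grid, mol/L (assumed)
T = 10.0 ** (-k * C)             # transmittance vs concentration (curve a analog)

S = np.gradient(T, C)            # sensitivity dT/dC; negative, since T falls as C rises
eps = 0.01 * T.max()             # measurement error fixed at 1% of maximum response
abs_err = eps / np.abs(S)        # absolute concentration error (cf. Fig. 2C)
rel_err = abs_err / C            # relative concentration error (cf. Fig. 2D)
```

Because |dT/dC| is greatest at the smallest concentrations, the absolute concentration error grows steadily as concentration increases, reproducing the qualitative behavior described above; scaling `eps` regenerates the whole family of error curves.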

*Variable sensitivity and measurement errors.*

Let me reemphasize: The assumption of a fixed measurement error is not an inherent part of the slope definition, as might be inferred from comments in the preceding paper (1). A fixed measurement error was used in the examples above for the sake of simplicity. To avoid misunderstanding, I illustrate the effects of changing sensitivity and measurement errors in Fig. 3.

The solid curve in Fig. 3A represents a hypothetical plot of absorbance vs concentration for a fixed value of stray light (0.5% *T*) and otherwise ideal behavior. The dashed lines represent the envelope of error bars (±) representing measurement errors, which change with changing absorbance. Fig. 3B shows the sensitivity and measurement-error profiles vs concentration. The sensitivity is relatively independent of concentration at smaller values but then decreases at higher concentrations as a result of stray light. On the other hand, the absolute magnitude of the measurement error starts small and increases throughout the concentration range. One could use Eq. 2 to compute absolute and relative concentration errors at each point along the concentration axis. Shapes would be similar to the corresponding plots in Figs. 2C and D, except that concentration errors would be larger at high concentrations because of the lower sensitivity at higher concentrations.

Effects of stray light on data in Fig. 3 notwithstanding, comparison of data for transmittance in Fig. 2 with data for absorbance in Fig. 3 illustrates how axis transformations are handled when using the slope definition of sensitivity. The continually changing sensitivity and constant measurement error in Figs. 2A and B are replaced by a near-constant sensitivity and constantly changing measurement error in Figs. 3A and B. Such axis transformations do not affect computed values of quantitative resolution when changes in both sensitivity and measurement errors are handled properly.
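This invariance is easy to verify numerically. In the sketch below, the 0.5% *T* stray-light level is taken from the text, while the Beer's-law constant k, the concentration grid, and the size of the transmittance error are my assumptions; the one transmittance error is propagated onto the absorbance axis, and the concentration error is then computed from each axis in turn:

```python
import numpy as np

k, s = 1.0e3, 0.005                    # k hypothetical (L/mol); stray light 0.5% T
C = np.linspace(1e-4, 3e-3, 400)       # concentration grid, mol/L (assumed)
T = (10.0 ** (-k * C) + s) / (1 + s)   # measured transmittance with stray light
A = -np.log10(T)                       # measured absorbance

eps_T = 0.005                          # constant transmittance error (assumed)
eps_A = eps_T / (T * np.log(10))       # the same error, propagated to absorbance

dC_from_T = eps_T / np.abs(np.gradient(T, C))  # Eq. 2 on the transmittance axis
dC_from_A = eps_A / np.abs(np.gradient(A, C))  # Eq. 2 on the absorbance axis
# The two concentration-error profiles agree point by point.
```

The changing sensitivity on one axis is exactly offset by the changing measurement error on the other, so the computed quantitative resolution is the same on both axes, as the text asserts.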

*Linear dynamic range.*

As noted by a reviewer of this paper, sensitivity can also be used to quantify the linear dynamic range of a method in concentration units. If one knows the sensitivity of a method and the values of the signal or other measurement objective corresponding to the lower and upper ends of the linear range, then the concentrations corresponding to the lower and upper limits of the linear range can be computed as the quotient of each signal divided by the sensitivity.
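As a sketch of that computation (the function name and the example signal values are mine, not from the paper):

```python
def linear_range_limits(signal_low, signal_high, sensitivity):
    """Convert the signals bounding the linear region of a calibration
    curve into concentration units by dividing each by the sensitivity."""
    return signal_low / sensitivity, signal_high / sensitivity

# Hypothetical bounding signals of 0.1 and 100 mV with S = 1e4 mV·L/mol
# give a linear dynamic range of roughly 1e-5 to 1e-2 mol/L.
lo, hi = linear_range_limits(0.1, 100.0, 1.0e4)
```

The quotient signal/sensitivity is simply Eq. 2 applied at the two ends of the linear region, so no information beyond the slope and the bounding signals is needed.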

## Response

The above examples illustrate the utility of the slope definition of sensitivity. They also show that high sensitivity is advantageous because higher sensitivities give better quantitative resolution for a given amount of measurement error or, conversely, higher sensitivities can tolerate larger measurement errors for a given amount of quantitative resolution. I now turn to addressing the more subjective aspects of the previous authors’ discussion.

### Formal Definitions

Ekins and Edwards used the definition of sensitivity in the Oxford English Dictionary to rationalize their interpretation of sensitivity. In abridged form, that definition of sensitivity reads: “The degree to which a device … responds to … slight changes in that to which it is designed to respond …” This definition (as well as the unabridged definition) involves a cause/effect, stimulus/response relationship, in which the stimulus is the independent variable and the response is the dependent variable. Moreover, these relationships are consistent with the definition in Webster’s Third New International Dictionary, 1961 edition.

The question is, how does one quantify sensitivity in a manner that preserves the stimulus/response, independent variable/dependent variable relationships inherent in these definitions? One way would be to measure the change in response, R, for a single change in the stimulus, V, and compute the quotient of the change in response divided by the change in stimulus; i.e., S = ΔR/ΔV. As discussed later, this is the way sensitivities of balances (3)(6)(7)(8) and other devices (9)(10) have been described since the early part of this century. Alternatively, one could measure responses for several values of the stimulus, plot values of the response as the dependent variable vs values of the stimulus as the independent variable, and quantify the sensitivity as the slope of the calibration plot. If the calibration curve were nonlinear, one would need to use small changes in the variable of interest to obtain accurate values of the sensitivity for different values of the stimulus. This procedure preserves the cause/effect, independent variable/dependent variable relationships inherent in the formal definitions of sensitivity. This slope definition of sensitivity is a logical extension of the “single-point” procedures that have persisted since the early part of this century (3)(6)(7)(8) and is the general definition of sensitivity given in a variety of analytical texts (11)(12)(13)(14) as well as other treatises (15)(16)(17).

One could define sensitivity as the imprecision of the stimulus, as Ekins and Edwards have done (1). However, to do this introduces a concept (imprecision) not included in the formal definition, inverts the independent variable/dependent variable relationship inherent in the formal definition, and departs from usages dating to the beginning of the century.

### Historical Perspective

Ekins and Edwards associate the imprecision definition of sensitivity with the “original” usages, seeming to imply that the slope definition is of more recent origin. As stated earlier, the basis for the slope definition can be traced to the early part of this century. In a 1919 edition of a text first published in German in 1901 and later translated into English (3), it was stated, “The balance is more sensitive the greater the displacement of the position of equilibrium brought about by the addition of a small weight; e.g., one milligram.” Kolthoff and Sandell (6) defined sensitivity more explicitly; according to those authors, “The sensitivity of a balance is expressed by the amount of deflection of the beam or pointer by a small excess of weight on one of the pans.” Other authors have given equivalent definitions (7)(8).

Early descriptions of sensitivity in terms of stimulus/response relationships are not limited to analytical balances. In a discussion of galvanometers used with pH electrodes, Clark (9) stated, “… sensitivity … is defined as the number of millimeters deflection caused by one microampere.” In a discussion of photographic emulsions used in analytical spectroscopy, Harrison et al. (10) wrote, “We think of one emulsion as being more sensitive than another if it will produce a higher density from a given input of light … ” In a treatise about atomic fluorescence, sensitivity was described as “the slope of the calibration curve, (dx/dC) … ” and a virtually identical statement was made in regard to neutron activation analysis (16).

Several authors have generalized this slope concept. In an instrumental analysis text, for example, Ewing (11) stated, “The sensitivity of an analytical method or instrument may be defined as the ratio of the change in the response R to the change in the quantity or concentration C that is measured: S = dR/dC or ΔR/ΔC.” This definition and others like it (12)(13)(14)(15)(16)(17)(18) retain the independent variable/dependent variable relationships inherent in the dictionary definitions.

It isn’t clear when the detection limit interpretation of sensitivity began to emerge. Several authors have recognized and commented on the usage. For example, Kolthoff et al. (17) stated: “Alternatively, the sensitivity may be defined as the weight corresponding to a change of one scale division. This may be called the *reciprocal sensitivity*” (emphasis added). Similarly, Miller and Miller (18) state: “It is very important to avoid confusing the limit of detection of a technique with its sensitivity. This very common source of confusion probably arises because there is no generally-accepted English word synonymous with ‘having a low limit of detection.’ … The sensitivity of a technique is correctly defined as the *slope* [their emphasis] of the calibration graph … ”

Given the consistency of the slope definition of sensitivity with dictionary definitions and the historical precedent for this definition, one would hope that others might understand why a proponent of the slope definition might believe that it is his or her party that has been crashed, rather than the other way around.

## Discussion

### General

In my view, the best usage of any term is one that *(a)* conforms to the formal definition of the term and *(b)* conveys the most useful information about the issues in question. The slope definition of sensitivity satisfies both these criteria; attention is focused here on the latter point.

The information content of the imprecision and slope interpretations is easily compared through use of Fig. 2C. On the one hand, the information conveyed by the imprecision definition is limited to the open symbols at zero concentration on the three plots. Moreover, that information applies only if the measurement error corresponds to the standard deviation of the measured response. On the other hand, the slope definition leads to a complete error profile that includes not only that at zero concentration but also that at a wide range of values of the variable of interest. Moreover, the slope definition applies not only to random uncertainties but also to systematic errors, e.g., instrumental drift, uncompensated background signals, and reagent blanks. Anyone who embraces the slope definition of sensitivity will find that it leads to a more complete understanding of a measurement process than is available from either the imprecision or detection limit interpretations.

### Specific Responses

Several specific comments in Ekins and Edwards’ paper merit brief comment.

It is not “nonsensical” to expect that sensitivity can depend on the way response and dose variables are plotted. As illustrated by data in Figs. 2 and 3, changes in sensitivity that occur when a plotting format is changed are compensated by complementary changes in the measurement error. As also illustrated by data in Fig. 3, the slope definition of sensitivity does not involve the assumption that other factors such as measurement errors are either constant or irrelevant; an inherent feature of this definition is that changes in both sensitivity and measurement error, whatever their causes, are taken into account.

Apparent implications (1) that there should be a single sensitivity for any method operated under a given set of conditions and that it should be possible to compare sensitivities among different methods also merit comment. Regarding the first point, I learned in my sophomore course in analytical chemistry that the sensitivity of an analytical balance depends on the load, a point made in analytical texts since the early part of the century (3). Others have made the point in other contexts. As examples, Willard et al. (12) note, “A constantly changing response indicates a constantly changing value for sensitivity …” and Massart et al. (16) note that sensitivities in neutron activation analysis depend on matrix effects and imply that such behavior is common knowledge among analytical chemists, a point with which I agree. Regarding the second point, it obviously is not possible to make meaningful comparisons between sensitivities for different devices such as analytical balances (3), galvanometers (9), and photographic emulsions (10) because they all represent different responses of different sensors to different stimuli. This minor inconvenience is easily compensated by using sensitivity and measurement error to compute detection limits, a step that is necessary for meaningful comparisons among methods, whatever terminology one uses. Such erroneous expectations are a byproduct of misrepresentations of the long-standing interpretation (3)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16) and formal (dictionary) definitions of sensitivity.

Finally, there is a discussion of “semantic contradictions” associated with the meaning of high/low sensitivity (1). If, as in the slope interpretation, sensitivity and detection limits are defined such that they are reciprocally related, then there is no contradiction in suggesting that a high value of one favors a low value of the other. However, if sensitivity is equated to imprecision (1), then there is an inescapable contradiction in suggesting that a high value of one favors a low value of the other. The only logical reason for the favorable connotation usually associated with high sensitivity is the fact that the larger the change in response per unit of change of the stimulus, the lower will be the detection limit for any given value of measurement error. This point is inherent in analytical textbooks since the early part of this century (3)(6)(9)(10).

On a personal note, my colleagues and I teach and use the complementary concepts of sensitivity, measurement uncertainty, and detection limits in our undergraduate and doctoral programs—as I suspect many others do. Casual misuse of sensitivity in the literature makes the task difficult (18); formalized misuse will exacerbate the problem.

## Conclusions

Undoubtedly, there exist people who, as suggested by Ekins and Edwards (1), are and wish to remain oblivious to the internal workings of their instruments and analytical methods. However, scientific journals and groups such as NCCLS and IFCC, which set the standards for our profession, should not cater to this lowest common denominator among us. Instead, such groups should set the standards to be followed by those willing to expend the time and effort needed to fully understand the capabilities and limitations of their instrumentation and methodologies. To the extent that such groups are involved in terminology, they should support usages that are consistent with formal definitions and convey maximal information about that which they describe.

The imprecision (1) and detection limit (4) interpretations of sensitivity are inconsistent with formal definitions and accepted usage, and each conveys minimal, albeit important, information about analytical methods and other measurement processes. Moreover, we already have terminology for imprecision (i.e., standard deviation) and the smallest detectable amounts (i.e., detection limits) and we *need* the term sensitivity in its formally defined sense to adequately describe analytical methods throughout the entire ranges of the quantities they are designed to determine.

The “absurdities” and “confusion” to which Ekins and Edwards refer simply do not exist for those who use sensitivity in the context consistent with formal definitions and usages in the analytical literature since the beginning of this century. Such problems originated with and exist only for those who are unable or unwilling to use sensitivity in the context taught by some of the most highly regarded analytical chemists of our time. Accordingly, there is little if any reason for this journal or official groups such as the NCCLS or the IFCC to take positions different from IUPAC; rather, there are compelling reasons why such groups should adopt and support the slope definition of sensitivity.

## Footnotes

1393 BRWN Bldg., Department of Chemistry, Purdue University, West Lafayette IN 47907-1393. Fax 765-496-1200; e-mail pardue{at}chem.purdue.edu

- © 1997 The American Association for Clinical Chemistry