1. Introduction

Every laboratory that analyzes environmental water and wastewater samples for radionuclides should operate under a quality system, which establishes the basis for all of the laboratory’s quality assurance (QA) and quality control (QC) activities. The quality system sets forth requirements for the laboratory’s organizational framework, the structure and contents of its quality manual, document controls, method validation, QC of measurement processes, internal audits, documentation, and completed-record archives. Quality system standards commonly implemented at laboratories include Quality Systems for Radiochemical Testing (Volume 1, Module 6 of The TNI Standard)1 and General Requirements for the Competence of Testing and Calibration Laboratories2 (ISO/IEC 17025). The Manual for the Certification of Laboratories Analyzing Drinking Water—Criteria and Procedures, Quality Assurance3 addresses QA/QC requirements established by the US EPA for Safe Drinking Water Act (SDWA) compliance testing.

The laboratory’s quality manual includes the laboratory’s standard operating procedures (SOPs) by reference. It addresses default QC measures applicable to instrument calibration and performance checks, background measurements, and batch QCs for precision and bias (accuracy) of analytical measurements. Also essential is a manual of analytical methods (or at least copies of approved methods).

Analysts are trained in accordance with the quality manual to ensure that they are familiar with its contents and requirements. The quality manual must be readily accessible to them as they work in the laboratory.

2. QC Program Performance Criteria

Successful QC programs define acceptable, attainable performance criteria for precision and bias that both reflect the laboratory’s capabilities and are applicable to the programs for which analysis is performed.

Develop performance criteria based on

• data derived from the laboratory’s initial use of an analytical method or

• criteria set by the party requiring the water analyses (e.g., regulators).

Ensure that criteria are analysis- and matrix-specific. The criteria need to vary to meet specific project needs. For example, The NELAC Institute has compiled allowable statistical deviations for SDWA compliance-monitoring samples (see Table 7020:1). Use published values until enough data have been compiled to set experience-based criteria that reflect actual method performance at the laboratory.

Table 7020:1. Laboratory Precision—One Standard Deviation Values for Various Analyses in Safe Drinking Water Compliance Samplesa

Analyte Spike Level (µ)b Range Expected Mean Acceptable Standard Deviation (σ)
Gross alpha    7–75 pCi/L 0.8586 µ + 1.4802 0.1610 µ + 1.1366
Gross beta    8–75 pCi/L 0.8508 µ + 2.9725 0.0571 µ + 2.9372
Barium-133   10–100 pCi/L 0.9684 µ – 0.1424 0.0503 µ + 1.0737
Cesium-134   10–100 pCi/L 0.9369 µ + 0.0845 0.0482 µ + 0.9306
Cesium-137   20–240 pCi/L 1.0225 µ + 0.2624 0.0347 µ + 1.5185
Cobalt-60   10–120 pCi/L 1.0257 µ + 0.3051 0.0335 µ + 1.3315
Iodine-131    3–30 pCi/L 0.9711 µ + 0.8870 0.0624 µ + 0.6455
Radium-226    1–20 pCi/L 0.9253 µ + 0.3175 0.0942 µ + 0.0988
Radium-228    2–20 pCi/L 0.9243 µ + 0.2265 0.1105 µ + 0.3788
Strontium-89   10–70 pCi/L 0.9648 µ + 0.1591 0.0379 µ + 2.6203
Strontium-90    3–45 pCi/L 0.9369 µ + 0.2279 0.0902 µ + 0.5390
Tritium 1000–24000 pCi/L 0.9883 µ – 46.4776 0.0532 µ + 38.8382
Natural uranium (activity)    2–70 pCi/L 0.9568 µ + 0.0773 0.0700 µ + 0.2490
Uranium (mass)    3–104 µg/L 0.9568 µ + 0.1153 0.0700 µ + 0.3700
Zinc-65   30–360 pCi/L 1.0495 µ + 0.1245 0.0530 µ + 1.8271

aModified from: NELAC PT for Accreditation Fields of Proficiency Testing with PTRLs, Drinking Water. NELAC_DW_RAD_FOPT_Eff_2021_10_01-2.xls, effective 2022-10-01. The current limits for TNI Proficiency Testing for Safe Drinking Water Act Compliance are published at: https://www.nelac-institute.org/fopt.php. Limits are to be updated in 2022.

bAcceptable limits are µ ± 2σ.

3. Minimum QC Program

Establish and implement QC requirements for the laboratory that are consistent with the data quality objectives (DQOs) and measurement quality objectives (MQOs) of the programs they support. (See MARLAP4 for definition and complete discussion of the concept and use of MQOs.) At a minimum, include in the laboratory’s QC program a number of integrated QC measures that address basic instrument performance, initial and ongoing calibrations, background measurement stability, and the precision and bias (accuracy) of each method (as measured using batch QC samples).

Typical batch QCs include

• negative controls (e.g., reagent or method blanks);

• positive controls [e.g., laboratory-fortified blanks (LFBs), laboratory control samples (LCSs), and matrix spikes (MSs)];

• reproducibility controls (e.g., sample or matrix-spike duplicates);

• sample-specific controls (e.g., acceptable range for tracer or carrier chemical yield, criteria for spectral resolution, agreement between peaks’ observed energy and accepted values, etc.); and

• instrument controls (e.g., controls on background subtraction count determinations, instrument performance checks, and calibration verifications).

Monitor such QCs using statistical QC and tolerance charts. Certain aspects of radiochemical instrument and background QCs are instrument or method specific; address these in individual methods and in the laboratory’s quality manual.

Prepare instrument QC charts4-6 by plotting a reference source’s results on a graph in which the abscissa is time and the ordinate is count rate, total counts, or another appropriate parameter. Determine the true (mean) value by averaging the results of at least 20 measurements with acceptable individual statistics. Insert lines parallel to the time axis at the mean value and the values for ±2 and ±3 standard deviations. When the reference source radionuclide is short-lived and subject to significant decay during the lifetime of the chart, apply corrections to the mean and limits or to measured results that account for the decay.
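Such chart parameters are easily derived in software. The following minimal Python sketch (function and variable names are illustrative, not part of this standard) computes the mean and the ±2- and ±3-standard-deviation limits from a series of reference-source results:

    import statistics

    def control_chart_limits(results):
        """Derive control chart parameters from repeated measurements
        (count rate, total counts, or another appropriate parameter)
        of the same reference source."""
        if len(results) < 20:
            raise ValueError("at least 20 acceptable measurements are recommended")
        mean = statistics.mean(results)
        s = statistics.stdev(results)  # sample standard deviation
        return {
            "mean": mean,
            "warning_limits": (mean - 2 * s, mean + 2 * s),  # +/-2 sigma
            "control_limits": (mean - 3 * s, mean + 3 * s),  # +/-3 sigma
        }

    # Example with 20 simulated daily check-source count rates (cpm):
    rates = [1000 + (i % 5 - 2) * 4 for i in range(20)]
    print(control_chart_limits(rates))

For short-lived reference radionuclides, decay-correct either the plotted results or the mean and limits, as noted above.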

Sometimes the statistically determined performance is better than the actual performance required to maintain control based on applicable DQOs and MQOs; if so, tolerance limits can be established as the control criterion. On the control chart, draw lines indicating the tolerance limits for the method’s specified performance requirements, along with lines indicating its statistical performance (e.g., ±3 standard deviations). As long as the method’s statistical performance falls within the tolerance limit, the tolerance chart supplies evidence that the measurement parameter in question is under control.

Interpret the QC chart data objectively. When a point falls outside established limits, determine whether the excursion is random or nonstatistical by running repeated measurements and applying statistical tests (e.g., a chi-square test).

Control charts provide evidence of trends and trigger investigation to determine whether a trend affects measurements significantly enough that applicable DQOs and MQOs cannot be attained. When such a change is detected, take corrective actions ranging from data inspection to instrument maintenance and recalibration.

Keep trending criteria as simple as possible while providing meaningful control for the related parameter. For example, multiple successive points less than one standard deviation on one side of the mean generally do not have a negative effect on DQOs/MQOs; thus, there is no need to require monitoring or corrective action for such a condition.

Unambiguously specify in the laboratory’s quality manual all trending and QC criteria, as well as applicable corrective actions and required documentation for each QC measure implemented.

a. Instrument QC: Count check sources for a predetermined time at frequencies defined in the laboratory quality manual (e.g., daily before each use for proportional counters, scintillation counters, and gamma spectrometers; monthly for alpha spectrometers). Record the count rate and plot it on the specific system’s control chart. Compare the value with the ±2-σ, ±3-σ, or appropriate tolerance value, and repeat the procedure if the ±2-σ values are exceeded. Corrective action is generally required whenever repeated values fall above or below the established statistical control or tolerance limits.

For instruments that produce spectra (e.g., gamma or alpha spectrometers), track such parameters as

• instrument response (count rate or total counts in a given spectral area, peak channel location, or the energy of key photopeaks in the spectrum);

• difference in channels between two specified peaks;

• resolution (channel peak width at specified peak height); and

• certain ratios (e.g., peak-to-Compton ratio).

It may or may not be necessary to track all of these routinely; see Section 7120 for specific recommendations. If the basic parameters are outside limits, it may be useful to evaluate additional parameters.

b. QC of instrument background: Perform a background subtraction count as often as the laboratory quality manual indicates (e.g., weekly or monthly). The duration of background subtraction measurements needs to be as long as or longer than the duration of any sample measurement from which they will be subtracted. For serial counters, such measurements are often performed with each batch of samples. Spectrometry measurements, on the other hand, often benefit from longer, less frequent counts (e.g., weekly or monthly) because these measurements produce more precise estimates of the background for specific peaks or regions of interest.

Check background-measurement results for each detection system as often as the laboratory quality manual indicates (e.g., weekly or monthly). Alternatively, background measurements may be checked with each batch of samples, or subtraction background measurements may be evaluated as background checks. At a minimum, background measurements should be frequent and sensitive enough to identify related issues significant enough to compromise sample results.

Monitor background counts via control charts. Such counts may change slightly over time without compromising data quality. If Poisson detection statistics are used to estimate counting uncertainty and detection statistics (see 7020 C), the assumption is that the background is stable between sample and background-subtraction counts, and that non-Poisson contributions to the background (e.g., electronic noise, variations in ambient background) are minimal. Compare the observed variability of the background (e.g., standard deviation) for more recent background counts to that predicted by Poisson statistics (which can be indicated via error bars for each value) to reveal excess uncertainty and indicate significant problems with the estimates of uncertainty and detection statistics. Establish appropriate evaluation criteria and corrective actions to be taken if backgrounds exceed established limits.4,5,7

Background QC is instrument specific. For example, for gamma spectrometry backgrounds, the QC checks and acceptance criteria needed depend on whether the detector is NaI(Tl) or high-purity germanium (see Section 7120).

c. Negative control samples (method and reagent blanks): A method or reagent blank is “a sample assumed to be essentially target-analyte-free that is carried through the radiochemical preparation, separation and measurement process in the same manner as a routine sample of a given matrix.”4 Analyze method or reagent blanks as batch QC samples frequently enough to detect deviations in measured radioactivity that represent levels of absolute bias that could compromise the use of associated results. (Absolute bias is a persistent deviation of the mean measured net activity of blank results from zero. See MARLAP manual for discussion.)

Unless otherwise required by the program, analyze blank samples at the frequencies specified in the quality manual for a given analysis (generally, one per batch or 5%, whichever is more frequent). However, this guideline can be varied to fit the situation. For example, a laboratory with a heavy workload and well-established blank-analysis program may find that analyzing blanks at a 5% rate (regardless of batch size) is sufficient to determine whether the data meet established criteria. On the other hand, if a test is run less frequently or there are significant concerns about potential contamination or background control, a higher blank-analysis rate might be better.

Ideally,

• the count rate of the blank is equal to the mean background count rate;

• the average of background-subtracted blank results is zero;

• the magnitude of the calculated uncertainty of the distribution is similar to that of the combined standard uncertainty (CSU) for the measurement; and

• the measured values for the activity of approximately 19 out of 20 blanks are less than the critical level.

Note: Recount blanks with activity exceeding the critical-level activity one time to determine whether the excursion is due to Poisson variability in the result. If the recount result is less than the critical-level activity, and the control chart for blanks does not reveal problems, there is no need to recount associated samples in the batch (see 7020 C). Define evaluation and acceptance criteria—and associated corrective actions—in the laboratory’s quality manual and ensure that they are consistent with the data quality requirements of the programs they support.

d. QC for precision: The IUPAC Gold Book defines precision as the “closeness of agreement between independent test results obtained by applying the experimental procedure under stipulated conditions. Precision may be expressed as the standard deviation.”8 The laboratory’s internal precision in performing an analytical procedure can be evaluated by analyzing duplicate aliquots of one or more samples or QC samples with each batch of samples.

Submit duplicate samples to analysts on a single- or double-blind basis to effect such an evaluation. (In single-blind analysis, the analyst is aware that a duplicate sample is present. In double-blind analysis, the analyst is ideally unaware that any of the samples is a QC sample.) Duplicate samples can have either detectable or nondetectable amounts of radioactivity. Use statistical treatment of duplicate results to evaluate control of analytical precision and the adequacy of uncertainty estimates in both the presence and absence of activity.

Unless otherwise required by the program, analyze duplicate samples at the minimum rate specified in the quality manual (generally 1 duplicate sample per batch or 5%, whichever is more frequent). Vary this guideline to fit the situation. For example, SDWA regulations stipulate more frequent duplicate analyses (e.g., 10%). Similarly, a laboratory with a heavy workload and a well-established duplicate-analysis program may find that analyzing duplicates at a 5% rate is sufficient to determine whether the data meet established criteria. On the other hand, if a test is run less frequently or there are significant concerns about the reproducibility of the matrix, a higher duplicate-analysis rate would be prudent.

For results above the detection threshold where counting-related uncertainty is small (e.g., ten times the critical level activity), the absolute difference between duplicate measurements should be less than 3 times the acceptable standard deviation for the specific analysis (see Table 7020:1):

|ACs − ACdup| < 3 × u(ACtarget)

where:

ACs = sample result (e.g., pCi/L),

ACdup = duplicate sample result (e.g., pCi/L), and

u(ACtarget) = the targeted combined standard uncertainty of sample result (k = 1).

A more robust measure, the normalized absolute difference (NAD) between 2 results, evaluates statistical agreement between two results of any magnitude based on their respective uncertainties:

NAD = |ACs − ACdup| / √[u²(ACs) + u²(ACdup)]

where:

NAD = normalized absolute difference statistic,

u2(ACs) = the square of the combined standard uncertainty of sample result (k = 1), and

u2(ACdup) = the square of the combined standard uncertainty of duplicate sample result (k = 1).

Note: MARLAP refers to NAD as Zrep. It has also been called the duplicate error ratio (DER) and replicate error ratio (RER).

If the NAD exceeds a value of 3 (the control limit), conclude that the 2 results differ by more than 3 standard deviations. Examine calculations, control charts, and procedures. Take action as defined in the laboratory’s quality manual, including reanalyzing samples affected by the problem unless there are indications that sample nonhomogeneity prevents reproducible laboratory subsampling (e.g., presence of hot particles in the sample). The NAD is expected to exceed a value of 2 (indicating that the 2 results differ by more than 2 standard deviations) in approximately 5% of all measurements. (Note: Two standard deviations correspond to the 95.4% confidence level; expect approximately 1 in 20 results to exceed this criterion. Three standard deviations correspond to the 99.7% confidence level; expect approximately 3 in 1000 measurements to exceed this criterion.) Repeated excursions above the warning limit that exceed the expected 5% rate indicate a problem in the analytical system. In such cases, take corrective actions similar to those performed when the control limit is exceeded.

Relative percent difference (RPD) is also used to evaluate duplicate results. This measure does not take uncertainty into account, but it does provide useful information for programs, such as SDWA compliance testing, where only Poisson uncertainty of results is reported:

RPD = |ACS − ACdup| / [(ACS + ACdup)/2] × 100%

where:

RPD = relative percent difference,

ACS = sample result (e.g., pCi/L), and

ACdup = duplicate sample result (e.g., pCi/L).
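The duplicate statistics defined above are straightforward to compute. A minimal Python sketch (function and variable names are illustrative, not part of this standard):

    import math

    def nad(ac_s, ac_dup, u_s, u_dup):
        """Normalized absolute difference (Zrep) between duplicate results,
        using each result's combined standard uncertainty (k = 1)."""
        return abs(ac_s - ac_dup) / math.sqrt(u_s**2 + u_dup**2)

    def rpd(ac_s, ac_dup):
        """Relative percent difference; takes no account of uncertainty."""
        return abs(ac_s - ac_dup) / ((ac_s + ac_dup) / 2) * 100

    # Example: duplicates of 10.2 and 8.7 pCi/L, each with u = 0.6 pCi/L:
    print(nad(10.2, 8.7, 0.6, 0.6))  # ~1.8, below the warning limit of 2
    print(rpd(10.2, 8.7))            # ~15.9%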

Define all evaluation and acceptance criteria, and associated corrective actions, in the laboratory quality manual; ensure that requirements are consistent with the DQOs and MQOs of the programs they support. More detailed discussions of QC statistics for duplicates are given elsewhere.3-5

e. Positive controls (LFBs and matrix spikes): Analytical methods are said to be in control when they produce precise, unbiased results. Two QC measures are commonly used to assess the ongoing accuracy of analytical methods: LFBs and matrix spikes. Define these QC measures and their evaluation in the laboratory quality manual.

1) Laboratory-fortified blanks (also called laboratory control samples)—Unless otherwise required by the program, LFBs consist of demineralized water spiked with a known activity of a traceable radiostandard. LFBs are generally analyzed at a frequency of 1 per batch of 20 or fewer samples. The LFB is treated and analyzed using the same processes as all the other samples in the batch. Results are compared to known values based on percent recovery:

% Recovery = (observed result / known result) × 100

where:

% Recovery = ratio of observed to known values (%),

observed result = measured result [expressed in units used to report associated samples (e.g., pCi/L)], and

known result = theoretical result calculated from the spike added [expressed in the same units used to report associated samples (e.g., pCi/L)].

Note: Calculate the result using an aliquot size similar to those used for the associated samples.

2) Matrix spikes—The matrix spike consists of a duplicate aliquot of one batch sample that has been spiked with a known activity of a traceable radiostandard. The purpose of a matrix spike is to establish whether the method or procedure is appropriate for the analysis of the particular matrix. Unless otherwise required by the program, and as specified in the specific method, prepare 1 matrix spike per batch of 20 or fewer samples for each method in which no chemical yield carrier or tracer is added to samples. Process the sample with the rest of the batch, and evaluate results based on percent recovery:

% MS Recovery = [(ACMS − ACSmp) × VMS / ASpk] × 100

where:

% MS Recovery = percent recovery of the activity added to the matrix spike sample,

ACMS = measured activity in matrix spike (e.g., pCi/L),

ACSmp = measured activity in original sample (e.g., pCi/L),

ASpk = activity added to matrix spike (e.g., pCi), and

VMS = volume of matrix spike sample aliquot taken for analysis (e.g., L).
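For illustration, a minimal Python sketch of both recovery calculations (names are ours, not part of this standard); it assumes the spike activity and aliquot volume are known exactly:

    def lfb_percent_recovery(observed, known):
        """LFB recovery: observed vs. known result, in the same units
        used to report associated samples (e.g., pCi/L)."""
        return observed / known * 100

    def ms_percent_recovery(ac_ms, ac_smp, a_spk, v_ms):
        """Matrix-spike recovery.
        ac_ms:  measured concentration in the matrix spike (pCi/L)
        ac_smp: measured concentration in the original sample (pCi/L)
        a_spk:  activity added to the matrix spike (pCi)
        v_ms:   volume of the matrix-spike aliquot (L)"""
        return (ac_ms - ac_smp) * v_ms / a_spk * 100

    # Example: 1-L aliquot spiked with 10 pCi; original sample = 2.0 pCi/L,
    # matrix spike measures 11.5 pCi/L:
    print(ms_percent_recovery(11.5, 2.0, 10.0, 1.0))  # 95.0%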

3) Evaluation and control charting of LFB and matrix spike results—Establish evaluation and acceptance criteria for LFBs and matrix spikes in the laboratory’s quality manual. Plot data from these standards or known-addition samples on mean or tolerance control charts3,5,9 and determine whether results are within established limits (or if trends indicate problematic changes in the analytical system). If established control limits are exceeded, investigate results for potential blunders, method bias, or excess uncertainty, and take corrective action to address the root cause and prevent recurrence of the problem, up to and including selection of a method appropriate to processing the matrix in question.

f. Sample-specific QC measures: Additional QC measures are routinely used to assess the quality of radiochemical measurements on a sample-by-sample basis. While batch QCs affect the status of a batch of samples, sample-specific QC only affects the sample in question [unless it is a batch QC sample (e.g., blank, duplicate sample, LFB, or matrix spike)]. Sample-specific QC includes controls on chemical yield or spectral resolution [i.e., lower and upper limits for chemical yield and full-width-at-half-maximum (FWHM) of observed peaks in spectral results]. Define evaluation and acceptance criteria, and associated corrective actions in the laboratory quality manual and ensure that requirements are consistent with the DQOs of the programs they support.

g. External proficiency-testing programs: Participation in external proficiency-testing programs ensures that laboratories’ measurement systems are comparable. For example, to be certified in the United States by EPA (or an agreement state) to perform drinking water compliance analyses under the SDWA, a laboratory must participate annually in a proficiency testing (PT) program administered by a nationally accredited PT sample provider. To be accredited as a NELAP laboratory, a laboratory must participate every 6 months in a PT program administered by a nationally accredited PT sample provider. Acceptable performance for each parameter for which the laboratory is (or seeks to be) certified is demonstrated by successful analysis of at least 2 of the most recent 3 proficiency samples. Control limits have been established by The NELAC Institute. Performance is evaluated by the PT provider. (See Table 7020:1.)

Laboratories are also encouraged to participate in additional intercomparison programs [e.g., the Mixed Analyte Performance Evaluation Program (MAPEP)10 and those sponsored by the International Atomic Energy Agency (IAEA)11 and World Health Organization (WHO)].

h. Selection of radionuclide standards and sources: Confirm that radionuclide standard sources are traceable to the SI through a national metrology institute such as the National Institute of Standards and Technology (NIST) in the US.

Use such standard sources, or dilutions thereof, for initial calibration. Purchase these sources from suppliers listed in annually published buyers’ guides. Before purchasing standards from commercial suppliers, it is important to specifically inquire whether the radionuclide standard of interest is traceable to the SI through a national metrology institute (e.g., NIST). Also, consider whether the chemical form and radionuclide purity are adequate.4

Calibrated radionuclide standard solutions are generally prepared for storage and shipment by the supplier in flame-sealed glass ampoules. Perform all dilutions and store radionuclides in well-sealed containers. Only use containers constructed of material that does not adsorb the radionuclides of interest.

Standard sources are radioactive sources with adequate activity and an accurately known radionuclide content and radioactive decay rate (rate of particle or photon emission). Ensure that each radionuclide standard has a calibration certificate containing the following information:

Description:

Principal radionuclide or parameter

Chemical and physical form

Solvent (where applicable)

Carrier identity and concentration

Mass and specific gravity or volume

Standardization:

Activity per mass or volume

Date and time

Method of standardization

Accuracy:

Combined standard uncertainty

Coverage factor/confidence level

Purity:

Activity of decay progeny

Identity and concentration of other impurities and whether they are included in principal activity

Assumptions:

Decay scheme

Half-life and uncertainty

Equilibrium ratios

Production:

Production method

Date of separation

Usable lifetime/expiration date

After preparation and before use, verify the activity concentration of liquid radioactivity standards and tracers by comparing them to a standard obtained from a second, independent source (where possible). Also, verify that levels of impurities meet the requirements of the method in question. Define verification procedures and acceptance criteria in the laboratory’s quality manual.

Use instrument check sources to demonstrate the continuing stability of the instrument and, by inference, existing calibrations, by evaluating key operating parameters (e.g., checks of detector response and/or energy calibration). Use check sources that

• are of sufficient radiochemical purity;

• contain radionuclides that are sufficiently long-lived to remain usable over the period of intended use; and

• have high enough activity to minimize counting uncertainty.

Check sources, however, need not have known disintegration rates (i.e., need not be a traceable standard source).

Standard reference materials are radioactive materials with accurately known radionuclide content or radioactive decay rate (rate of particle decay or radiation emission). They are used as internal LCSs, internal tracers, or matrix and blind known additions. They provide traceability to the SI through national metrology institutes such as NIST in the US.

Generally, it is difficult to perform collaborative (interlaboratory) analyses of real-matrix wastewater samples because the composition of elements and solids varies from one facility to the next. The methods included herein have been evaluated via homogeneous samples. They presume that nonhomogeneous samples have been homogenized to ensure that any subsamples taken are representative of the sample delivered to the laboratory. Although reference samples used for collaborative testing may not contain radionuclides exhibiting interferences (e.g., short-lived radionuclides would decay during preparation and shipment), include in analytical methods critical steps to eliminate interferences that may be present in samples for which the method is applicable.

Section 1010 B, which discusses statistics applicable to chemical analyses, also applies to radioactivity examinations. However, certain statistical concepts particular to radioactivity measurements are discussed below.

1. Propagation of Uncertainty

It is generally necessary to calculate the uncertainty of a value that is not the result of a single, direct measurement but rather is derived from a number of measurements via a mathematical formula. The uncertainty of each direct measurement is either known or can be computed or otherwise estimated. The combined uncertainty of each calculated result may then be derived mathematically from them. In statistics, this is known as propagation of uncertainty.

The propagation of uncertainty typically involves combining the sources of uncertainty associated with determining the concentration of radionuclides in environmental samples (e.g., soil, air, milk, or water). These ideally include uncertainty for each measurement performed at the laboratory (e.g., sample preparation and subsampling, chemical separations and preparation of the sample test source, and analysis); they do not, however, reflect uncertainty associated with sampling before receipt at the laboratory. Consider any variable that contributes uncertainty to the final measurement when evaluating the overall uncertainty of the measurement.

Hereafter, the following notation is used in formulas to express uncertainties. The symbol u(x) denotes the standard uncertainty of variable x, whereas u2(x) denotes the square of the standard uncertainty of x. Similarly, ucC(x) denotes the standard uncertainty of x due only to counting (i.e., count uncertainty) while uc(x) denotes the combined standard uncertainty of x, which reflects other sources of uncertainty beyond just count uncertainty.

The law of propagation of uncertainty is applicable because variance is an additive property. According to the propagation of uncertainty formula, when noncorrelated quantities are added or subtracted, the total variance equals the sum of the variances of all components that contribute to the measurement’s overall uncertainty:

u²(A) = u²(X) + u²(Y) + …

Thus, the combined standard uncertainty (CSU) for a measurement, uc(A), is simply the square root of the variance:

uc(A) = √[u²(X) + u²(Y) + …]

Note: A number of propagation of uncertainty formulas can be used to determine uncertainties for radionuclide concentrations in water (see Table 7020:2). The one most widely used in nuclear-counting statistics is the first formula, where X = the total number of counts observed when measuring the sample and Y = the background counts. The propagation and reporting of uncertainty are dealt with in great detail elsewhere.1-3

Table 7020:2. Propagation-of-Uncertainty Formulas

Model Equation Uncertainty Expression
Q = X + Y or Q = X − Y   u(Q) = √(σX² + σY²)
Q = aX + bY or Q = aX − bY   u(Q) = √(a²σX² + b²σY²)
Q = XY   u(Q) = XY √[(σX²/X²) + (σY²/Y²)]
Q = X/Y   u(Q) = (X/Y) √[(σX²/X²) + (σY²/Y²)]
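The formulas in Table 7020:2 translate directly into code. A minimal Python sketch (illustrative only), applying the first formula to Poisson-distributed gross and background counts:

    import math

    def u_sum_or_difference(u_x, u_y, a=1.0, b=1.0):
        """u(Q) for Q = aX +/- bY, with X and Y uncorrelated."""
        return math.sqrt((a * u_x)**2 + (b * u_y)**2)

    def u_product_or_ratio(q, x, y, u_x, u_y):
        """u(Q) for Q = X*Y or Q = X/Y, with X and Y uncorrelated."""
        return abs(q) * math.sqrt((u_x / x)**2 + (u_y / y)**2)

    # Net counts Q = X - Y, with Poisson uncertainties sqrt(X) and sqrt(Y):
    x, y = 2500.0, 400.0
    print(u_sum_or_difference(math.sqrt(x), math.sqrt(y)))  # sqrt(2900) ~= 53.9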
2. Standard Deviation and Counting Uncertainty

A measurement’s variability is described by the standard deviation, which is most accurately estimated by replicate measurements of the quantity in question. Because replicate measurements of each quantity are impractical for most routine sample measurements, the standard uncertainty associated with measurements is often estimated based on knowledge of the characteristics of the quantity being measured. For example, a lower boundary on the most significant source of uncertainty for very low-activity measurements, often called counting uncertainty, can be estimated based on the inherent variability of radioactivity measurements due to the random nature of radioactive decay, which is described by the Poisson distribution. In the ideal case, the standard deviation (uncertainty) of this distribution for a large number of events (N) equals its square root, or

u(N) = √N

When the number of counts in a measurement (N) is more than about 20, the Poisson distribution increasingly approximates the normal (Gaussian) distribution. For the measurement of radioactivities very close to background, when there are no other significant non-Poisson contributions to the count uncertainty, this approximation is used to simplify the computation of standard counting uncertainties and thus may facilitate reasonably accurate estimates of the uncertainty of results without needing to perform replicate counts. Ensure that this condition applies by evaluating checks of the background stability, as described in 7020 A.3b. If other sources of uncertainty are present, account for their contribution to the overall uncertainty when estimating measurement uncertainties. See the MARLAP Manual, Chapter 20.

More often, the variable of concern is the standard deviation of the count rate, R (counts per unit time), for a single measurement:

R = N / t

where:

t = the duration of the count.

The standard deviation of the count rate, when the appropriate substitutions are made, can be expressed as follows:

u(R) = √N / t = √(R / t)

In practice, all counting instruments have a background counting rate (RB) when no sample is present. With a sample present yielding a gross count rate RS, the net count rate (Rn) due to the sample is:

Rn = RS − RB

Via uncertainty propagation methods, the standard deviation of Rn (the sample’s net count rate) is calculated as follows:

u(Rn) = √(RS/tS + RB/tB)

where:

RS = the count rate of the sample,

RB = the background subtraction count rate,

tS = elapsed duration of the sample count, and

tB = elapsed duration of the background subtraction count.

The duration of the count for a given set of conditions depends on the detection limit required (see below). The count time can be divided into equal periods to check the constancy of the observed count rate. For low-level counting, where the net count rate is of the same order of magnitude as the background, ensure that the duration of the background subtraction count is as long as, or longer than, the sample count to which it will be applied. The uncertainty thus calculated, however, reflects a minimum value for the uncertainty associated with the inherent variability of the radioactive disintegration process. In most cases, it is smaller than the standard deviation of the total analysis. At or near background, the counting uncertainty is the major contributor to the total or combined uncertainty. As concentration levels increase, the relative counting uncertainty decreases and other sources of uncertainty become major contributors to the measurement’s combined uncertainty.
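A minimal Python sketch of the net-count-rate uncertainty just derived (names are illustrative, not part of this standard):

    import math

    def u_net_count_rate(r_s, r_b, t_s, t_b):
        """Poisson standard deviation of the net count rate Rn = RS - RB,
        for sample and background count durations t_s and t_b (min)."""
        return math.sqrt(r_s / t_s + r_b / t_b)

    # Example: 12 cpm gross over 100 min; 10 cpm background over 200 min:
    print(u_net_count_rate(12.0, 10.0, 100.0, 200.0))  # ~0.41 cpm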

To calculate the expanded uncertainty, U(AC), multiply the standard (1σ) uncertainty by a coverage factor (k) appropriate to the desired confidence level:

U(AC) = k × u(AC)

where:

U(AC) = expanded combined uncertainty of the activity concentration,

k = coverage factor (e.g., 1.96 for 95% confidence), and

u(AC) = standard uncertainty of the activity concentration.

Report all radioactivity measurements with their associated CSU (X.XX ± k × CSU) in appropriate activity concentration units. Clearly report the respective value for the coverage factor for the uncertainty (k) or the confidence interval with the reported uncertainty.

3. Detection Limits

a. Detection decisions: In analyte detection, the critical value is defined in the MARLAP Manual glossary as:

“The minimum measured value (e.g., of the instrument signal or the analyte concentration) required to give confidence that a positive (nonzero) amount of analyte is present in the material analyzed. The critical value is sometimes called the critical level or decision level.”1

Various conventions have been used to estimate the critical value. The procedure recommended here is based on hypothesis testing.1,5 If there are no interferences and the observed uncertainty for a representative blank approximates the Poisson uncertainty, then the critical value is estimated as follows:

Lc = Kα √(RB/tS + RB/tB)

where:

Kα = value for the upper percentile of the standard normal distribution corresponding to a preselected risk, α, of concluding falsely that activity is present when it is not,

NB = total counts during the counting period of a representative blank or background (counts),

tB = duration of the count of the representative blank or background (min),

tS = duration of the count of a sample (min), and

RB = count rate of a representative blank or background (cpm).

Generally, for the 95% confidence level (α = 0.05 and Kα = 1.645), the amount of radioactivity needed to provide confidence that a positive (nonzero) amount of analyte detected is actually present in the analyzed material is

Lc = 1.645 √(RB/tS + RB/tB)

When the sample and background count durations are equal (tS = tB = t), the above equation simplifies to

Lc = 2.33 √NB / t

This formulation applies only to situations in which a large number of background counts are observed during the counting period (e.g., > 100 counts). For formulations that address, for example, situations commonly encountered during alpha counting when < 100 background counts are observed during the counting period, see Chapter 20 of the MARLAP Manual. The manual also addresses how to accommodate uncertainty in excess of that predicted by Poisson counting uncertainty.

In practice, the critical level is reported as a concentration in the same units as the sample result. To convert Lc to the critical level concentration, Lc (pCi/L or pCi/g), divide Lc by the same factors used to convert the net count rate to activity concentration. Thus, the critical level concentration is calculated as:

Lc (pCi/L or pCi/g) = Lc / (ε × A × Y × I × D × V × F)

where:

Lc(pCi/L or pCi/g) = critical level concentration (pCi/L or pCi/g)

Lc = critical level in counts per minute (cpm),

ε = counting efficiency (cpm/decay particle),

A = abundance (decay particles per disintegration),

Y = chemical yield (fractional),

I = correction factor for ingrowth (fractional),

D = correction for decay (fractional),

V = volume or mass of sample aliquot (e.g., L or g), and

F = factor to convert dpm to desired reporting units (e.g., 2.22 dpm/pCi).

This is a generalized formula. Please refer to equations defined in the specific methods.
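As an illustration of the two steps above, a minimal Python sketch under the stated Poisson assumptions (names are ours; the method-specific equations govern in practice):

    import math

    def critical_level_cpm(r_b, t_s, t_b, k_alpha=1.645):
        """Critical level Lc (cpm), assuming Poisson background,
        > ~100 background counts, and no interferences."""
        return k_alpha * math.sqrt(r_b / t_s + r_b / t_b)

    def cpm_to_concentration(cpm, eff, abund, yld, ingrowth, decay, vol, f=2.22):
        """Divide a count rate (cpm) by eff * A * Y * I * D * V * F
        to obtain a concentration (e.g., pCi/L)."""
        return cpm / (eff * abund * yld * ingrowth * decay * vol * f)

    # Example: 1.5 cpm background, 100-min sample and background counts,
    # 25% efficiency, 90% yield, 1-L aliquot:
    lc_cpm = critical_level_cpm(1.5, 100.0, 100.0)
    print(cpm_to_concentration(lc_cpm, 0.25, 1.0, 0.9, 1.0, 1.0, 1.0))  # ~0.57 pCi/L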

b. Minimum detectable activity: The minimum detectable activity (MDA) is the smallest amount of radioactivity in a sample that yields a net count for which there is a predetermined level of confidence that radioactivity will be detected. Two types of errors occur when making detection decisions: Type I (false detection) and Type II (false nondetection).

Various conventions have been used to estimate the MDA. Concepts such as the lower limit of detection (LLD) are based on equivalent concepts, although the specific derivation may vary. This discussion will focus on the MDA and the minimum detectable concentration (MDC).

The recommended procedure for estimating MDA is based on hypothesis testing.1,5 If there are no interferences, and the observed uncertainty for a representative blank approximates the Poisson uncertainty, approximate the MDA (in net cpm) as follows:

MDA = Kβ²/tS + (Kα + Kβ) √(RB/tS + RB/tB)

where:

Kβ = value for the upper percentile of the standard normal distribution corresponding to the predetermined degree of confidence for detecting the presence of activity (1 – β).

If count durations for the sample and representative blank or background are unequal, and both Type I and II error rates are set to 5% (α = β = 0.05 and Kα = Kβ = 1.645), then the MDA is:

MDA = 2.71/tS + 3.29 √(RB/tS + RB/tB)

When the durations of both sample and representative blank or background counts are equal (tS = tB = t), this simplifies to:

MDA = (2.71 + 4.65 √NB) / t

This formulation applies to situations in which more than approximately 100 background counts will be observed during the counting period. For situations in which fewer background counts will occur (e.g., alpha counting or very short counting intervals) and for recommendations on accommodating uncertainty in excess of that predicted by the Poisson counting uncertainty, see Chapter 20 of the MARLAP Manual.

In practice, MDA is reported in terms of concentration in the same units as the sample result. To convert MDA to MDC, divide MDA by the same factors used to convert the net count rate to activity concentration. Thus calculated, the MDC is:

MDC = MDA / (ε × A × Y × I × D × V × F)

where:

MDC = minimum detectable concentration (pCi/L or pCi/g)

MDA = minimum detectable activity in net counts per minute (cpm)

ε = counting efficiency (cpm/decay particle),

A = abundance (decay particles per disintegration),

Y = chemical yield (fractional),

I = correction factor for ingrowth (fractional),

D = correction for decay (fractional),

V = volume or mass of sample aliquot (e.g., L or g), and

F = factor to convert dpm to desired reporting units (e.g., 2.22 dpm/pCi).

This is a generalized formula. Please refer to equations defined in the specific methods.
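A companion sketch for the equal-risk (α = β = 0.05) case above (illustrative only; it reuses cpm_to_concentration from the previous sketch for the MDC conversion):

    import math

    def mda_net_cpm(r_b, t_s, t_b, k=1.645):
        """MDA in net cpm for alpha = beta, assuming Poisson background
        and > ~100 background counts."""
        return k**2 / t_s + 2 * k * math.sqrt(r_b / t_s + r_b / t_b)

    # With t_s = t_b = 100 min and RB = 1.5 cpm, this reduces to
    # (2.71 + 4.65 * sqrt(NB)) / t:
    print(mda_net_cpm(1.5, 100.0, 100.0))  # ~0.60 net cpm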

c. SDWA detection limit: When radioactivity concentrations are monitored to determine compliance with the SDWA, the required detection capability is defined in terms of the detection limit [“that concentration which can be counted with a precision of plus or minus 100 percent at the 95 percent confidence level (1.96σ, where σ is the standard deviation of the net counting rate of the sample)”].6 Setting the net count rate equal to 1.96 times its own standard deviation and solving yields the detection limit, calculated as4:

DLDW = [1.96²/(2tS) + 1.96 √(1.96²/(4tS²) + RB/tS + RB/tB)] / (ε × A × Y × I × D × V × F)

where:

DLDW = SDWA detection limit, in units of activity per mass or volume,

RB = count rate for background subtraction count (cpm),

tS = count duration for the sample (min),

tB = count duration for the background subtraction count (min),

ε = counting efficiency (cpm/decay particle),

A = abundance (decay particles per disintegration),

Y = chemical yield (fractional),

I = correction factor for ingrowth (fractional),

V = volume or mass of sample aliquot (e.g., L or g),

D = correction for decay (fractional), and

F = factor for converting from dpm to desired reporting units (e.g., 2.22 dpm/pCi).

This is a generalized formula. Please refer to equations defined in the specific methods.
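A sketch of this calculation in net count rate terms (the closed form follows from setting the net count rate equal to 1.96 times its standard deviation; names are ours):

    import math

    def sdwa_dl_net_cpm(r_b, t_s, t_b):
        """Net count rate at which 1.96 * u(Rn) equals Rn itself
        (+/-100% precision at the 95% confidence level)."""
        a = 1.96**2
        return a / (2 * t_s) + 1.96 * math.sqrt(a / (4 * t_s**2) + r_b / t_s + r_b / t_b)

    # Example: 1.5 cpm background, 100-min counts; divide the result by
    # eff * A * Y * I * D * V * F to express DL in pCi/L:
    print(sdwa_dl_net_cpm(1.5, 100.0, 100.0))  # ~0.36 net cpm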

The results of radioactivity analyses usually are reported in terms of activity per unit volume or mass. The SI unit for activity is the becquerel (Bq), which is equal to 1 disintegration per second. In the United States, a commonly used unit for reporting environmental concentrations is the picocurie (pCi), which is equal to 0.037 Bq, 2.22 disintegrations per minute (dpm), or 1 × 10–6 microcuries (µCi).

Specific formulas for calculating activity per volume or mass are presented in the individual methods; while each is specific to the method, they are based on the following general formula:

AC = (RS − RB) / (ε × A × Y × I × D × V × F)

where:

AC = activity concentration, in units of activity per mass or volume,

RS = count rate for the sample count (cpm),

RB = count rate for background subtraction count (cpm),

ε = counting efficiency (cpm/decay particle),

A = abundance (decay particles per disintegration),

Y = chemical yield (fractional),

I = correction factor for ingrowth (fractional),

V = volume or mass of sample aliquot (e.g., L or g),

D = correction for decay (fractional), and

F = factor for converting from dpm to desired reporting units (e.g., 2.22 dpm/pCi).

Always report every result with an estimate of its measurement uncertainty [i.e., the combined standard uncertainty (CSU)], generally in terms of absolute activity concentration (pCi/L or Bq/kg). Because the CSU determines how many digits of a result are meaningful, round each result based on its uncertainty. Generally, report uncertainty to 1 or 2 significant figures; adjust the magnitude of the measured activity to match the final significant digit of the uncertainty. For example, if the uncertainty is reported to 2 significant figures, report a measured result of 0.12345 ± 0.06789 as 0.123 ± 0.068. Similarly, report 12 345 ± 6789 as 12 300 ± 6800.
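The rounding rule above can be automated. A minimal Python sketch (illustrative only):

    import math

    def round_to_uncertainty(value, csu, sig_figs=2):
        """Round the CSU to sig_figs significant figures, then round the
        value to the same final decimal place."""
        if csu <= 0:
            return value, csu
        digits = sig_figs - 1 - math.floor(math.log10(csu))
        return round(value, digits), round(csu, digits)

    print(round_to_uncertainty(0.12345, 0.06789))  # (0.123, 0.068)
    print(round_to_uncertainty(12345, 6789))       # (12300, 6800)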

In general, never censor results via comparison to an MDC, LLD, or critical value; instead, report positive, zero, and negative results as calculated (i.e., do not report “not detected,” “ND,” or “less than” values).

Program requirements often dictate the data-reporting format, however. For example, SDWA compliance-testing regulations require that only the counting uncertainty be reported. In the absence of such requirements, report radiochemical data for other programs with the estimate of the result’s CSU, as described above.

The following formula illustrates calculation of the (expanded) counting uncertainty at the 95% confidence level, UcC(AC):

UcC(AC) = 1.96 √(RS/tS + RB/tB) / (ε × A × Y × I × D × V × F)

where:

tS = count duration for the sample (min),

tB = count duration for the background subtraction count (min), and RS, RB, ε, AC, A, Y, I, V, D, and F are as previously defined.
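Putting the general formula and its counting uncertainty together, a minimal Python sketch (names are illustrative; the method-specific equations govern in practice):

    import math

    def activity_concentration(r_s, r_b, eff, abund, yld, ingrowth,
                               decay, vol, f=2.22):
        """AC = (RS - RB) / (eff * A * Y * I * D * V * F), e.g., pCi/L."""
        return (r_s - r_b) / (eff * abund * yld * ingrowth * decay * vol * f)

    def expanded_counting_uncertainty(r_s, r_b, t_s, t_b, eff, abund, yld,
                                      ingrowth, decay, vol, f=2.22, k=1.96):
        """Expanded (95%) Poisson counting uncertainty of AC."""
        u_rn = math.sqrt(r_s / t_s + r_b / t_b)
        return k * u_rn / (eff * abund * yld * ingrowth * decay * vol * f)

    # Example: 12 cpm gross (100 min), 10 cpm background (200 min),
    # 25% efficiency, 90% yield, 1-L aliquot:
    ac = activity_concentration(12.0, 10.0, 0.25, 1.0, 0.9, 1.0, 1.0, 1.0)
    u = expanded_counting_uncertainty(12.0, 10.0, 100.0, 200.0,
                                      0.25, 1.0, 0.9, 1.0, 1.0, 1.0)
    print(f"{ac:.2f} +/- {u:.2f} pCi/L (k = 1.96)")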

1. The TNI Standard: Management and Technical Requirements for Laboratories Performing Environmental Analysis, Vol. 1, Module 6, Quality Systems for Radiochemical Testing. Weatherford (TX): The NELAC Institute; 2016 [accessed 2022 January 17]. https://nelac-institute.org/content/CSDP/standards.php?ap3=1_2

2. International Organization for Standardization; International Electrotechnical Commission. ISO/IEC 17025, General requirements for the competence of testing and calibration laboratories. Geneva, Switzerland; 2017 [accessed 2021 April 25]. https://www.iso.org/publication/PUB100424.html

3. Manual for the Certification of Laboratories Analyzing Drinking Water—Criteria and Procedures, Quality Assurance, 5th ed.; EPA 815-R-05-004. Cincinnati (OH): U.S. Environmental Protection Agency; 2005.

4. U.S. Environmental Protection Agency, U.S. Department of Defense, U.S. Department of Energy, U.S. Department of Homeland Security, U.S. Nuclear Regulatory Commission, U.S. Food and Drug Administration, U.S. Geological Survey, National Institute of Standards and Technology. Multi-Agency Radiological Laboratory Analytical Protocols Manual (MARLAP); NUREG-1576, EPA 402-B-04-001A, NTIS PB2004-105421; 2004.

5. Kanipe LG. Handbook for analytical quality control in radioanalytical laboratories; E-EP/77-4, TVA; EPA-600/7-77-038, Section 3. Washington DC: U.S. Environmental Protection Agency, Office of Research and Development; 1977.

6. ANSI N42.22-1995 (R2002). American National Standard—Traceability of radioactive sources to the National Institute of Standards and Technology (NIST) and associated instrument quality control. New York (NY): American National Standards Institute; 2002.

7. ASTM D7282. Standard practice for set-up, calibration, and quality control of instruments used for radioactivity measurements. West Conshohocken (PA): ASTM International; 2006.

8. International Union of Pure and Applied Chemistry. Compendium of chemical terminology: IUPAC recommendations, 2nd ed. (The Gold Book). McNaught AD, Wilkinson A, editors. Oxford, United Kingdom: Blackwell Science; 1997.

9. National Council on Radiation Protection and Measurements. A Handbook of Radioactivity Measurements Procedures; NCRP Report No. 58. Bethesda (MD): NCRP; 1985.

10. Mixed Analyte Performance Evaluation Program (MAPEP). Idaho Falls (ID): U.S. Department of Energy, Idaho Operations Office, Radiological and Environmental Sciences Laboratory.

11. International Atomic Energy Agency, Analytical Quality Control Services, Seibersdorf, P.O. Box 100, A-1400 Vienna, Austria.

1. U.S. Environmental Protection Agency, U.S. Department of Defense, U.S. Department of Energy, U.S. Department of Homeland Security, U.S. Nuclear Regulatory Commission, U.S. Food and Drug Administration, U.S. Geological Survey, National Institute of Standards and Technology. Multi-Agency Radiological Laboratory Analytical Protocols Manual (MARLAP); NUREG-1576, EPA 402-B-04-001A, NTIS PB2004-105421; 2004.

2. Joint Committee for Guides in Metrology. Evaluation of measurement data—Guide to the expression of uncertainty in measurement (GUM 1995 with minor corrections); JCGM 100:2008. Sèvres, France: International Bureau of Weights and Measures; 2008.

3. Taylor BN, Kuyatt CE. Guidelines for evaluating and expressing the uncertainty of NIST measurement results; NIST Technical Note 1297. Gaithersburg (MD): National Institute of Standards and Technology; 1994.

4. Protocol for EPA evaluation of alternate test procedures for analyzing radioactive contaminants in drinking water; EPA 815-R-15-008, Appendix D. Washington DC: U.S. Environmental Protection Agency; 2015.

5. Currie LA. Limits for qualitative detection and quantitative determination—application to radiochemistry. Anal Chem. 1968;40(3):586–593.

6. U.S. Environmental Protection Agency. National Primary Drinking Water Regulations; Radionuclides; Final Rule. 40 CFR Parts 9, 141, and 142. Fed Reg. 2000;65(236), Part II.

CITATION

Standard Methods Committee of the American Public Health Association, American Water Works Association, and Water Environment Federation. 7020 Quality System. In: Standard Methods for the Examination of Water and Wastewater. Lipps WC, Baxter TE, Braun-Howland E, editors. Washington DC: APHA Press.

DOI: 10.2105/SMWW.2882.137