
This section is a compilation of quality assurance (QA) and quality control (QC) procedures or requirements to which Standard Methods sections refer, especially those sections addressing chemical analyses. Radiochemical, toxicity, and microbiological analyses often rely on specific QA and QC requirements presented more completely in Sections 7020, 8020, and 9020, respectively. It is not the intent of this section to supersede specific requirements outlined in the x020 sections of Parts 2000 through 10000, or the specifications of individual methods. Verify the applicable method QC requirements by referring to the method being used, the associated x020 section, and finally this section when not otherwise specified.

Quality assurance (QA) for laboratory operations is a program that specifies the planned and systematic measures and activities required to produce defensible data with known precision and accuracy. Laboratories operating under accreditation or certification from a national, state, or other regional accrediting organization must include QA program elements specifically required for that accreditation or certification. Elements of the QA program are defined in a laboratory’s QA manual, written procedures, work instructions, and records. The manual should include a policy that defines the statistical level of confidence used to express data precision and bias, as well as method detection levels (MDLs) and minimum reporting limits (MRLs). The overall system includes all QA policies and quality control (QC) processes needed to demonstrate the laboratory’s competence and to ensure and document the quality of its analytical data. Quality systems are essential for laboratories seeking accreditation under state, federal, or international laboratory certification programs.

QA includes both QC (1020 B) and quality assessment (1020 C). For information on evaluating data quality, see Section 1030.

1. Quality Assurance Plan

Establish a QA program and prepare a QA manual (or plan). The QA manual and associated documents include the following items:1–5

• cover sheet with approval signatures

• quality policy statement

• organizational structure

• staff responsibilities

• document control

• analyst training and performance requirements

• tests performed by the laboratory

• procedures for handling and receiving samples

• sample control and documentation procedures

• procedures for achieving traceable measurements

• major equipment, instrumentation, and reference measurement standards used

• standard operating procedures (SOPs) for each analytical method

• procedures for generating, approving, and controlling policies and procedures

• procedures for procuring reference materials and supplies

• procedures for procuring subcontractors’ services

• internal QC activities

• procedures for calibrating, verifying, and maintaining instrumentation and equipment

• data-verification practices, including inter-laboratory comparison and proficiency-testing programs

• procedures for feedback and corrective actions whenever testing discrepancies are detected

• procedures for permitted exceptions to documented policies

• procedures for system and performance audits and reviews

• procedures for assessing data precision and accuracy and determining MDLs

• procedures for data reduction, validation, and reporting

• procedures for archiving records

• procedures and systems for controlling the testing environment

• procedures for dealing with complaints from data users

Also, the QA manual defines the responsibility for, and frequency of, management review and updates to the QA manual and associated documents.

On the title page, include approval signatures, revision numbers, approval date, and effective date. In the QA manual, include a statement that the manual has been reviewed and determined to be appropriate for the scope, volume, and range of testing activities at the laboratory,2,3 as well as an indication that management has committed to ensuring that the quality system defined in the QA manual is implemented and followed at all times.

The QA manual also should clearly specify and document managerial responsibility, authority, quality goals, objectives, and commitment to quality. Write the manual so it is clearly understood and ensures that all laboratory personnel understand their roles and responsibilities.

Implement and follow sample-tracking procedures, including legal chain-of-custody procedures (as required by data users), to ensure that chain of custody is maintained and documented for each sample. Institute procedures to trace a sample and its derivatives through all steps: from collection through analysis, reporting of final results, and sample disposal. Routinely practice adequate and complete documentation, which is critical to ensure that data are defensible, to meet laboratory accreditation/certification requirements, and to ensure that all tests and samples are fully traceable.

Standard operating procedures describe the analytical methods to be used in the laboratory in sufficient detail that a competent analyst unfamiliar with a method can conduct a reliable review or obtain acceptable results. An SOP must address the following items2–4 when they are applicable to the method being described:

• title of referenced, consensus test method

• sample matrix or matrices

• MDL or LOQ

• scope and application

• summary of SOP

• definitions

• interferences

• safety considerations

• waste management

• apparatus, equipment, and supplies

• reagents and standards

• sample collection, preservation, shipment, and storage requirements

• specific QC practices, frequency, acceptance criteria, and required corrective action if acceptance criteria are not met

• calibration and standardization

• details on the actual test procedure, including sample preparation

• calculations

• qualifications and performance requirements for analysts (including number and type of analyses)

• data assessment/data management

• references

• any tables, flowcharts, and validation or method-performance data

At a minimum, validate a new SOP before use by first determining the MDL and performing an initial demonstration of capability using relevant regulatory guidelines. (NOTE: MDL does not apply to biological, microbiological, radiological, and some physical and chemical tests.)

Use and document preventive-maintenance procedures for instrumentation and equipment. An effective preventive-maintenance program reduces instrument malfunctions, maintains more consistent calibration, is cost-effective, and reduces downtime.

In the QA manual or appropriate SOP, include procedures for establishing measurement traceability to the International System of Units (SI) through a National Metrology Institute, such as the National Institute of Standards and Technology (NIST). Standard reference materials (SRMs) or commercially available reference materials must be certified and traceable to SI standards to establish the integrity of the laboratory calibration and measurement program.

Formulate document-control procedures, which are essential to data defensibility, to cover the entire process: document generation, approval, distribution, storage, recall, archiving, and disposal.

Maintain logbooks for each test or procedure performed, with complete documentation on the preparation and analysis of each sample, including sample identification, associated standards and QC samples, method reference, date/time of preparation/analysis, analyst, weights and volumes used, results obtained, and any problems encountered. Keep logbooks that document maintenance and calibration for each instrument or piece of equipment.

Calibration procedures, corrective actions, internal QC activities, performance audits, and data assessments for precision and accuracy (bias) are discussed in 1020 B and C.

Data reduction, validation, and reporting are the final steps in the data-generation process. The data obtained from an analytical instrument must first be subjected to the data-reduction processes described in the applicable SOP before the final result can be obtained. In the QA manual or SOP, specify calculations and any correction factors, as well as the steps to be followed when generating the sample result. Also, specify all the data-validation steps to be followed before the final result is made available. Report results in standard units of mass, volume, or concentration, as specified in the method or SOP or as required by regulators or clients. Report results below detection or quantitation levels in accordance with the procedures prescribed in the specific SOP, regulatory requirements, or general laboratory policy.

A statement of uncertainty may be required with each result in specific SOPs, by specific clients, or by a regulatory authority. Uncertainty expression requires statistically relevant data, which may be prescribed within a specific method. Refer to 1030 B for an overview and references on uncertainty.

See references and bibliography in this section for other useful information and guidance on establishing a QA program and developing an effective QA manual.

1020 B. Quality Control

Include in each analytical method or SOP the minimum required QC for each analysis. A good QC program consists of at least the following elements, as applicable:

• initial demonstration of capability (IDC)

• ongoing demonstration of capability

• MDL determination

• reagent blank (also referred to as method blank)

• laboratory-fortified blank (LFB) [also referred to as blank spike or laboratory control sample (LCS)]

• laboratory-fortified matrix (also referred to as matrix spike)

• laboratory-fortified matrix duplicate (also referred to as matrix spike duplicate) or duplicate sample

• internal standard

• surrogate standard (for organic analysis) or tracer (for radiochemistry)

• calibration

• control charts

• corrective action

• frequency of QC indicators

• QC acceptance criteria

• definition of a batch

Sections 1010 and 1030 describe calculations for evaluating data quality.

1. Initial Demonstration of Capability

Each analyst in the laboratory should conduct an initial demonstration of capability (IDC) at least once before analyzing any sample to demonstrate proficiency in performing the method and obtaining acceptable results for each analyte. The IDC also is used to demonstrate that the laboratory’s modifications to a method produce results as precise and accurate as those produced by the reference method. At a minimum, include a reagent blank and at least 4 LFBs at a concentration between 10 times the MDL and the midpoint of the calibration curve (or other level specified in the method). Run the IDC after analyzing all required calibration standards. Ensure that the reagent blank does not contain any analyte of interest at a concentration greater than half the minimum quantitation level (MQL) (or other level specified in the method). Ensure that the precision (percent relative standard deviation) and accuracy (percent recovery) calculated for the LFBs are within the acceptance criteria listed in the method being used or generated by the laboratory (if there are no established mandatory criteria).

To establish laboratory-generated accuracy and precision limits, calculate the upper and lower control limits from the mean and standard deviation of percent recovery for at least 20 data points:

upper control limit = x̄ + 3s

lower control limit = x̄ − 3s

where x̄ = mean percent recovery and s = standard deviation of the percent recoveries.

Laboratory-generated acceptance criteria for the IDC (in the absence of established mandatory criteria) generally meet industry-acceptable guidelines for percent recovery and percent relative standard deviation (%RSD) (e.g., 70% to 130% recovery and 20% RSD). Another option is to obtain acceptance criteria from a proficiency-testing (PT) sample provider’s inter-laboratory PT studies and translate the data into percent-recovery limits per analyte and method being used.
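For illustration only, the control-limit arithmetic described above is straightforward to automate. The following minimal sketch (not part of Standard Methods; the recovery values are hypothetical) computes laboratory-generated limits as the mean ± 3 standard deviations of at least 20 LFB percent recoveries:

```python
# Minimal sketch: laboratory-generated IDC acceptance limits from
# >= 20 percent-recovery results, using the mean +/- 3s convention.
from statistics import mean, stdev

def recovery_control_limits(recoveries):
    """Return (lower CL, upper CL) as mean -/+ 3 sample standard deviations."""
    if len(recoveries) < 20:
        raise ValueError("use at least 20 data points")
    m, s = mean(recoveries), stdev(recoveries)
    return m - 3 * s, m + 3 * s

# Hypothetical LFB recoveries (%)
data = [98, 102, 95, 104, 99, 101, 97, 103, 100, 96,
        105, 94, 98, 102, 99, 101, 100, 97, 103, 96]
lcl, ucl = recovery_control_limits(data)
print(f"acceptance range: {lcl:.1f}% to {ucl:.1f}%")
```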

Also, verify that the method is sensitive enough to meet measurement objectives for detection and quantitation by determining the lower limit of the operational range.

2. Operational Range

Before using a new instrument or instrumental method, determine its operational (calibration) range (upper and lower limits). Use standard concentrations for each analyte that produce increasing instrument response, and fit the response with an appropriate calibration model (linear, weighted, or second-order). Laboratories must define acceptance criteria for the operational range in their QA plans.

3. Ongoing Demonstration of Capability

The ongoing demonstration of capability, sometimes called a laboratory control sample, laboratory control standard, QC check sample, or laboratory-fortified blank, is used to ensure that the laboratory remains in control while samples are analyzed; it separates laboratory performance from method performance on the sample matrix. The initial calibration must be verified by comparison to a second-source calibration standard solution. The laboratory control standard used for the ongoing demonstration of capability generally can be from either the same source as the initial calibration standard or a separate source. Some methods may require that both calibration and spiking solutions be verified with a second (external) source. When the initial calibration is verified, the measured concentration must be within 10% of the second source’s value. See 1020 B.6 below for further details on the LFB. Analyze QC check samples at least quarterly.

4. Method Detection Level Determination and Application

Before analyzing samples, determine the MDL for each analyte of interest and method to be used. Some test methods are not amenable to MDL determinations; in such cases, follow the directions in each respective method to determine reporting levels.

As a starting point for selecting the concentration to use when determining the MDL, use an estimate of 5 times the estimated true detection level. Start by adding the known amount of constituent to reagent water or sample matrix to achieve the desired concentration. Prepare and analyze at least 7 portions of this solution over a period of at least 3 d to ensure that the MDL determination is representative of routine measurements in the laboratory. The replicate measurements should be in the range of 1 to 5 times the estimated MDL. Calculate the estimated standard deviation, s, of the 7 replicates and, from a table of the one-sided t distribution, select t for (7 − 1) = 6 degrees of freedom at the 99% confidence level. This value, 3.14, is then multiplied by s to calculate the spiked-sample MDL, or MDLs:

MDLs = t(n−1, 0.99) × s = 3.14 × s

Ideally, estimate s using pooled data from several analysts rather than data from one analyst (if the laboratory routinely has multiple analysts running a given test method).

The pooled estimate of s, defined here as s_pooled, is a weighted average of the individual analysts’ standard deviations: the squared deviations from the mean of each analyst’s data subset are summed, divided by the appropriate number of degrees of freedom, and the square root of the quotient is taken. Using s_pooled to calculate the multiple-analyst standard deviation allows each analyst’s error and bias to affect the final result only as much as they have contributed to that result.1

s_pooled = √[ Σ Σ (xij − x̄j)² / (N − Nt) ]

where Nt is the number of analysts whose data are being used to compute the pooled standard deviation, xij is the ith result of analyst j, x̄j is the mean of analyst j’s results, and N is the total number of results from all analysts.

Perform MDL determinations iteratively. If the calculated MDL is not within a factor of 10 of the known addition, repeat the determination at a more suitable concentration. Ideally, conduct MDL determinations or verifications at least annually, or at another specified frequency, for each analyte, major matrix category, and method in use at the laboratory. Perform or verify the MDL determination for each analyst and instrument, and whenever the method’s instrumentation or operating conditions are modified in a way that changes its detection capability or chemistry. Include all sample-preparation steps in the MDL determination.

Alternatively, or when required, analyze an additional 7 blank samples based on the procedure outlined by the US Environmental Protection Agency.2 In addition to calculating the MDLS, calculate the MDL based on the blanks, or MDLb, as follows.

If none of the method blanks gives a numerical result (positive or negative), then MDLb is not applicable and MDL = MDLs. If some, but not all, of the blanks give numerical results, then MDLb equals the highest method-blank result. If all of the method blanks give numerical results, calculate MDLb as

MDLb = X̄ + 3.14 × Sb

where:

X̄ = mean of the blank results (set negative mean values to 0), and

Sb = sample standard deviation of the blank results.

The MDL is the greater of the two results obtained from the MDLS and MDLb calculations.

When analyzing more than 7 samples to determine MDLs and MDLb, replace the critical t value of 3.14 with the value from the Student t-distribution table for 99% confidence (one-tailed) and n − 1 degrees of freedom.
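The following sketch implements the MDLs and MDLb calculations described above. It is an illustration only: the one-sided 99% t values are tabulated for a few replicate counts, and the replicate data are hypothetical.

```python
# Sketch (illustrative, hypothetical data) of the MDL calculations above.
from statistics import mean, stdev

T99 = {7: 3.143, 8: 2.998, 9: 2.896, 10: 2.821}  # one-sided t(n-1, 0.99)

def mdl_spiked(results):
    """MDLs = t * s for n spiked replicates."""
    return T99[len(results)] * stdev(results)

def mdl_blank(results):
    """MDLb: None if no blank gives a numerical result; the highest
    blank if only some do; X-bar + t*s if all do."""
    numeric = [x for x in results if x is not None]
    if not numeric:
        return None
    if len(numeric) < len(results):
        return max(numeric)
    x_bar = max(mean(numeric), 0.0)      # negative mean set to 0
    return x_bar + T99[len(numeric)] * stdev(numeric)

spiked = [0.021, 0.025, 0.019, 0.023, 0.022, 0.026, 0.020]
blanks = [0.002, 0.001, 0.003, 0.002, 0.001, 0.002, 0.003]
mdl_s, mdl_b = mdl_spiked(spiked), mdl_blank(blanks)
mdl = mdl_s if mdl_b is None else max(mdl_s, mdl_b)  # greater of the two
print(f"MDLs = {mdl_s:.4f}, MDLb = {mdl_b:.4f}, MDL = {mdl:.4f}")
```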

Generally, apply the MDL to reporting sample results as follows (unless there are regulatory or client constraints to the contrary):

• Report results below the MDL as “not detected” (ND).

• Report results between the MDL and MQL (MRL, LOQ, etc.) with qualification for the quantified value given.

• Report results above the MQL with a value (and its associated uncertainty if required).

5. Reagent Blank

A reagent blank (method blank) consists of reagent water (see Section 1080) and all reagents (including preservatives) that normally are in contact with a sample during the entire analytical procedure. The reagent blank is used to determine whether, and the extent to which, reagents and the preparative analytical steps contribute to measurement uncertainty. At a minimum, include one reagent blank with each sample set (batch) or on a 5% basis, whichever is more frequent. Analyze a blank after the daily calibration standard and after highly contaminated samples if carryover is suspected. Evaluate reagent blank results for contamination. If unacceptable contamination is present in the reagent blank, identify and eliminate the source. Typically, sample results are suspect if analytes in the reagent blank are greater than the MQL. Samples analyzed with a contaminated blank must be re-prepared and reanalyzed. Refer to the method being used for specific reagent-blank acceptance criteria. General guidelines for qualifying sample results with regard to reagent-blank quality are as follows (a sketch of this decision logic appears after the list):

• If the reagent blank is less than the MDL and sample results are greater than the MQL, then no qualification is required.

• If the reagent blank is greater than the MDL but less than the MQL and sample results are greater than the MQL, then qualify the results to indicate that analyte was detected in the reagent blank.

• If the reagent blank is greater than the MQL, further corrective action and qualification is required.
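As a minimal sketch of the three guidelines above (the qualifier strings are illustrative, not prescribed, and all values are assumed to be in consistent units):

```python
# Sketch of the reagent-blank qualification guidelines above.
def qualify(sample, blank, mdl, mql):
    """Return an illustrative qualification note for one sample result."""
    if blank > mql:
        return "blank > MQL: corrective action and qualification required"
    if mdl < blank < mql and sample > mql:
        return "qualify: analyte also detected in reagent blank"
    if blank < mdl and sample > mql:
        return "no qualification required"
    return "case not covered by the general guidelines; follow the method/SOP"

print(qualify(sample=12.0, blank=0.4, mdl=0.5, mql=2.0))
```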

6. Laboratory-Fortified Blank/Laboratory Control Standard

A laboratory-fortified blank [laboratory control standard (LCS)] is a reagent water sample (with associated preservatives) to which a known concentration of the analytes of interest has been added. An LFB is used to evaluate laboratory performance and analyte recovery in a blank matrix. Its concentration should be high enough to be measured precisely but not so high as to be irrelevant to the environmental concentrations being measured. Preferably, rotate LFB concentrations to cover different parts of the calibration range. At a minimum, include one LFB with each sample set (batch) or on a 5% basis, whichever is more frequent. (The definition of a batch is typically method-specific.) Process the LFB through all sample preparation and analysis steps. Use an added concentration of at least 10 times the MDL/MRL, less than or equal to the midpoint of the calibration curve, or the level specified in the method. A low-level LFB fortified at 2 to 5 times the detection limit (MDL) can be used as a check for false negatives and for MDL/MRL verification. Control limits for a low-level LFB may vary depending on the method, but are typically expected to be 50% to 150%. Ideally, the LFB concentration should be less than the maximum contaminant level (MCL), if the contaminant has one. Depending on the method’s specific requirements, prepare the addition solution from either the same reference source used for calibration or an independent source. Evaluate the LFB for percent recovery of the added analytes by comparing the results to the method-specified limits, control charts, or other approved criteria. If LFB results are out of control, take corrective action, including re-preparation and reanalysis of associated samples if required. Use LFB results to evaluate batch performance, calculate recovery limits, and plot control charts (see 1020 B.13).

7. Laboratory-Fortified Matrix

A laboratory-fortified matrix (LFM) is an additional portion of a sample to which known amounts of the analytes of interest are added before sample preparation. Some analytes are not appropriate for LFM analysis. See the tables in Sections 2020, 4020, 5020, 6020, 7020, and specific methods for guidance on when an LFM is relevant.

The LFM is used to evaluate analyte recovery in a sample matrix. If an LFM is feasible and the method does not specify LFM frequency requirements, then include at least one LFM with each sample set (batch) or on a 5% basis, whichever is more frequent. Add a concentration that is at least 10 times the MDL/MRL, less than or equal to the midpoint of the calibration curve, or method-specified level to the selected samples. To allow analysts to separate the matrix’s effect from laboratory performance, use the same concentration as for the LFB. Prepare the LFM from the same reference source used for the LFB/LCS. If the sample contains no detectable analyte of interest or when the analyte level is unknown but expected to be near the LOQ, adjust the LFM concentration to more than 5 times the LOQ to ensure that the selected sample’s level does not adversely affect recovery. If the sample is known or expected to contain the analyte of interest, then add approximately as much analyte to the LFM sample as the concentration expected to be found in the known sample. Evaluate the results obtained for LFMs for accuracy or percent recovery. If LFM results are out of control, then take corrective action to rectify the matrix effect, use another method, use the method of standard addition, or flag the data if reported. Refer to the method being used for specific acceptance criteria for LFMs until the laboratory develops statistically valid, laboratory-specific performance criteria. Base sample batch acceptance on results of LFB analyses rather than LFMs alone, because the LFM sample matrix may interfere with method performance.

8. Duplicate Sample and Laboratory-Fortified Matrix Duplicate

Duplicate samples are analyzed randomly to assess precision on an ongoing basis. If an analyte is rarely detected in a matrix type, use an LFM duplicate. An LFM duplicate is a second portion of the sample described in 1020 B.7 to which known amounts of the analytes of interest are added before sample preparation. If sufficient sample volume is collected, this second portion of sample is fortified and processed in the same way as the LFM. If there is not enough sample for an LFM duplicate, then use a portion of a different sample (duplicate) to gather data on precision. At a minimum, include one duplicate sample or one LFM duplicate with each sample set (batch) or on a 5% basis, whichever is more frequent, and process it independently through the entire sample preparation and analysis. Evaluate LFM duplicate results for precision and accuracy (precision alone for duplicate samples). If LFM duplicate results are out of control, then take corrective action to rectify the matrix effect, use another method, use the method of standard addition, or flag the data if reported. If duplicate results are out of control, then re-prepare and reanalyze the sample and take additional corrective action, as needed. When the value of one or both duplicate samples is less than or equal to 5 times the MRL, the laboratory may use the MRL as the control limit, and the duplicate results are not used. Refer to the method being used for specific acceptance criteria for LFM duplicates or duplicate samples until the laboratory develops statistically valid, laboratory-specific performance criteria. If the method being used does not provide limits, calculate preliminary limits from the IDC. Base sample batch acceptance on results of LFB analyses rather than LFM duplicates alone, because the LFM sample matrix may interfere with method performance.

9. Internal Standard

Internal standards are used for organic analyses by gas chromatography/mass spectrometry (GC/MS), high-performance liquid chromatography (HPLC), liquid chromatography/mass spectrometry (LC/MS), some GC analyses, some ion chromatography (IC) analyses, and some metals analyses by inductively coupled plasma mass spectrometry (ICP-MS). An internal standard is a unique analyte included in each standard and added to each sample or sample extract/digestate just before sample analysis. Internal standards must mimic the analytes of interest and not interfere with the analysis. Choose an internal standard whose retention time or mass spectrum is separate from the analytes of interest and that elutes in a representative area of the chromatogram. Internal standards are used to monitor retention time, calculate relative response, or quantify the analytes of interest in each sample, sample extract, or sample digestate. When quantifying by the internal standard method, measure all analyte responses relative to this internal standard, unless interference is suspected. If internal standard results are out of control, take corrective action, including reanalysis if required. Refer to the method being used for specific internal standards and their acceptance criteria.

10. Surrogates, Tracers, and Carriers

Surrogates, tracers, and carriers are used to evaluate method performance in each sample. Surrogates are used for organic analyses; tracers and carriers are used for radiochemistry analyses. A surrogate standard is a known amount of a unique compound added to each sample before extraction. Surrogates mimic the analytes of interest and are compounds unlikely to be found in environmental samples (e.g., fluorinated compounds or stable, isotopically labeled analogs of the analytes of interest). Tracers generally are different isotopes of the analyte or element of interest that are measured based on their characteristic radioactive emissions. Carriers generally are stable isotopes of the element being determined, or analogs thereof, that are measured by chemical or physical means (e.g., gravimetrically or spectroscopically). Surrogates and tracers are introduced to samples before extraction to monitor extraction efficiency and percent recovery in each sample. If surrogate or tracer results are out of control, then take corrective action, including re-preparation and reanalysis if required. Refer to a specific SOP for surrogates, tracers, or carriers and their respective acceptance criteria until the laboratory develops statistically valid, laboratory-specific performance criteria.

11. Calibration Curves

For tests that use calibration curves, the following guidance is relevant.

a. Instrument calibration: Perform instrument maintenance and calibration according to the method or instrument-manual instructions. Conduct instrument-performance checks according to method or SOP instructions.

b. Initial calibration: Perform initial calibration using at least 3 concentrations of standards for linear curves, at least 5 concentrations of standards for nonlinear curves, or as specified by the method being used. Set the lowest standard concentration at the reporting limit; alternatively, if the applicable QC criteria at the lowest standard are met, the lowest standard concentration may define the reporting limit. The highest concentration standard defines the upper end of the calibration range. Ensure that the calibration range encompasses the analytical concentration values expected in samples or required dilutions. Choose calibration standard concentrations with no more than one order of magnitude between successive concentrations.

A variety of calibration functions may be appropriate: response factor (RF) for internal standard calibration, calibration factor (CF) for external standard calibration, or calibration curve. Calibration curves may be linear through the origin, linear not through the origin, or nonlinear through or not through the origin. Some nonlinear functions can be linearized by using mathematical transformations of the data (e.g., log transformation).

If using response factors or calibration factors, the calculated %RSD for each analyte of interest must be less than the method-specified value. When using response factors (e.g., for GC/MS analysis), evaluate the instrument’s performance or sensitivity for the analyte of interest against minimum acceptance values for response factors. Refer to the method being used for the calibration procedure and acceptance criteria on the response or calibration factors for each analyte.

If linear regression is used, many methods specify a minimum correlation coefficient for evaluating the quality of the calibration model (y = mx + b); if no minimum is specified, use 0.995. Compare each calibration point to the curve by recalculating its concentration. If any recalculated concentration is not within the method’s acceptance criteria, identify the source of the outlier and correct it before sample quantitation. Alternatively, a method’s calibration can be judged against a reference method by measuring the calibration linearity or the %RSD among the response factors at each calibration level or concentration.3 Additional approaches have emerged: some methods may evaluate either linear or nonlinear calibration quality using specifications for relative error (RE) or percent relative standard error (%RSE).4,5
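As an illustrative sketch only (the 20% recalculation tolerance and the function names are assumptions, not method requirements), the checks above can be automated as follows:

```python
# Sketch: least-squares calibration fit, correlation-coefficient check
# against the 0.995 default, and recalculation of each standard.
import math

def linear_fit(conc, resp):
    """Return slope m, intercept b, and correlation coefficient r."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(resp) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, resp))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in resp)
    m = sxy / sxx
    return m, my - m * mx, sxy / math.sqrt(sxx * syy)

def check_calibration(conc, resp, min_r=0.995, tol_pct=20.0):
    """Flag a calibration whose r or point-recalculation errors fail."""
    m, b, r = linear_fit(conc, resp)
    ok = r >= min_r
    for x, y in zip(conc, resp):
        recalc = (y - b) / m                       # recalculated concentration
        if 100.0 * abs(recalc - x) / x > tol_pct:  # relative error, %
            ok = False
    return ok, m, b, r

def percent_rse(conc, recalc, p=2):
    """%RSE for a fitted curve with p parameters (p = 2 for y = mx + b)."""
    terms = sum(((c - x) / x) ** 2 for x, c in zip(conc, recalc))
    return 100.0 * math.sqrt(terms / (len(conc) - p))
```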

Use an initial calibration with any of the above functions (response factor, calibration factor, or calibration curve) to quantitate the analytes of interest in samples. Use calibration verification (see ¶ c below) only for initial calibration checks, not for sample quantitation, unless otherwise specified by the method being used. Perform initial calibration when the instrument is set up and whenever calibration-verification criteria are not met.

c. Calibration verification: In calibration verification, analysts periodically use a calibration standard to confirm that instrument performance has not changed significantly since initial calibration. Base this verification on time (e.g., every 12 h) or on the number of samples analyzed (e.g., after every 10 samples). Verify calibration by analyzing one standard at a concentration near or at the midpoint of the calibration range. Evaluate the calibration- verification analysis based either on allowable deviations from the values obtained in the initial calibration or from specific points on the calibration curve. If the calibration verification is out of control, then take corrective action, including reanalysis of any affected samples. Refer to the method being used for the frequency of and acceptance criteria for calibration verification.

12. QC Calculations

The following is a compilation of equations frequently used in QC calculations.

a. Initial calibrations:

Relative response factor (RRF):

RRF = (Ax × Cis) / (Ais × Cx)

where:

RRF = relative response factor,

A = peak area or height of characteristic ion measured,

C = concentration,

is = internal standard, and

x = analyte of interest.

Response factor (RF):

RF = Ax / Cx

where:

RF = response factor,

A = peak area or height,

C = concentration, and

x = analyte of interest.

Calibration factor (CF):

CF = Ax / Cx

where A = peak area or height and C = concentration of the external standard for analyte x.

Relative standard deviation (RSD, %):

RSD = (s / x̄) × 100

s = √[ Σ (xi − x̄)² / (n − 1) ]

where:

s = standard deviation,

n = total number of values,

xi = each individual value used to calculate the mean, and

x̄ = mean of n values.

b. Calibration verification:

Percent difference (D, %) for response factor:

D = [(RF̄i − RFc) / RF̄i] × 100

where:

RF̄i = average RF or RRF from the initial calibration, and

RFc = RF or RRF from the calibration-verification standard.

Percent difference (D, %) for values:

D = [(measured value − true value) / true value] × 100

where the true value is the known value of the calibration-verification standard.

c. Percent recovery for laboratory-fortified blank (laboratory control sample):

% recovery = (measured concentration / concentration added) × 100

d. Percent recovery for surrogates:

% recovery = (measured surrogate concentration / surrogate concentration added) × 100

e. Percent recovery for laboratory-fortified matrix (LFM) sample (matrix spike sample):

% recovery = [(LFM sample result − unfortified sample result) / concentration added] × 100

f. Duplicate sample:

Relative percent difference (RPD, %):6

RPD = [ |x1 − x2| / ((x1 + x2) / 2) ] × 100

where x1 and x2 are the duplicate results.

g. Method of standard additions:

sample concentration = (S2 × C × V1) / [(S1 − S2) × V2]

where:

C = concentration of the standard solution (mg/L),

S1 = signal for fortified portion,

S2 = signal for unfortified portion,

V1 = volume of standard addition (L), and

V2 = volume of sample portion used for method of standard addition (L).
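The following sketch collects the equations above as small functions. Argument names mirror the symbols in the where-clauses; the example values are hypothetical.

```python
# Sketch implementing the QC calculations compiled above.
from statistics import mean, stdev

def rrf(a_x, a_is, c_x, c_is):
    """Relative response factor (internal-standard calibration)."""
    return (a_x * c_is) / (a_is * c_x)

def cf(a_x, c_x):
    """Response or calibration factor (external-standard calibration)."""
    return a_x / c_x

def rsd_percent(values):
    """Relative standard deviation, %."""
    return 100.0 * stdev(values) / mean(values)

def percent_difference(initial, verified):
    """%D between an initial-calibration value and its verification."""
    return 100.0 * (initial - verified) / initial

def percent_recovery(measured, added, background=0.0):
    """LFB/surrogate recovery; pass the unfortified-sample result as
    `background` for an LFM recovery."""
    return 100.0 * (measured - background) / added

def rpd(x1, x2):
    """Relative percent difference between duplicates."""
    return 100.0 * abs(x1 - x2) / ((x1 + x2) / 2.0)

def msa_concentration(c_std, s1, s2, v1, v2):
    """Sample concentration by the method of standard additions."""
    return (s2 * c_std * v1) / ((s1 - s2) * v2)

print(f"{rpd(4.8, 5.2):.1f}% RPD")   # hypothetical duplicate results
```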

13. Control Charts

Control charts present a graphical record of quality7 by displaying QC results over time to demonstrate statistical control of an analytical process and to detect apparent changes in the analytical process that may erode such control.8 These charts are essential QC tools for tests that use accuracy and precision QC measures. Computer-generated and -maintained lists or databases with QC values, limits, and trending may be used as an alternative to plotting control charts.

Control charts for batch QC are often based on a single QC result per batch, and decisions on whether to accept or reject that batch may depend on this one result. This special case is referred to as control charts for individuals because the rational subgroup size is 1. When the distribution of QC data is markedly asymmetrical (e.g., method blanks), use control charts for individuals with caution.8

Two types of control charts commonly used in laboratories are: accuracy (means) charts for QC samples and precision (range) charts for replicate or duplicate analyses.

a. Accuracy (means) chart: The accuracy chart for QC samples (e.g., reagent blanks, LCSs, calibration check standards, LFBs, LFMs, and surrogates) is constructed from the average and standard deviation of a specified number of measurements of the analyte of interest (Figure 1020:1). The accuracy chart includes upper and lower warning levels (WLs) and upper and lower control levels (CLs). Common practice is to use ±2s and ±3s limits for the WL and CL, respectively, where s represents the standard deviation of a finite sample set (see 1010 B.1). These calculated limits should not exceed those required in the method. The value for s is the average standard deviation derived from a series of trial runs performed before establishing a control chart. Ideally, conduct at least 7 trials using the same number of measurements per trial as anticipated when using the control chart. Set up an accuracy chart by using either the calculated values for mean and standard deviation or else the percent recovery. (Percent recovery is necessary if the concentration varies.) Construct a chart for each analytical method. Construct matrix-specific QC charts separately for each matrix. Ideally, to provide the greatest benefit to the laboratory and enable the earliest possible detection of an out-of-control condition, enter results on the chart each time the QC sample is analyzed. It is advisable to recalculate the initial estimate of s when the number of trials reaches 20 to 50 results.
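As a minimal sketch (an illustration, not part of the method), the accuracy-chart lines and out-of-limit points can be computed from the QC results using the ±2s WL and ±3s CL convention described above:

```python
# Sketch: accuracy (means) chart lines and out-of-control flags.
from statistics import mean, stdev

def accuracy_chart(recoveries):
    """Return chart lines (center, WLs, CLs) and points beyond a CL."""
    m, s = mean(recoveries), stdev(recoveries)
    lines = {"center": m,
             "lwl": m - 2 * s, "uwl": m + 2 * s,
             "lcl": m - 3 * s, "ucl": m + 3 * s}
    flags = [(i, r) for i, r in enumerate(recoveries)
             if not lines["lcl"] <= r <= lines["ucl"]]
    return lines, flags
```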

b. Precision (range) chart: The precision chart also is constructed from the average and standard deviation of a specified number of measurements [e.g., %RSD or relative percent difference (RPD)] for replicate or duplicate analyses of the analyte of interest. If the standard deviation of the method is known, use the factors from Table 1020:1 to construct the central line and the WLs and CLs, as in Figure 1020:2. The standard deviation (s) used with the factors from Table 1020:1 is the arithmetic average of the individual standard deviations from the trials, derived from stated or measured values for reference materials. The number of measurements (n) used to determine the estimated standard deviation (s) is specified in Table 1020:1 relative to statistical confidence limits of 95% for WLs and 99% for CLs. Perfect agreement between replicates or duplicates results in a difference of zero when the values are subtracted, so the baseline on the chart is zero; therefore, for precision charts, only the upper WL and upper CL are meaningful. The standard deviation is converted to the range so analysts need only subtract the two results to plot the value on the precision chart. The mean range is computed as

R̄ = d2 × s

the upper CL as

CL = D4 × R̄

and the upper WL as

WL = R̄ + 2s(R)

where:

R̄ = mean range,

d2 = factor to convert s to the mean range (1.128 for duplicates, as given in Table 1020:1),

s(R) = standard deviation of the range, and

D4 = factor to convert the mean range to the CL (3.267 for duplicates, as given in Table 1020:1).

Note: When computed lower CL or lower WL values are negative, record the value as zero because the range value, R, is positive by definition.

Table 1020:1. Factors for Computing Lines on Range Control Charts

Number of Observations (n)   Factor for Central Line (d2)   Factor for Control Limits (D4)
2                            1.128                          3.267
3                            1.693                          2.575
4                            2.059                          2.282
5                            2.326                          2.114
6                            2.534                          2.004

A precision chart is rather simple when duplicate analyses of a standard are used (Figure 1020:2). For duplicate analyses of samples, the plot appears different because of variations in sample concentration. If a constant RSD in the concentration range of interest is assumed, then R̄, D4R̄, etc., may be computed as above for several concentrations, a smooth curve drawn through the points obtained, and an acceptable range for duplicates determined (Figure 1020:3). A separate table, as suggested below the figure, will be needed to track precision over time.

More commonly, the range is expressed as a function of the RSD (coefficient of variation): normalize each pair’s range by dividing by the pair’s average. Determine the mean normalized range for the k pairs analyzed by

R̄ = Σ R / k, where R = |x1 − x2| / x̄ for each duplicate pair

Then draw lines on the chart at R̄ + 2sR and R̄ + 3sR (where sR is the standard deviation of the normalized ranges) and, for each duplicate analysis, calculate the normalized range and enter the result on the chart (Figure 1020:4).
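A sketch of the range-chart arithmetic using the Table 1020:1 factors follows; the duplicate-pair values are hypothetical.

```python
# Sketch: precision (range) chart lines from Table 1020:1 factors,
# plus the normalized range for concentration-varying duplicates.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534}
D4 = {2: 3.267, 3: 2.575, 4: 2.282, 5: 2.114, 6: 2.004}

def range_chart_lines(s, n=2):
    """Central line R-bar = d2 * s and upper CL = D4 * R-bar."""
    r_bar = D2[n] * s
    return {"center": r_bar, "ucl": D4[n] * r_bar}

def normalized_range(x1, x2):
    """Duplicate range divided by the pair average."""
    return abs(x1 - x2) / ((x1 + x2) / 2.0)

print(range_chart_lines(s=0.05))           # hypothetical s
print(f"{normalized_range(4.8, 5.2):.3f}")
```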

c. Chart analyses: If the WLs are at the 95% confidence level, then an average of 1 out of 20 points would exceed that limit, whereas only 1 out of 100 on average would exceed the CLs. There are a number of “rules” (e.g., Western Electric) that may be used to examine control-chart data for trends and other apparent out-of-control changes in method performance.8 The tradeoff is between missing a change in method performance (false negative) versus investigating and acting on an apparent change in method performance when nothing had actually changed (false positive). The choice of rules to evaluate control charts should balance the risk between false positives and false negatives in method performance; this choice also may be influenced by the rules available in the software or statistical package used to analyze control charts. The following are typical guidelines, based on these statistical parameters (Figure 1020:5):

• Control limit—If one measurement exceeds a CL, repeat the analysis immediately. If the repeat measurement is within the CL, continue analyses; if it exceeds the CL, discontinue analyses and correct the problem.

• Warning limit—If 2 out of 3 successive points exceed a WL, analyze another sample. If the next point is within the WL, continue analyses; if the next point exceeds the WL, evaluate potential bias and correct the problem.

• Standard deviation—If 4 out of 5 successive points exceed 1s, or are in decreasing or increasing order, analyze another sample. If the next point is less than 1s, or changes the order, continue analyses; otherwise, discontinue analyses and correct the problem.

• Trending—If 7 successive samples are on the same side of the central line, discontinue the analyses and correct the problem.

The above considerations apply when the conditions are either above or below the central line, but not on both sides (e.g., 4 of 5 values must exceed either +1s or –1s). After correcting the problem, reanalyze the samples analyzed between the last in-control measurement and the out-of-control one.
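For illustration, the four guidelines above can be screened automatically on standardized values z = (x − central line) / s. This sketch is a simplification under stated assumptions, not a prescribed algorithm, and it checks each side of the central line separately as the preceding paragraph requires.

```python
# Sketch: screen a control chart for the rule violations listed above,
# using standardized values z = (x - center) / s.
def chart_violations(z):
    """Return descriptions of rules tripped by the most recent points."""
    issues = []
    if abs(z[-1]) > 3:
        issues.append("point beyond a control limit: repeat the analysis")
    for side in (1, -1):                      # check each side separately
        recent3, recent5 = z[-3:], z[-5:]
        if len(recent3) == 3 and sum(side * v > 2 for v in recent3) >= 2:
            issues.append("2 of 3 successive points beyond a warning limit")
        if len(recent5) == 5 and sum(side * v > 1 for v in recent5) >= 4:
            issues.append("4 of 5 successive points beyond 1s on one side")
        if len(z) >= 7 and all(side * v > 0 for v in z[-7:]):
            issues.append("7 successive points on one side of the central line")
    return issues

print(chart_violations([0.2, 1.4, 1.2, 1.1, 1.3, 0.9, 1.5]))
```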

Another important function of the control chart is assessing improvements in method precision. If measurements never or rarely exceed the WL in the accuracy and precision charts, recalculate the WL and CL using the 10 to 20 most recent data points. Trends in precision can be detected sooner if running averages of 10 to 20 are kept. Trends indicate systematic error; random error is revealed by random exceedance of WLs or CLs.

14. QC Evaluation for Small Sample Sizes

Small sample sizes (e.g., for field blanks and duplicate samples) may not be suitable for QC evaluation with control charts. QC evaluation techniques for small sample sizes are discussed elsewhere.5

15. Corrective Action

QC data that are outside the acceptance limits or exhibit a trend are evidence of unacceptable error in the analytical process. Take corrective action promptly to determine and eliminate the source of the error. Do not report data until the cause of the problem is identified and either corrected or qualified (Table 1020:2). Qualifying data does not eliminate the need to take corrective actions, but allows analysts to report data of known quality when it is either impossible or impractical to reanalyze the samples. Maintain records of all out-of-control events, determined causes, and corrective action taken. The goal of corrective action is not only to eliminate such events, but also to reduce repetition of the causes.

Table 1020:2. Example Data Qualifiers

Symbol   Explanation
B        Analyte found in reagent blank; indicates possible reagent or background contamination.
E        Estimated value; reported value exceeded the calibration range.
J        Reported value is an estimate because the concentration is less than the reporting limit or because certain QC criteria were not met.
N        Organic constituents tentatively identified; confirmation is needed.
PND      Precision not determined.
R        Sample results rejected because of gross deficiencies in QC or method performance; resampling and/or reanalysis is necessary.
RND      Recovery not determined.
U        Compound was analyzed for, but not detected.

Corrective action begins with analysts being responsible for knowing when the analytical process is out of control. Initiate corrective action when a QC check exceeds acceptance limits or exhibits trending, and report an out-of-control event (e.g., QC outliers, hold-time failures, loss of sample, equipment malfunctions, and evidence of sample contamination) to supervisors. Recommended corrective actions for unacceptable QC data are as follows:

• Check the data for calculation or transcription error. Correct results if an error occurred.

• Determine whether a sample was prepared and analyzed according to the approved method and SOP. If not, prepare and analyze again.

• Check calibration standards against an independent standard or reference material. If the calibration standards fail, re-prepare calibration standards, recalibrate, or both, and reanalyze affected samples.

• If an LFB fails, analyze another LFB.

• If a second LFB fails, check an independent reference material. If the second source is acceptable, re-prepare and reanalyze affected samples.

• If an LFM fails, check the LFB. If the LFB is acceptable, then qualify the data for the LFM sample, use another method, or use the method of standard addition.

• If an LFM and associated LFB fail, re-prepare and reanalyze the affected samples.

• If a reagent blank fails, analyze another reagent blank.

• If a second reagent blank fails, re-prepare and reanalyze the affected sample(s).

• If a surrogate or internal standard known addition fails and there are no calculation or reporting errors, re-prepare and reanalyze the affected samples.

If data qualifiers are used to qualify samples not meeting QC requirements, the data may or may not be usable for the intended purposes. It is the laboratory’s responsibility to provide the client or end-user of the data with sufficient information to determine the usability of qualified data.

1020 C. Quality Assessment

Quality assessment is the process used to ensure that QC measures are being performed as required and to determine the quality of the laboratory’s data. It includes proficiency samples, laboratory comparison samples, and performance audits. These are applied to test the precision, accuracy, and detection limits of the methods in use, and to assess adherence to SOP requirements.

1. Laboratory Check Samples (Internal Proficiency)

Evaluate the proficiency for each analyte and method in use by periodically analyzing laboratory check samples. To determine each method’s percent recovery, use either check samples containing known amounts of the analytes of interest supplied by an outside organization or else blind additions prepared independently in the laboratory.

In general, method performance is established before a method is used to generate usable data; acceptable percent recovery consists of values that fall within the established acceptance range. For example, if the acceptable range of recovery for a substance is 85% to 115%, then analysts are expected to achieve a recovery within that range on all laboratory check samples and to take corrective action if results are outside it.

2. Laboratory Comparison Samples

A good QA program requires participation in periodic inter- and intra-laboratory comparison studies. Commercial and some governmental programs supply laboratory comparison samples containing one or more constituents in various matrices. For routine procedures, semi-annual analyses are customary. If failures occur, take corrective action and analyze laboratory check samples more frequently until acceptable performance is achieved.

3. Compliance Audits

Compliance audits are conducted to evaluate whether the laboratory meets the applicable SOP or consensus-method requirements that the laboratory claims to follow. Compliance audits can be conducted by internal or external parties. A checklist can be used to document how a sample is treated from time of receipt to final reporting of the result. For example, Table 1020:3 provides a partial list of audit items for a hypothetical analytical procedure. The goal of compliance audits is to detect any deviations from the SOP or consensus method so corrective actions can be taken.

Table 1020:3. Example Audit of a Soil Analysis Procedure

Procedure                          Comment   Remarks
1. Sample entered into logbook     Yes       Lab number assigned
2. Sample weighed                  Yes       Dry weight
3. Drying procedure followed       No        Maintenance of oven not done
4a. Balance calibrated             Yes       Once per year
4b. Cleaned and zero adjusted      Yes       Weekly
5. Sample ground                   Yes       To pass 50 mesh
6. Ball mill cleaned               Yes       Should be after each sample
7. Etc.
4. Laboratory Quality Systems Audits

A quality-systems audit program is designed and conducted to review all elements of the laboratory quality system and to address any issues the review reveals. Quality-systems audits should be conducted by qualified auditors who are knowledgeable about the section or analysis being audited. Audit all major elements of the quality system at least annually. Quality-systems audits may be conducted internally or externally; both types should occur on a regularly scheduled basis and be handled properly to protect confidentiality. Internal audits are used for self-evaluation and improvement; external audits are used for accreditation, education on client requirements, and approval of the data’s end use. Take corrective actions on all audit findings and review their effectiveness at or before the next scheduled audit.

5. Management Review

Review and revision of the quality system are vital to its maintenance and effectiveness. Conducted at least annually by laboratory managers, this review should assess the effectiveness of the quality system and of corrective-action implementation, and should include internal and external audit results, performance-evaluation sample results, input from end-user complaints, and corrective actions.

1. Guidance for quality assurance plans; EPA/240/R-02/009 (QA/G-5). Washington DC: Office of Environmental Information, U.S. Environmental Protection Agency; 2002.
2. General requirements for the competence of testing and calibration laboratories; ISO/IEC 17025. Geneva, Switzerland: International Organization for Standardization; 2005.
3. Guidance for the preparation of standard operating procedures (SOPs) for quality-related documents; EPA/600/B-07/001 (QA/G-6). Washington DC: U.S. Environmental Protection Agency; 2007.
4. Manual for the certification of laboratories analyzing drinking water, 5th ed.; EPA 815-R-05-004. Washington DC: U.S. Environmental Protection Agency; 2005.
5. Supplement to the 5th edition of the manual for certification of laboratories analyzing drinking water; EPA 815-F-08-006. Cincinnati (OH): Office of Water, Office of Groundwater and Drinking Water, Technical Support Center, U.S. Environmental Protection Agency; 2008.

Delfino JJ. Quality assurance in water and wastewater analysis laboratories. Water Sew Works. 1977;124(7):79–84.
Inhorn SL, ed. Quality assurance practices for health laboratories. Washington DC: American Public Health Association; 1978.
U.S. Environmental Protection Agency. National Environmental Laboratory Accreditation Conference (NELAC) notice of conference and availability of standards. Fed Reg. 1994;59(231).
Good automated laboratory practices. Research Triangle Park (NC): U.S. Environmental Protection Agency; 1995.
R101: General requirements for accreditation. Gaithersburg (MD): American Association for Laboratory Accreditation (A2LA); 2014.

1. Skoog DA, West DM, Holler FJ, Crouch SR. Fundamentals of analytical chemistry, 10th ed. Boston (MA): Cengage Learning; 2022.
2. U.S. Environmental Protection Agency. Changes to method detection limit (MDL) procedure, III.H. In: Clean Water Act Methods Update Rule for the analysis of effluent; 40 CFR 136; 2016. https://www.ecfr.gov/current/title-40/chapter-I/subchapter-D/part-136?toc=1
3. Solutions to analytical chemistry problems with Clean Water Act methods; EPA 821-R-07-002. Washington DC: U.S. Environmental Protection Agency; 2007.
4. U.S. Environmental Protection Agency. Method modifications and analytical requirements. 40 CFR 136.6. [accessed 12 November 2021]. https://www.ecfr.gov/current/title-40/chapter-I/subchapter-D/part-136/section-136.6
5. Method 8000D: Determinative chromatographic separations, revision 5. In: Hazardous waste test methods, SW-846. Washington DC: U.S. Environmental Protection Agency; 2018.
6. National functional guidelines for inorganic Superfund data review; EPA-540/R-13-001. Washington DC: Office of Emergency and Remedial Response, Contract Laboratory Program, U.S. Environmental Protection Agency; 2010.
7. Guide for quality control charts; ANSI/ASQC B1-1996. Washington DC: American National Standards Institute, American Society for Quality Control; 1996.
8. Wise SA, Fair DC. Chapter 15. In: Innovative control charting: practical SPC solutions for today’s manufacturing environment. Milwaukee (WI): American Society for Quality; 1998.

U.S. Environmental Protection Agency. Quality assurance/quality control guidance for removal activities, sampling QA/QC plan and data validation procedures, interim final; EPA-540/G-90/004. Washington DC; 1990.
Jarvis AM, Siu L. Environmental radioactivity laboratory intercomparison studies program; EPA-600/4-81-004. Las Vegas (NV): U.S. Environmental Protection Agency; 1981.
ASTM D2777-13: Standard practice for determination of precision and bias of applicable test methods of Committee D19 on Water. West Conshohocken (PA): ASTM International; 2013.


CITATION

Standard Methods Committee of the American Public Health Association, American Water Works Association, and Water Environment Federation. 1020 Quality Assurance. In: Standard Methods for the Examination of Water and Wastewater. Lipps WC, Baxter TE, Braun-Howland E, editors. Washington DC: APHA Press.

DOI: 10.2105/SMWW.2882.005
