Quality Assurance (QA):
Quality Assurance (QA) programs are required of every laboratory under the Clinical Laboratory Improvement Amendments of 1988 (CLIA '88). They are the external standards set to ensure the quality of the laboratory results that are reported. Regular inspections verify that the laboratory is adhering to these standards, remains in compliance with the regulations, and keeps its certification; this is mandated for good-quality patient care. In addition to standards and inspections, laboratory personnel must participate in proficiency testing (for example, CAP and/or API), and all of these activities are documented as an active part of the life and organization of the laboratory. Quality of service to the patient and reliable results should be the top goal of every person working in the laboratory, and all clinical laboratory personnel should work together as a team to achieve this goal through good communication and through internal programs and training designed to meet these external mandates. QA includes both nonanalytical components (preanalytical and postanalytical) and analytical components, including quantitative data, or quality control.
Nonanalytical Factors (Preanalytical and Postanalytical):
- Qualified personnel (certified, trained, ongoing competency and proficiency testing, annual safety training)
- External certification (ASCP, AMT, etc.) and certification maintenance (dues, CEUs, documentation)
- New employee orientation
- Hospital or clinic orientation
- Departmental orientation
- Safety orientation
- Established laboratory policies and standard operating procedures (SOPs)
- Should be updated regularly and signed by personnel
- Knowledge of safety data sheets (SDS, formerly MSDS) and their location
- Proper procedures for specimen collection, storage and labeling
- Strict adherence is critical to accuracy of test results
- Preanalytical errors are the most common cause of laboratory errors
- Correct storage temperatures
- Room temp.
- Refrigerated
- Frozen
- Kept out of direct light
- In an ice slurry
- Preventive maintenance of laboratory equipment
- Cleaned
- Maintained
- Checked for accuracy
- Calibrated
- Monitored
- Proper methodology, understanding and technique
- Always check the procedure
- Established Quality Control and Quality Assessment Procedures and Techniques/Routines
- Each procedure is based on QC
- Normal and abnormal controls
- Low and high controls
- Positive and negative controls
- Accuracy in Test Result Reporting and Verification of Results
- Established critical values
- Established reference ranges
- Delta check system
- Excellent communication
Analytical Factors:
- Basic statistics in quality control
- Measurements
- Accuracy
- How close the test result is to the true value or standard
- Standards with known values are used to check for this
- Freedom from error
- Calibration
- Comparison of the instrument measurement or reading to the known physical constant
- Quality Control
- Process that monitors accuracy
- Process that monitors reproducibility (precision)
- Uses control specimens
- Control
- QC material that is similar in composition to a patient sample
- Value is known
- Tested exactly as patient specimens are tested daily or along with the unknown patient specimen (sample)
- The best measurement of precision
- At least 2 levels
- May be normal or abnormal
- May be low or high
- Precision
- How close the test results are to one another or how consistent repeated analysis of the same QC materials are when testing is performed
- Reproducibility of test results
- Blind Sampling (5x repeats)
- AMRs (20x repeats)
- Freedom from variability
- Standards
- Extremely purified substances of a known composition
- Best way to measure accuracy
- Used to establish reference points in the construction of graphs such as calibration curves or Levey-Jennings charts
- Proficiency Testing
- Means by which QC between different laboratories is compared
- Results are graded
- For moderate-complexity and high-complexity testing
- Verifies accuracy and reliability of test results
- Occurs at least twice a year but may be ongoing
- Troubleshooting equipment
- Correct documentation
CLIA QC Requirements:
- Quantitative Testing
- At least 2 levels of QC at different concentrations must be run daily (every 24 hours) (example: positive and negative, low and high), except:
- Electrolytes (every 8 hours)
- Haptoglobin (every 8 hours)
- QC is also performed:
- After an ICT sample diluent change
- After an ICT reference solution change or addition
- After a calibration lot change
- After a QC lot change (perform QC on the new lot)
- Qualitative Testing:
- Positive and negative controls run at least daily
- Some are run once every 8 hours
- Semi-Quantitative Testing:
- A control material of graded or titered reactivity is used
- Run at least once daily
- QC is run with each new lot
- QC is run with each new shipment of reagent (both positive and negative controls)
- Blood Gases: 1 control (combination of low and high) is run every 8 hours of patient testing
- One control sample is run whenever a patient sample is tested
Calibration and Calibrators:
To calibrate an instrument means to compare its readings against a set standard to make sure it is producing accurate and precise test results. Calibration also allows you to compare current results with historical data to see patterns and to determine whether any ranges need adjustment. It ensures that data are valid and reliable, which is important in reducing bias.
Calibration is a critical process that serves as the mediator between the analytical signal and the concentration of an analyte. It uses a series of solutions (calibrators) containing specific, known concentrations of the analyte, and the analytical signal produced by the measuring device (a potentiometer or optical reader) is observed as each calibrator is read by the instrument. The instrument's reading at every concentration level is plotted as a calibration curve. This is useful for noticing a trend, or pattern, reflecting changes or problems with analytes, QC, or chemistry equipment. The calibration curve may be linear (a straight line) or nonlinear (curved, for example logarithmic or exponential).
Accuracy is how closely an instrument's data or measurements are to the set standard or true value.
Precision is when measurements are repeated over and over again and yield the same results.
Reference standards or ranges are based on accuracy and precision, or known values.
- Linear: the signal rises or falls in a linear pattern as the concentration of analyte increases or decreases
- Nonlinear/Curved: the signal rises or falls in a nonlinear pattern as the analyte rises or falls
- Interpolation: estimating values between the plotted points/readings/measurements that connect on the calibration curve
- Shows the ranges of the expected signals for the range of concentrations of a particular analyte between the lowest and highest calibrator
- Used for comparison to the calibration curve and analyte concentration
- The process differs between instruments and calibrator materials and method used
- The laboratory or the manufacturer develops a test method to determine the AMR (analytic measurement range)
- This defines the lowest and highest measurable quantities of calibrator material/solution
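The interpolation idea above can be sketched as a lookup between the two calibrator points that bracket a measured signal. This is a minimal sketch; the calibrator concentrations and signals below are hypothetical, and real instruments follow the manufacturer's calibration method.

```python
# Linear interpolation between calibrator points to estimate a
# concentration from an instrument signal (hypothetical values).

# Calibrators: (concentration, measured signal).
calibrators = [(0.0, 0.02), (50.0, 0.31), (100.0, 0.60), (200.0, 1.18)]

def interpolate_concentration(signal):
    """Estimate concentration for a signal inside the AMR."""
    points = sorted(calibrators, key=lambda p: p[1])
    lo_sig, hi_sig = points[0][1], points[-1][1]
    if not (lo_sig <= signal <= hi_sig):
        raise ValueError("signal outside the analytic measurement range")
    for (c1, s1), (c2, s2) in zip(points, points[1:]):
        if s1 <= signal <= s2:
            # Fractional position between the two bracketing calibrators.
            frac = (signal - s1) / (s2 - s1)
            return c1 + frac * (c2 - c1)

# A signal roughly halfway between the 50 and 100 calibrators.
print(interpolate_concentration(0.455))
```

Signals below the lowest or above the highest calibrator fall outside the AMR, which is why the sketch raises an error rather than extrapolating.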
Calibration Curve:
Clinical and analytical chemistry utilize calibration curves to determine the concentration of a substance in an unknown or patient sample by comparing it to a set of standard samples containing known concentrations, checking that the ranges and concentrations obtained are consistent, precise, and accurate. The curve is also used in troubleshooting instrument calibration failures or problems. The data are plotted on a line graph showing how the instrument is responding: as the concentration of the analyte changes, the chemical signal changes. Each instrument has a working range within which the concentrations of the standards must fall for the calibration to pass and remain active. Analysis produces a series of measurements, and when the plot of instrument response versus concentration forms a straight line, the response is said to show linearity.
LOD=limit of detection
LOQ=limit of quantification
LOL=limit of linearity
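As a rough illustration of how a calibration curve relates signal to concentration, the following sketch fits a straight line to a set of standards by ordinary least squares and then inverts the line to estimate an unknown sample's concentration. All standard concentrations and signals here are invented for illustration.

```python
# Fitting a linear calibration curve (signal = m * conc + b) to
# standards of known concentration, then back-calculating an unknown
# sample's concentration (hypothetical values).

concs = [0.0, 25.0, 50.0, 100.0, 200.0]    # standard concentrations
signals = [0.01, 0.15, 0.29, 0.58, 1.15]   # instrument responses

# Ordinary least-squares slope and intercept.
n = len(concs)
mean_x = sum(concs) / n
mean_y = sum(signals) / n
m = (sum((x - mean_x) * (y - mean_y) for x, y in zip(concs, signals))
     / sum((x - mean_x) ** 2 for x in concs))
b = mean_y - m * mean_x

def concentration(signal):
    """Invert the fitted line to estimate concentration from a signal."""
    return (signal - b) / m

print(f"slope={m:.5f}, intercept={b:.5f}")
print(f"unknown at signal 0.45 -> {concentration(0.45):.1f}")
```

Results are only trustworthy between the lowest and highest standard (within the LOD and LOL); extrapolating beyond the fitted range is not valid.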
Quality Control:
Quality Control in the clinical chemistry laboratory is a numerical process based on statistics, designed to observe, identify, decrease, and correct deficiencies in analytical results before patient test results are released, and to monitor that process. This aids in improving the quality of laboratory results, which we want to be as accurate and precise as possible. Fundamentally, quality control is a measurement of precision: how consistently repeated measurements of the same material yield the same results. Quality control materials (liquid reagents) are created on a matrix similar to real patient samples. At least two levels of controls are performed and documented or logged on each day of testing; for some tests this occurs more often and may include more than two levels of controls. Since each laboratory is unique and uses different types of control materials and analyzers, each laboratory has its own established quality control procedures, techniques, and documentation, and must follow the manufacturer's instructions and recommendations.
The QC process is based on Quality Assurance, the procedure and administrative processes that actually set the goals to meet the requirements or standards. It is systematic, it provides comparison with a standard or reference range, it monitors the processes of QC, and it provides feedback that helps prevent errors before they happen.
Levey-Jennings charts are graphs, or visual representations, of quality control data. Points are plotted on a line graph over time so that patterns become visible and ranges can be confirmed as acceptable; if a point falls outside the range, it can be seen and corrected. Horizontal lines are drawn at the mean and at +/- 1 SD, +/- 2 SD, and +/- 3 SD.
- Detects increased random errors
- Detects shifts and trends in calibration
- Distance from the mean is measured in SDs (standard deviations)
- Westgard Rules can be applied to see if it is ok to release results or if they need to be run again
- Based on statistics and statistics methods
- Define performance limits for a specific assay (test)
- Used to detect random errors
- Used to detect systematic errors
- Programmed on automatic analyzers
- Cautiously analyzed to make sure errors are true and not false errors
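As a rough sketch of how two of the common Westgard rules could be checked in software: the 1_3s rule rejects a run when a single control result exceeds 3 SD from the mean, and the 2_2s rule rejects when two consecutive results exceed 2 SD on the same side. The control mean, SD, and QC results below are hypothetical; real analyzers implement these rules internally.

```python
# Minimal sketch of two Westgard rules applied to QC results, given an
# established mean and SD for the control material (hypothetical values).

MEAN, SD = 100.0, 2.0

def rule_1_3s(results):
    """Reject if any single result is beyond +/- 3 SD of the mean."""
    return any(abs(r - MEAN) > 3 * SD for r in results)

def rule_2_2s(results):
    """Reject if two consecutive results exceed 2 SD on the same side."""
    for a, b in zip(results, results[1:]):
        if (a - MEAN) > 2 * SD and (b - MEAN) > 2 * SD:
            return True
        if (MEAN - a) > 2 * SD and (MEAN - b) > 2 * SD:
            return True
    return False

qc_run = [99.8, 101.2, 104.5, 104.8, 100.1]
print("1_3s violated:", rule_1_3s(qc_run))  # no result beyond 94-106
print("2_2s violated:", rule_2_2s(qc_run))  # 104.5 and 104.8 both > 104
```

A 2_2s violation like the one above usually signals a systematic error (a shift), while a lone 1_3s violation more often reflects random error, which is why flagged runs are cautiously analyzed before results are released or rerun.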
Benefits and Functions of the Quality Control Program:
- Serves as a guide to how the equipment is functioning
- Serves as a guide as to how the reagents are performing
- Monitors individual technique
- Confirms testing accuracy and compares results with reference values
- Shows a pattern, such as an increase in the frequency of high and/or low minimally acceptable values (referred to as dispersion)
- Shows a progressive movement of values away from the mean (referred to as a trend)
- Shows a rapid shift or change from the established mean (referred to as a shift)
Other Associated Terms:
The Mean:
In a set of numerical data, the mean is the mathematical average of the sum of the numbers divided by the number of data points or values. For example, if you have 5, 6, 7, 8 and 9, you would add them up (35) and divide that number by 5, which gives you 7.
The Median:
The median is the middle number in a set of numerical data. If the set of numbers is even, the median is the average of the two numbers in the middle. For example, if you have 5, 6, 7, 8 and 9, 7 is the median.
The Mode:
In a set of numerical data, the mode is the most frequent number encountered. For example, if you have 9, 9, 7, 5, 6, 9, 10, then 9 is the mode.
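The worked examples above can be reproduced with Python's built-in statistics module:

```python
# Mean, median, and mode of the worked examples above.
import statistics

data = [5, 6, 7, 8, 9]
print(statistics.mean(data))    # the mean is 7 (35 / 5)
print(statistics.median(data))  # the median is the middle value, 7

mode_data = [9, 9, 7, 5, 6, 9, 10]
print(statistics.mode(mode_data))  # 9 occurs most often
```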
Standard Deviation (SD):
SD is the measurement used to quantify the amount of variation in a set of data (numbers). A low SD means the values cluster closely around the mean (average), which is a good thing; a high SD means the values are spread farther from the mean.
- About 68 out of 100 test results fall within 1 SD of the mean value for an analyte (68%)
- About 95 out of 100 test results fall within 2 SD of the mean value for an analyte (95%)
- About 99.7 out of 100 test results fall within 3 SD of the mean value for an analyte (99.7%)
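A minimal sketch of computing the SD of a QC series and the +/- 2 SD limits that would be drawn on a Levey-Jennings chart; the QC values below are hypothetical.

```python
# Standard deviation of a QC series and its +/- 2 SD limits
# (hypothetical control results).
import statistics

qc_results = [98.2, 101.5, 99.7, 100.3, 102.1, 97.9, 100.8, 99.5]
mean = statistics.mean(qc_results)
sd = statistics.stdev(qc_results)  # sample standard deviation

lower_2sd = mean - 2 * sd
upper_2sd = mean + 2 * sd
print(f"mean={mean:.2f}, SD={sd:.2f}, "
      f"2 SD range={lower_2sd:.2f} to {upper_2sd:.2f}")
```

About 95% of in-control results should fall inside the 2 SD limits, which is why points outside them trigger review under the Westgard rules.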
Standard Deviation Index (SDI):
The SDI expresses how far a laboratory's mean differs from the peer-group mean, in units of the group SD: SDI = (lab mean - group mean) / group SD. An SDI near 0 indicates agreement with the peer group.
Coefficient of Variation (CV):
The CV expresses the SD as a percentage of the mean (CV = SD / mean x 100%), which allows precision to be compared between methods or analytes measured at different magnitudes or in different units.
Bias:
Bias is the systematic difference between a measured result (or the laboratory's mean) and the true or reference value; a consistent bias indicates systematic error.
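These three statistics follow simple formulas: CV = (SD / mean) x 100%, SDI = (lab mean - group mean) / group SD, and bias = measured mean - true value. A small sketch with hypothetical numbers:

```python
# Coefficient of variation, standard deviation index, and bias,
# computed from their standard formulas (hypothetical values).

lab_mean, lab_sd = 102.0, 2.5        # this laboratory's QC statistics
group_mean, group_sd = 100.0, 2.0    # peer-group statistics
true_value = 100.0                   # assigned/reference value

cv = (lab_sd / lab_mean) * 100            # CV as a percentage
sdi = (lab_mean - group_mean) / group_sd  # SDs away from the peer mean
bias = lab_mean - true_value              # systematic difference

print(f"CV={cv:.1f}%  SDI={sdi:.1f}  bias={bias:.1f}")
```

Here the laboratory runs 1 group SD above its peers (SDI = 1.0) with a bias of +2.0 units, a pattern that would prompt investigation for systematic error.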
Risk Assessment:
- Preanalytical
- Analytical
- Postanalytical
- Intended uses and impact
- Components
- Variations
- Data
- Instrument performance
- Manufacturer's instructions and recommendations