What Is It, and How Can It Be Used in Your Facility or Lab?
Method validation can be defined as testing a process or procedure to ensure it works as intended. The validation follows a set of standard, reproducible steps and generates a data set that quantitatively supports its findings. Method validation is typically done when importing a previously described protocol into your lab or facility, when you have developed a novel method, or when you are scaling up a procedure to handle a larger amount of material. Results from method validation can be used to judge the quality, reliability, and consistency of analytical results; it is an integral part of any good analytical practice.
The process of method validation serves two main purposes. First, it’s a standardized way to make sure that the procedure or test being done actually does what it’s intended to do. Second, it establishes the quantitative performance limits of the procedure, such as precision, accuracy, standard error, and sensitivity.
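As a rough sketch of what those figures of merit look like in practice (this example is not from the article; the sample values and the 10.0 mg/L reference are hypothetical), they can be estimated from replicate measurements of a reference sample:

```python
import statistics

# Hypothetical replicate measurements of a reference sample whose
# certified "true" concentration is 10.0 mg/L (all values invented).
replicates = [9.8, 10.1, 9.9, 10.2, 10.0, 9.7]
true_value = 10.0

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)          # precision: random scatter of replicates
cv_percent = 100 * sd / mean               # precision expressed as a relative figure
bias = mean - true_value                   # accuracy: systematic offset from truth
sem = sd / len(replicates) ** 0.5          # standard error of the mean

print(f"mean = {mean:.3f}, SD = {sd:.3f}, CV = {cv_percent:.1f}%")
print(f"bias = {bias:+.3f}, SEM = {sem:.3f}")
```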
Figure 1. This image of a bullet-hole distribution on a standard target offers an intuitive way to visualize the concepts of Accuracy and Precision.
Any standard operating procedures (SOPs) in a facility or laboratory need to be validated before their introduction into routine use. An SOP may also need to be re-evaluated whenever the conditions under which the method was originally validated change. This includes the purchase of a new instrument, the use of new solvents or materials, and, more generally, any change to the method that falls outside its original scope.
The American Association for Clinical Chemistry (AACC) identifies two major steps in the process of method validation. The first step is defining the goals of your method. Accepting that all lab measurements contain some experimental error, decide beforehand, based on working knowledge of your products and your facility, what acceptable performance looks like for precision, accuracy, sensitivity, and analytical range of measurement.
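One way to make those goals concrete (a minimal sketch, not the AACC’s prescription; all thresholds below are invented for illustration) is to record them as predefined acceptance criteria and check measured performance against them:

```python
# Hypothetical acceptance criteria, fixed before validation begins
# (the thresholds and units are invented for illustration).
criteria = {
    "cv_percent_max": 5.0,   # maximum allowed coefficient of variation (%)
    "bias_max_abs": 0.3,     # maximum allowed absolute bias (mg/L)
    "range_low": 1.0,        # low end of the analytical range (mg/L)
    "range_high": 50.0,      # high end of the analytical range (mg/L)
}

def meets_goals(cv_percent: float, bias: float) -> bool:
    """Return True if measured performance satisfies the predefined goals."""
    return (cv_percent <= criteria["cv_percent_max"]
            and abs(bias) <= criteria["bias_max_abs"])

print(meets_goals(cv_percent=2.1, bias=-0.05))  # True: within the stated goals
```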
Figure 2. A more typical statistical representation of the difference between Accuracy and Precision. Sensitivity describes how small a difference the method is able to detect between measurements.
The second step the AACC describes is error assessment. Whenever we take a measurement in real life, there is some error associated with it, and that error can be described in statistical terms such as variance or standard deviation. The purpose of this second step is to figure out how much error affects the measurements, and what kind of error it is. For example, some error in precision is expected; no two measurements will be exactly the same. But errors in accuracy are more serious because they can indicate systematic problems. Acceptable error levels are usually assessed by consulting the scientific literature or professionals in the field. Identifying the error type is an important step in deciding whether the method needs to be re-evaluated and perhaps revised.
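To illustrate how random error (imprecision) can be separated from systematic error (bias), one common approach, shown here as a sketch with hypothetical data, is a one-sample t-test of the replicate mean against a known reference value:

```python
import math
import statistics

# Hypothetical replicates of a reference sample with a true value of 10.0 mg/L.
replicates = [9.5, 9.6, 9.4, 9.7, 9.5, 9.6]
true_value = 10.0

n = len(replicates)
mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)                    # random error (imprecision)
t_stat = (mean - true_value) / (sd / math.sqrt(n))   # tests for systematic bias

# |t| far beyond ~2.57 (two-tailed critical value, df = 5, alpha = 0.05)
# suggests a real systematic offset rather than random scatter alone.
print(f"SD = {sd:.3f} (random error), t = {t_stat:.2f} vs t_crit ≈ 2.57")
```

Here the scatter (SD) is small, but the mean sits well below the reference value, so the test flags a systematic problem that random imprecision alone can’t explain.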
You always test drive a car before you buy it. Buying in to a new SOP is no different. No matter how well regarded or impressive the process seems, it should always be validated, allowing the technicians who use the method to confidently say, “This process works”. It’s how you ensure the delivery of high-quality products at all times.
Image Credit: Compliance4All