
Deciphering the Jargon of the Analytical Lab—Chapter 2

I’m a connoisseur of old generational slang. Like the ‘50s stuff in The Wild One, or any number of ‘70s grindhouse films. The trick is being hip to the jive through a little education. In our last installment, we canvassed terms ululated across analytical labs, specifically specificity, the LOD, and the LOQ. Hopefully, you decided to tune in to Labonics, Chapter 2.

You can be very good at the wrong thing, whether it’s a feat, a test, or something at which you’re consistently not your best. Sometimes you’ll hear people brag about how precise their method or product is, but the analytical chemist knows better. They’ll quickly point out that being merely precise can mushroom into any number of woes. For precision only relates how close data points are to one another, regardless of whether, compared to the true value, they’re just junkers. Scores of 56%, 55%, and 54% might be precise, but this means there’s something remiss, and also that you’re flunkers.

What you want, then, is high accuracy. The dartboards in the corresponding image below make it easy to see: a precise method can be consistently off the mark while still providing a small relative standard deviation, or RSD. An accurate one, however, hugs the true value as closely as the method enables, and conveys information that’s more trustworthy.

Image Credit: Mr. Evans’ Science Website

Like all cosmic truths, there are equations that can be employed to describe one’s accuracy and precision. Armed with these like artillery, the analytical chemist evaluates a method numerically. By averaging the values to get the mean and determining the spread in the measured data (the standard deviation), a quick ratio of the standard deviation to the mean, multiplied by 100, yields the RSD.
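
To make that arithmetic concrete, here’s a minimal sketch in Python, using the hypothetical 56%, 55%, and 54% scores from above (the numbers and names are mine, purely for illustration):

```python
import statistics

# Hypothetical replicate measurements (e.g., percent recovery of an analyte)
measurements = [56.0, 55.0, 54.0]

mean = statistics.mean(measurements)       # the average of the replicates
std_dev = statistics.stdev(measurements)   # sample standard deviation (the spread)
rsd = (std_dev / mean) * 100               # relative standard deviation, in percent

print(f"Mean: {mean:.1f}, SD: {std_dev:.1f}, RSD: {rsd:.1f}%")
# A small RSD (about 1.8% here) shows precision only; it says nothing about accuracy.
```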

As the story went in Chapter 1 regarding a method’s specificity, the trick’s much the same for quantifying the critical metric called accuracy. And while there are several acceptable ways to perform this task, we shall spike the matrix, our molecular sea or soup, with the analyte of interest and quantify the amount we were able to recoup. By first measuring the matrix alone and then the spiked sample, we can distinguish, quantitatively, one from the other. And a ratio of the amount recovered to the amount spiked, multiplied by 100, tells us the percentage of analyte that we were able to recover.
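
A rough sketch of that spike-and-recoup arithmetic, with invented numbers standing in for real measurements, might look like this:

```python
# Hypothetical spike-recovery check (all values invented for illustration)
matrix_result = 0.2    # analyte measured in the unspiked matrix (mg/g)
spiked_result = 9.7    # analyte measured after spiking (mg/g)
spike_added = 10.0     # known amount of analyte added to the matrix (mg/g)

# Percent recovery: the fraction of the spiked analyte we managed to recoup
recovery = (spiked_result - matrix_result) / spike_added * 100
print(f"Percent recovery: {recovery:.1f}%")   # 95.0% with these made-up numbers
```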

Last on our list isn’t a term lightly tossed around the lab from tech to tech without fear, for gauging a method’s robustness, and measuring statistical similarity, is certainly something to revere. Robustness in a method is perhaps a lesson to us all to be adaptable, go with the flow, just chill. The analytical chemist evaluates robustness by making small, deliberate changes to their method, such as solvent composition or flow rate, while keeping everything else still. Should that method stand fast against the experimental changes, showing robustness and flexibility, the analytical chemist can strut, head high, at their method’s validity.

When calculating robustness, another statistic can be employed called the RPD (relative percent difference). In an ideal world, which never exists (except in the proposed multiverse), the best result would be nothing, goose eggs, no difference between method A and method B. But we live in reality.
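
If it helps to see it in numbers, here’s a small, hypothetical sketch of the RPD calculation (values invented for illustration):

```python
# Hypothetical robustness comparison (numbers invented for illustration)
result_a = 14.8    # THC concentration (%) under the nominal conditions
result_b = 15.2    # THC concentration (%) after a small, deliberate change

# Relative percent difference: the gap between the two results,
# scaled by their average and expressed as a percentage
rpd = abs(result_a - result_b) / ((result_a + result_b) / 2) * 100
print(f"RPD: {rpd:.1f}%")   # about 2.7% here; 0% would be the goose-egg ideal
```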

And in reality, there are parameters we set by which we live. The analytical chemist is no different, being human after all, and so they put their data through a numerical sieve. The question needing to be answered is, how far off can a method be? One way to evaluate this is using a control chart, a graph that looks a lot like an EKG (electrocardiogram).

Image Credit: QI Macros

To master the control chart, you’d need a stack of data points, like the chart above shows. Plot each one using axes like date (x) and THC concentration (y), then connect the dots and calculate the average and standard deviation. In the plot above, the average is labeled as the center line, and limits are drawn at +/- 1 to 3 standard deviations (sigma); the +/- 2 sigma lines define the upper and lower warning limits (UWL, LWL), respectively. At this point, the analytical chemist is heralded by the data to take heed and to watch the process meticulously. Should a fated data point trek beyond the warning limits, into the vast frontier past +/- 3 sigma, it means there’s action needed here.
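
And for those who’d rather compute the limits than eyeball them, here’s a loose sketch of the arithmetic behind such a chart, using invented concentration data:

```python
import statistics

# Hypothetical daily THC results (%) for a control sample, in date order
results = [15.1, 14.9, 15.3, 15.0, 14.8, 15.2, 15.6, 14.7, 15.0, 16.1]

center = statistics.mean(results)    # center line of the control chart
sigma = statistics.stdev(results)    # standard deviation of the results

uwl, lwl = center + 2 * sigma, center - 2 * sigma   # warning limits (+/- 2 sigma)
ucl, lcl = center + 3 * sigma, center - 3 * sigma   # action limits (+/- 3 sigma)

for day, value in enumerate(results, start=1):
    if not lcl <= value <= ucl:
        print(f"Day {day}: {value}% is beyond +/- 3 sigma; action needed")
    elif not lwl <= value <= uwl:
        print(f"Day {day}: {value}% is beyond +/- 2 sigma; watch the process closely")
# With these made-up data, only the last point drifts past a warning limit.
```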

I’ll see you at the end of all this, if you’ve patiently traversed this soliloquy. In chapter three of this trilogy, we’ll meet again and talk about some more terms, including things like linearity, ruggedness, and sensitivity.


About the author

Jason S. Lupoi, Ph.D.
