Assuring Accuracy of GC Results
Direct Verification within Methods
Accuracy and reliability of analytical results are often a problem. At best, errors are merely embarrassing. More often, however, analytical results determine whether a product's price is reduced or the product is declared adulterated; water supplies may be closed. Errors may thus have severe financial and/or legal consequences. No surprise, then, that the boss asks twice about the reliability of the results before acting on them. All of us analysts know (luckily, more often than the boss) how easily a result may turn out wrong. The problem has, of course, been recognized in most laboratories, and recently a whole avalanche of measures has been taken to remedy the situation. Some of these efforts have substantially improved matters. For many types of errors, however, I am skeptical: they will hardly be eliminated by the general schemes that are the current trend. Their detection presupposes more work in the lab, improving the processes and techniques involved. Elements should be added to methods that enable immediate recognition of possible deviations for each analysis. Such direct verification will render results reliable.
In the past: standard deviations
In earlier times, results were primarily checked by repeatability. Statistically minded people wanted analyses to be repeated many times so they could calculate the probability that the results were correct. It was assumed that the values obtained would follow a Gaussian distribution around the true result, i.e. that the errors would be primarily random.
The more practical rule said that a result is OK if the same number is obtained three times. Old hands refined the rule: if the first of three results deviated excessively, they would make a fourth attempt, and if this last result fitted results no. 2 and 3, the first was disregarded, because the first determination tends to be wrong anyway.
This is all nonsense, of course! Totally wrong results can also be reproducible. Reproducibility quantitates random deviations, but not systematic ones; it describes the minimum guaranteed uncertainty rather than the accuracy. If results are poorly reproducible, we should conclude that there is something fishy about them, but we cannot reverse the argument and derive accuracy from a low standard deviation. Most of the severe errors are, in fact, due to systematic deviations, i.e. to one of the many traps in an analytical procedure: glassware was not cleaned properly, a batch of dichloromethane contained too much hydrochloric acid, injection desorbed material from a corner of the system, the previous blank test was not checked. The experienced analyst knows dozens of such stories and probably performed repeatability tests to earn his salary at the end of the month rather than because he believed in their usefulness.
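To make this concrete, here is a minimal numeric sketch (all values invented purely for illustration): a replicate series with an excellent relative standard deviation that is nevertheless far from the true value, as an undetected systematic loss would produce.

```python
# Made-up numbers: five replicates, tightly reproducible but all ~20% low,
# as an undetected systematic loss (e.g. incomplete extraction) would give.
from statistics import mean, stdev

true_value = 10.0                       # hypothetical true value, mg/L
replicates = [8.1, 8.0, 8.2, 7.9, 8.1]  # nicely repeatable results

avg = mean(replicates)
rsd = 100 * stdev(replicates) / avg            # relative standard deviation
bias = 100 * (avg - true_value) / true_value   # systematic deviation

print(f"mean = {avg:.2f} mg/L, RSD = {rsd:.1f}%")  # RSD ~1.4%: looks fine
print(f"bias = {bias:.1f}%")                       # ~-19%: still wrong
```

The repeatability test would happily pass this series; only a check aimed at the systematic step would catch it.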
Method validation
A more modern trend is to rely on "method validation" and the "ruggedness" of a method. A method should be studied and described in such detail that errors are practically ruled out. It is then tested by other laboratories in the hope that as-yet-undetected problems will be recognized.
Method validation is certainly an important step ahead, but I still see considerable room for improvement. Often directed by management, it tends to involve exceedingly complicated procedures, committees, meetings, complex statistics, and, of course, much paper and computer work. Commonly, none of these managers has executed the method often enough to become aware of the really critical steps. The resulting methods tend to be detailed on how to weigh the sample and what glassware to use, whereas the more delicate steps, such as the separation of phases in an extraction or the injection into the GC, comprise just a few lines, if they are described at all.
Such method validation has at least one important advantage: everyone is safe, because imperfections are sanctioned by a recognized scientific body. Nobody loses face if results are wrong, nor can anybody be prosecuted for "his" error. The weaknesses of a method are, in a way, socialized.
Certification
The most recent achievement is a highly complex, intellectually convincing construction called "certification." Again, the efforts brought some improvements, but reality does not always look as convincing. The lab people received a heavy load of extra work: the balance must be checked every so often, and forms must be filled in triplicate for all the managers, certifying that the lab is in good shape. Chemicals must be delivered with extensive paperwork to make sure the substance in the bottle is what the certificate says. Many methods, chemicals, and instruments are eliminated because they no longer "comply" with one of the many "standards." Unreasonable constraints and complications have demotivated many lab people. A number of labs have even taken steps backwards, as some analyses are no longer possible: awkward methods are kept in use because modifications have become exceedingly complicated, and workers grow careless because they lose interest and no longer feel responsible.
"Certification" was again imposed from outside the lab and seems like a somewhat helpless attempt to solve problems by intellectual force and general systems. It neglects the serious problems due to those particularities of techniques and samples that are left outside the "certified" area. Of course, there might have been a poorly calibrated balance somewhere, but its contribution to analytical problems was negligible.
Errors to be fought
Purposeful quality assurance is primarily work in the lab: tough but unspectacular fighting with problems. It has little to do with brilliant concepts, and there are no sweeping solutions; a given sample may simply require a longer syringe needle or some packing in the liner for splitless injection. The following four types of problems may be distinguished.
Systematic errors
Random errors are easily recognized by reproducibility tests. Detection of systematic errors is more difficult, because the analyst must devise special checking procedures, which in turn presupposes a lot of knowledge and experience. Systematic errors occur, for example, when a column shows varying adsorptivity, or when, in split or splitless injection, the system responds differently to the calibration solution than to the (maybe "dirty") sample.
The extraordinary sample
The strategy of method validation assumes that every sample analyzed corresponds to the sample(s) used for testing the method. In reality, however, maybe one out of twenty samples differs in a detail nobody thought of. Derivatization may, for instance, work beautifully under "normal" conditions, but an exceptional sample may contain a by-product hindering the reaction. Validation of a method simply cannot anticipate all possible extraordinary samples.
Incidental errors
In routine work, every so often the "analytical devil" catches a victim. The internal standard may be partially degraded because the solvent contained more peroxides than usual (concentrations at the ppb level may be sufficient in dilute solutions), the air is more humid during the packing of a cleanup cartridge, or, during injection, a minute gas bubble causes more sample to be eluted from the syringe needle than normal. Even the most rigorous "certification" procedure is immune to none of this.
Analytical mysteries
Office managers may not accept it, but practical experience shows that occasionally a result is wrong and no explanation can be found. I do not deny the principle that everything has a cause, but since investigating every such case is impractical, the matter may remain a mystery.
Methods controlling each result: direct verification
No doubt, methods must be checked carefully before being applied, but they are unlikely to become foolproof. Knowing that, more should be done to verify each single result, or at least each group of results. The description of a method should include a list of the potential problems and explain how the resulting deviations can be specifically detected. The latter requires that methods include verification elements. It has become common practice to reanalyze reference materials or spiked samples; this checks the system and the method as such, but it does not prove that every single sample has been analyzed correctly. The most conclusive verification is obtained from control elements built into each analysis, such that the chromatogram not only provides the result, but also enables checks of, e.g., the extraction efficiency, the degradation of a component, or the yield of derivatization. Such "direct" verification enables the detection of samples behaving "abnormally" or of any action of the "analytical devil." The verification must be tailored to each method and may, for instance, include the following elements:
- To check the yield of a derivatization, a second internal standard not undergoing derivatization is added to each sample.
- A chemically stable component is added to the internal standard solution in order to check for degradation of the latter.
- Degradation of labile components is checked by addition of two internal standards, one being stable, the other of a lability similar to that of the component.
- Adsorption is monitored by adding an inert and an adsorptive standard.
- Volatile and high-boiling internal standards are added in order to monitor discrimination during GC analysis.
- Internal standards are added before and after a critical extraction or preseparation, checking for losses.
- Preseparation is controlled by adding components that should be removed and others that must remain within the window of the compound(s) of interest.
- Results are calculated in different ways and compared, e.g. using an internal and an external standard procedure.
Chromatograms obtained by methods designed for direct verification of results may well contain three or more internal standards. This renders the evaluation of results more complicated, but computer reports may include all the necessary calculations and even a concluding statement on whether or not the analysis went OK, as sketched below.
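As an illustration only, the following Python sketch shows what such an automated report could look like; all peak areas, standard names, and acceptance thresholds are hypothetical and would have to be established during method development. It evaluates two of the verification elements listed above: the yield of derivatization (from a derivatized and an underivatized internal standard) and the recovery of an extraction step (from standards added before and after it).

```python
# A minimal sketch (hypothetical names, areas, and thresholds) of such a
# report: peak areas of the analyte and of several verification standards
# are turned into yield/recovery checks and a concluding verdict.

# Hypothetical peak areas from one chromatogram (arbitrary units).
areas = {
    "analyte": 5120,
    "is_derivatized": 9800,      # internal standard undergoing derivatization
    "is_underivatized": 10050,   # second IS not undergoing derivatization
    "is_pre_extraction": 4700,   # IS added before the extraction step
    "is_post_extraction": 5000,  # IS added after the extraction step
}

# Acceptance criteria, assumed to be fixed during method development.
CAL_DERIV_RATIO = 1.00  # is_derivatized / is_underivatized at full yield
MIN_DERIV_YIELD = 0.90
MIN_RECOVERY = 0.85

checks = {
    "derivatization yield":
        areas["is_derivatized"] / areas["is_underivatized"] / CAL_DERIV_RATIO,
    "extraction recovery":
        areas["is_pre_extraction"] / areas["is_post_extraction"],
}

ok = (checks["derivatization yield"] >= MIN_DERIV_YIELD
      and checks["extraction recovery"] >= MIN_RECOVERY)

for name, value in checks.items():
    print(f"{name}: {value:.2f}")
print("analysis OK" if ok else "analysis SUSPECT - check the flagged step")
```

A real report would, in addition, perform the quantitation itself, e.g. calculated both via the internal and via an external standard, and flag any disagreement between the two.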
Less repetition of analyses
Direct verification renders most duplicate analyses unnecessary. If a result is unexpected, the critical steps of the procedure can be checked, presumably confirming the result in most instances. If a result is wrong, the analyst knows where the deviation occurred and what to improve. This may save days of dull, repetitious work and groping in the dark.
Real quality assurance
It must be admitted that the accuracy and reliability of analytical results are often a problem. Great efforts have been made to improve on this, but all too often the proposed "quality assurance" did not get beyond marginal aspects, such as measuring the temperatures of all the heated parts of a GC instrument, which is mostly a waste of time and distracts from the true problems. Efficient quality assurance must target the really hot spots of each method. Potential sources of deviations must be determined and, as far as possible, checked by control elements built into the analysis of the sample. If such direct verification indicates that the results are correct, we know that the detector was heated, that the internal standard solution has not degraded, and that all the other steps and items involved were OK.
Originally published in the Restek Advantage 1995, Volume 1