An uncertain world

Richard Puttock and Michael Dinsdale of consulting engineers Peter Brett Associates stress the importance of accurate testing and warn it is difficult to achieve in practice


Chemistry is a black art – a subject we didn’t understand at school and one we never intended to revisit. But chemistry has re-entered our world in the form of contaminated land and our inattention in class has come back to bite us.

And it gets worse! Chemistry is no longer just a side issue – on many developments, not only do we not know the answers, we’re often unsure of the right questions to ask.

So where can we turn for help? In the past it would have been our laboratory. But over the last 15 years the testing market has been completely rationalised. Labs are bigger and more automated, offering cost-effective analysis but less support. Intense competition means that margins are so tight there is little room for added value services. Testing is now a numbers game.

Laboratories must adopt operating practices that enable them to make a profit under conditions of intense price competition. Their choices fundamentally affect the quality and reliability of the data they produce. We don't know what questions to ask, and many labs are less than forthcoming about the limitations of their data. So we work together in blissful ignorance, even though our interaction has a critical influence on the quality of the data they produce and we use.

Accreditation schemes such as UKAS, compliance schemes and the Environment Agency's MCERTS scheme are designed to address quality issues in testing. However, all have limitations that are not readily apparent and are certainly not advertised to a largely ignorant consumer.

The problem is that you can’t tell by looking whether data is reliable or not. Because buyers of chemistry are largely ignorant of it, and the product does not readily reveal its quality, the key differentiator becomes price. While there is an industry bottom line, we should not be naïve enough to believe that this guarantees a right answer. So what are the questions we should ask?

Basis, basis, basis

It is a cold, wet day. The wind is making life difficult and you're worried about getting caught in the traffic. You shovel a couple of kilograms of rubble into the bag and leave it by the gate for the lab to pick up. A fortnight later the lab (UKAS-accredited as the contract specified) reports back and you're relieved to see the thiocyanate content is 24.7 mg/kg, just below your limit of 25 mg/kg. In the clear, you can sign the site off – or can you?

  • How was the sample prepared, and by whom?
  • What is the precision and bias of the method used?
  • On what basis are method precision and bias measured?
  • On what basis is the data reported?
  • On what basis is your acceptance criterion calculated?

Sample preparation

The importance of sample preparation cannot be emphasised enough. If it isn't right, all that comes after is wrong. Unfortunately, good preparation is expensive, labour-intensive and repetitive – and often neglected. It is often the least-qualified staff who carry out the most important function.

Accreditation schemes accredit results, and sample preparation does not itself produce a result. It is therefore debatable whether preparation falls within the scope of accreditation. Clients may thus be using a UKAS-accredited laboratory while the single most important operation sits outside accreditation and is carried out by the company's least-qualified personnel.

The quality control con

Laboratory quality control focuses on the instrumental side of analysis. Quality control data is usually generated after samples have been prepared for analysis. Prepared QC samples or certified reference materials are finely ground, dry, inherently homogeneous materials. Real samples are sun-drenched, windswept, dirty, heterogeneous lumps.

The QC data for samples that are normally analysed wet may in fact be determined on dry, ground reference soils spiked immediately before analysis. This ensures excellent QC data but bears little relation to the true bias and recovery from a mixed, wet, contaminated soil.

Precision, bias, repeatability and uncertainty

Each measurement a lab makes is subject to any number of errors. Good laboratories minimise the impact of errors through sound methodology and quality control. However, eliminating uncertainty is impossible, and a knowledge of uncertainty could be critical to remediation. For example, if the clean-up criterion on a scheme is 2,500 mg/kg of mineral oil and the sample shows a concentration of 2,000 mg/kg, you might breathe a sigh of relief. But when the method's precision is ±100%, you might have to re-appraise a hasty sign-off.
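
As a minimal sketch of the arithmetic, the snippet below treats the quoted precision as a simple symmetric band around the result (a real uncertainty budget is more involved). It applies that band to the mineral oil example above, and to the earlier thiocyanate result using an assumed ±20% precision, chosen purely for illustration:

    def verdict(result_mg_kg, criterion_mg_kg, rel_precision):
        """Classify a result against a clean-up criterion once the method's
        relative precision (a fraction, e.g. 1.0 for +/-100%) is applied."""
        lower = result_mg_kg * (1.0 - rel_precision)
        upper = result_mg_kg * (1.0 + rel_precision)
        if upper <= criterion_mg_kg:
            return f"compliant: even the upper bound ({upper:.0f} mg/kg) is below the criterion"
        if lower > criterion_mg_kg:
            return f"non-compliant: even the lower bound ({lower:.0f} mg/kg) exceeds the criterion"
        return f"inconclusive: the true value could lie anywhere from {lower:.0f} to {upper:.0f} mg/kg"

    # Mineral oil: 2,000 mg/kg against a 2,500 mg/kg criterion, +/-100% precision.
    print(verdict(2000, 2500, 1.0))   # inconclusive: 0 to 4000 mg/kg
    # Thiocyanate: 24.7 mg/kg against a 25 mg/kg limit, assumed +/-20% precision.
    print(verdict(24.7, 25, 0.2))     # inconclusive: 20 to 30 mg/kg

Neither result, taken on its own, actually demonstrates compliance.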

Bias is, in simple terms, a measure of how much you get out compared with what you originally put in. For example, a lab quotes a UKAS-accredited method recovery for DRO as 95%. Fine – a 95% recovery is great, and the method is UKAS-accredited. However, the recovery is quoted on a reference sample that has been dried and finely ground, so it does not account for volatiles lost during drying and grinding. More positively, the Environment Agency has proposed that precision and bias data should accompany all results under the MCERTS scheme.
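
A short sketch of how the basis of the recovery figure feeds through to a corrected result; the 1,900 mg/kg reported value and the 60% field recovery are hypothetical numbers chosen for illustration:

    def correct_for_recovery(reported_mg_kg, recovery):
        """Scale a reported concentration up to allow for incomplete recovery
        (recovery expressed as a fraction, e.g. 0.95 for 95%)."""
        return reported_mg_kg / recovery

    reported = 1900.0  # hypothetical reported DRO result, mg/kg

    # Corrected using the 95% recovery quoted on a dried, finely ground reference...
    print(correct_for_recovery(reported, 0.95))  # 2000 mg/kg
    # ...and using an assumed 60% recovery from a wet, lumpy field sample once
    # volatile losses during drying and grinding are allowed for.
    print(correct_for_recovery(reported, 0.60))  # ~3167 mg/kg

The quoted figure and the realistic figure can easily sit either side of an acceptance criterion.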

How is data actually reported?

Understanding the basis for reporting is critical. Some samples are analysed wet, some dry, some with stones removed, some without. Data on the same sample may be reported on a different basis. Do you know on what basis samples are analysed and reported? Do you know on what basis the acceptance criteria used are generated?
The example below illustrates this point, showing the range of total mercury values possible depending upon how the data is expressed or how the lab chooses to prepare the sample.

A 100g sample of contaminated clay is submitted for total mercury analysis. It contains 500µg of mercury and is composed of a mixture of stones, water and fines; it is assumed that all the mercury is present in the fines. The acceptance criteria are the CLEA Soil Guideline Values of 8 mg/kg mercury for residential use with plants and 15 mg/kg for residential use without plants. The example illustrates a huge variation in 'right' answers that spans the selected acceptance criteria. It also reveals that the same – or very similar – answers can be obtained using completely different assumptions: a whole dry basis being very similar to a <10mm wet basis in this example. What is more worrying is that the fines dry result (arguably the most common way of determining mercury in soil) is over three times higher than the result expressed on the whole sample (arguably the true result).
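
The arithmetic can be sketched as below. The original table of fractions is not reproduced here, so the stone, water and fines masses are assumed values, chosen only to be consistent with the behaviour described above:

    MERCURY_UG = 500.0  # total mercury in the sample (from the text)
    STONES_G = 35.0     # assumed: stones > 10 mm, dry and mercury-free
    WATER_G = 35.0      # assumed: moisture
    FINES_G = 30.0      # assumed: dry fines, holding all the mercury

    bases = {
        "whole, wet": STONES_G + WATER_G + FINES_G,  # 100 g as received
        "whole, dry": STONES_G + FINES_G,            # moisture removed
        "<10 mm, wet": WATER_G + FINES_G,            # stones removed
        "fines, dry": FINES_G,                       # stones and moisture removed
    }

    for basis, mass_g in bases.items():
        mg_kg = MERCURY_UG / mass_g  # ug/g is numerically equal to mg/kg
        print(f"{basis:12}: {mg_kg:5.1f} mg/kg "
              f"(8 mg/kg SGV {'exceeded' if mg_kg > 8 else 'met'}, "
              f"15 mg/kg SGV {'exceeded' if mg_kg > 15 else 'met'})")

On these assumed fractions the same soil reports anywhere from 5.0 mg/kg (whole, wet) to 16.7 mg/kg (fines, dry): below both guideline values on one basis and above both on another.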

Sample homogeneity

Reliable data depends upon the sample from which it was extracted, and it can be difficult to take representative samples from very mixed fill. This means that the apparent precision of lab data can be very misleading. In reality a result of 645.37 mg/kg lead (as Pb) doesn't mean that the horizon sampled contains a concentration of 645.37 mg/kg lead. The problem is we don't know what it means, because the variability of the sampled horizon hasn't been estimated and the limitations of the techniques the lab uses to prepare, extract, analyse and then correct the raw data are unknown. Those used to dealing with relative certainties would be horrified to learn that the true precision of chemical data is very poor.
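
One way to put a number on that variability is to analyse field duplicates from the same horizon and look at their relative spread. The sketch below uses invented duplicate results, purely for illustration:

    from statistics import mean, stdev

    # Hypothetical field-duplicate results (mg/kg lead) from one horizon of mixed fill.
    duplicates = [645.37, 412.0, 998.0, 530.0, 760.0]

    m, s = mean(duplicates), stdev(duplicates)
    print(f"mean {m:.0f} mg/kg, standard deviation {s:.0f} mg/kg, RSD {100 * s / m:.0f}%")

With a relative spread in the region of 30%, the two decimal places on any single result are meaningless; 650 mg/kg says as much as 645.37 mg/kg.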

The key questions

There is no doubt that the reliability of analytical data is limited. There is also no doubt that many laboratory clients don't realise this. This is not because the labs are doing poor quality work. Rather it is a combination of the uncertainty inherent in sampling and analysis, coupled with the limitations of a price-driven market. Factor in a lack of understanding on both sides of the effect (or even existence) of such limitations, and it is easy to see how people can find themselves skating on thin ice without realising it.

The solution is a series of questions that should be asked when assessing methods, labs and data.

  • Labs broadly use the same analytical equipment. What allows some laboratories to be a lot cheaper than others?
  • What are the limitations of the selected analytical method? There are always limitations. Do they matter in this case?
  • On what basis is the data reported? Does it match the basis on which acceptance criteria are calculated?
  • Is the laboratory quality control data realistic or has it been generated in ideal conditions using ideal samples which are unlikely to represent site conditions?

If a reasonable attempt at answering these questions can be made, you will be a long way towards understanding how the data has been generated, will be more confident when interpreting it, and will be better able to take account of the uncertainties involved.
If you can't immediately answer these questions, then the basis on which data is being interpreted, computer models run, acceptance criteria applied and sites remediated is unknown. It's as fundamental as that.

