Toxicity testing times
In recent years toxicity has been recognised as a separate measure; it is not feasible to separately analyse or regulate all potential components of discharges for their effects on the environment, and substances may react synergistically. As Chris Whitworth, Applications Engineering Consultants, explains, a measure of overall toxicity is required.
Back in 1972 there was a sign above my lab manager’s desk, proudly testifying to the existence of 4.3 million chemical compounds. Today there are even more, not a few of which are toxic.
Clinical trials determine only the lethal dose, not the onset of toxicity
Discharges to the environment must be managed to minimise adverse effects, but the process of management implies measurement, and thus specificity. Where only a few known substances are present, regulatory limits can be applied specifically. In more complex matrices, however, surrogates have to be used, for example BOD and COD, as measures of “load”.
Until recently the only available tests were modelled on clinical trials – exposing a set of live animals such as Daphnia, shellfish or fish to a diluted sample for periods of typically 48 hours. Trout are a common test animal, but many other fish have been used. Expensive and time consuming, such tests require many animals (control tests have to be carried out on clean waters) and often a licence to experiment on them. Usually quoted as an “LC50” – the concentration of sample needed to kill 50% of the animals in a given time – they determine only the lethal dose, not the onset of toxicity.
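The arithmetic behind an LC50 can be sketched briefly. The figures below are invented for illustration: mortality is recorded at each dilution of the sample, and the 50% point is interpolated between the two bracketing concentrations.

```python
# Sketch of how an LC50 is read off a dilution series (hypothetical data).
# Each pair is (sample concentration as % v/v, fraction of animals dead at 48 h).
results = [(6.25, 0.0), (12.5, 0.1), (25.0, 0.3), (50.0, 0.7), (100.0, 0.95)]

def lc50(results):
    """Linearly interpolate the concentration killing 50% of the animals."""
    for (c_lo, m_lo), (c_hi, m_hi) in zip(results, results[1:]):
        if m_lo <= 0.5 <= m_hi:
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            return c_lo + frac * (c_hi - c_lo)
    return None  # 50% mortality never reached in the series

print(round(lc50(results), 1))  # 37.5 for the data above
```

In practice a log-concentration (probit) fit is usual rather than simple linear interpolation, but the principle is the same: the test yields a single concentration, not a threshold of onset.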
Numerous suppliers are now entering the toxicity testing market, offering a wide range of new products boasting the ability to achieve results more quickly and at lower cost. The realisation has dawned that there are many potential customers, driven by the recognition that environmental toxicity is at least as important as total load, long regulated via BOD.
The Environment Agency is very active in this area, having piloted a Direct Toxicity Assessment scheme to examine how regulation should be introduced. As well as environmental effects on rivers and other watercourses, the Agency has some responsibility to alert abstractors to upstream pollution: the water companies themselves obviously need reassurance that their products will not be toxic to the customer! Thus, companies with an effluent to discharge, regulators and water abstractors all need reassurance that their process stream is fit for purpose; all need to know immediately if toxins are present.
The tests available fall into two broad categories, whose underlying principles of operation differ.
Chemiluminescence tests rely on contaminants affecting the rate of reaction of a reagent that emits light (luminol) with oxidants in the presence of a catalytic enzyme (such as horseradish peroxidase). Antioxidants or enzyme inhibitors such as cyanide, amines, phenols and some metals affect chemical reaction rates and change the light output when compared to a blank sample. This light output can be measured electronically and output as a light level integrated over time.
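The readout from such a test reduces to a simple comparison with the blank. The numbers below are hypothetical, but the calculation is the standard one: toxicity is expressed as the percentage reduction in integrated light output.

```python
# Hypothetical integrated light readings (arbitrary units) from a
# chemiluminescence test: a blank and a sample run in parallel.
blank_light = 12000   # clean-water control
sample_light = 7800   # same reagents plus the sample under test

def inhibition_percent(blank, sample):
    """Toxicity expressed as % reduction in light output versus the blank."""
    return 100.0 * (blank - sample) / blank

print(f"{inhibition_percent(blank_light, sample_light):.0f}% inhibition")  # 35%
```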
Biological tests rely on a measure of an animal’s metabolic activity that is affected by toxins in the water. The currently available tests can be subdivided into three groups, all of which use some form of electronic sensor to measure biological activity:
Bioluminescence tests rely on some single celled organisms’ ability to luminesce or fluoresce as part of their metabolism. Contaminants that disrupt metabolic rate affect output, measured electronically by light detector.
Nitrification inhibition tests work in a similar way. The metabolic activity of micro-organisms is measured, in this case by monitoring ammonia levels via an electrochemical electrode. Toxins disrupt the metabolic process and affect ammonia consumption.
Respirometry uses oxygen consumption as a measure of metabolism, via an electrochemical oxygen electrode or pressure sensor.
Such biological techniques may be applied to free swimming or immobilised cultures, of single or mixed species populations, in reaction vessels or on electrode surfaces.
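Respirometry, for instance, comes down to comparing oxygen uptake rates. A minimal sketch, using invented dissolved-oxygen readings: the slope of DO against time gives the respiration rate, and a toxin shows up as a reduced slope relative to the control.

```python
# Sketch of a respirometry readout (hypothetical numbers): dissolved-oxygen
# readings over time for a control culture and one exposed to the sample.
times = [0, 10, 20, 30, 40]             # minutes
do_control = [8.0, 7.2, 6.4, 5.6, 4.8]  # mg/l, steady respiration
do_sample  = [8.0, 7.6, 7.2, 6.8, 6.4]  # mg/l, inhibited respiration

def uptake_rate(times, do):
    """Least-squares slope of DO vs time (mg/l per minute), sign flipped."""
    n = len(times)
    t_mean = sum(times) / n
    d_mean = sum(do) / n
    num = sum((t - t_mean) * (d - d_mean) for t, d in zip(times, do))
    den = sum((t - t_mean) ** 2 for t in times)
    return -num / den

r_ctrl = uptake_rate(times, do_control)  # 0.08 mg/l per minute
r_samp = uptake_rate(times, do_sample)   # 0.04 mg/l per minute
print(f"respiration inhibited by {100 * (1 - r_samp / r_ctrl):.0f}%")
```

The same inhibition-versus-control logic underlies the bioluminescence and nitrification tests; only the measured quantity changes.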
The level of expertise exercised in carrying out testing, and in interpreting results, will undoubtedly vary widely. At one extreme are expert technicians checking a river or reservoir drinking-water abstraction point during or after a pollution incident, or surveying a watercourse or catchment below a discharge; at the other, users in a factory or on a landfill site testing one or more effluent streams. Users could be working routinely or attempting to trace or manage a particular incident. Certainly many will not be trained scientists.
The tests are conducted by exposing a sample of the process stream to reagents in a controlled environment containing the sensor system. The user has to ensure that the sample chosen is representative of the whole, not changed by storage or handling, and that the test is undertaken to the manufacturer’s specification, which may include precise temperature control. On an exposed riverbank at three in the morning in the middle of January, this may not be easy.
Most of the tests use very small quantities of expensive and fragile reagents; between 20 and 50 microlitres is typical, with specialised equipment required to dispense reproducibly. Preparation of reagents – some of which may have limited shelf lives or require refrigeration – before testing may take some time, and the tests themselves may require many minutes to complete. Indeed, multiple tests may be necessary, or at the very least blanks and positive controls. Considerable expertise and attention to detail are then required to ensure repeatable results. At best this may limit the application of the tests where scientific expertise is not available; at worst it will destroy their credibility entirely.
As stated earlier, the largest historical database of test results is probably based on Daphnia or trout results. Newer tests may not compare directly, and each test will be affected differently by specific pollutants. For example, the chemiluminescence test may be affected more by small, acceptable levels of “natural” metals such as manganese than by high and possibly toxic levels of detergents. A general problem – and an inevitable consequence of measuring metabolic activity or some other surrogate – is that limits of detection are far above regulatory standards, for example where pesticides or herbicides are to be measured. Regulatory limits must be set well below levels where life is affected, and a test based on metabolic response will not be sufficiently sensitive.
Interpretation is also complex. For a sample known to contain only one pollutant, direct interpretation is possible. A typical clean river, however, may contain humic acids, iron, manganese and calcium salts in varying concentrations, any or all of which are likely to interfere with the test. A database of normal values for the chosen test, and of its variability, is therefore a prerequisite – and one that may take a sustained period to compile if it is to allow for seasonal shifts. The sensitivity of the test is reduced by this normal variability, and false alarms may be generated by unusual but natural extremes in the matrix. Conversely, high levels of toxins may not show up behind this natural variation, particularly where the test is relatively insensitive to some pollutants.
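A toy illustration of why such a baseline database matters, with invented figures: a reading is flagged as a possible toxic event only when it falls well outside the normal variation recorded for that test on that watercourse. The three-sigma rule used here is a common but crude control-chart convention, not a regulatory standard.

```python
# Invented baseline of % inhibition readings taken on clean water
# over a season; new readings are judged against this record.
import statistics

baseline = [12, 15, 11, 14, 18, 13, 16, 12, 17, 14]  # % inhibition, clean water
mean = statistics.mean(baseline)
sd = statistics.stdev(baseline)
alarm_level = mean + 3 * sd  # simple three-sigma alarm threshold

def is_alarm(reading):
    """True only if the reading exceeds normal matrix variability."""
    return reading > alarm_level

print(is_alarm(20))  # False: within normal variability
print(is_alarm(35))  # True: well beyond the baseline
```

The same numbers show the cost: anything below the alarm level is invisible, so the test's effective sensitivity is set by the matrix, not by the instrument.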
Expensive and difficult
Further, the tests themselves vary in sensitivity. One trade sample tested gave an “EC50 equivalent” – the “effective concentration” that halves light output in the standard time of the test – of over 5,000 on one test, less than 500 on a second, and less than 50 on a third. Other substances tested did not follow the same pattern. Even where these effects may be managed out, bacteria are known to react differently to “standard” toxins when compared with fish, invertebrates or man. Factors of 100 are not uncommon.
In conclusion, tests for toxicity are very important to the public, industry and its regulators. They are beginning to emerge into the marketplace and gain credibility. However, intelligence and discrimination are required in their selection and administration – almost without exception they are expensive and difficult to use outside a laboratory environment.
Interpretation of results is difficult and complex, such that very few individuals outside the small community directly involved have the expertise to determine even the relevant questions in the field. These challenges are likely to keep many people occupied for many years to come.
The traditional tests: BOD and COD
Around 100 years ago a measure of water pollution was needed. The BOD test was developed to model the behaviour of a river subject to pollution. Fish and lower organisms need oxygen to live, and water can only hold around 8 mg/l. In the presence of nutrients bacteria multiply, and can use up all the oxygen. The BOD5 test dilutes the sample into aerated water containing the inorganic salts needed for life, “seeded” with bacteria. The mixture is then stored for five days in the dark at 20°C, and the residual oxygen is measured. The oxygen used, multiplied by the dilution of the sample, is the measure of “pollutant load”. Clean rivers are under 2 mg/l, sewage is around 200 mg/l and strong trade wastes may be several thousand mg/l.
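The BOD5 arithmetic itself is a one-line calculation. The readings below are hypothetical, and the sketch omits the seed-blank correction applied in the full laboratory procedure.

```python
# BOD5 arithmetic (hypothetical readings): oxygen consumed over five days,
# scaled up by the dilution of the sample in the seeded test water.
# Note: the standard method also subtracts a seed blank, omitted here.
do_initial = 8.0      # mg/l dissolved oxygen at day 0
do_final = 3.5        # mg/l after 5 days in the dark at 20 deg C
dilution_factor = 50  # 1 part sample in 50 parts test water

bod5 = (do_initial - do_final) * dilution_factor
print(f"BOD5 = {bod5:.0f} mg/l")  # 225 mg/l: roughly sewage strength
```

This also shows why the analyst has to guess the answer in advance: with the wrong dilution, the bottle either runs out of oxygen entirely or consumes too little to measure.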
However, the test takes five days, has poor precision (20% is often quoted) and toxins interfere. The analyst also has to know the result before doing the test, to guess the dilution needed to leave a useful residual to measure. For these reasons alternatives are sought.
The COD test is commonly used instead. It exposes the sample to a strong oxidising acid reagent (typically potassium dichromate in sulphuric acid) at high temperature for two hours, then measures the residual oxidant colorimetrically to establish the oxygen equivalent consumed. It is quicker and more precise (5%), but usually gives higher results than BOD5, as the acid reagent oxidises more than the bacteria can. Comparisons depend upon the sample matrix under test.