There is considerable controversy over whether the way chemicals are assessed for safety in the EU is adequately responsive to evidence that they may be causing harm. Leaving aside lobbying by commercial and public-interest organisations, here we look at whether scientific practice produces the data regulators feel they need in order to make decisions about restricting the use of chemicals – and if not, what can be done about it.
Decisions about which chemicals are safe rarely fail to attract controversy, with regulators under constant attack for giving either too much or too little credence to studies suggesting a chemical may be harmful. Dr Ruth Alcock of Lancaster University’s Environment Centre (UK) argues in a recent paper that one reason for this is that scientific research practices are poorly suited to the needs of regulators, leaving regulators unable to incorporate new findings into the risk assessments on which EU chemicals regulation is based (Alcock et al. 2011).
Alcock cites the flame retardant deca-BDE as a case in point. Although deca-BDE is currently given the green light under EU safety assessment standards, Alcock reports polarised expert opinion about the safety of the substance, with toxicologists, regulators and chemists expressing views on the risks it poses to health ranging from “obvious impacts […] on neurodevelopment” to “no direct evidence of harm at all”.
Risk assessment is based on two principal factors: estimates of human exposure to a chemical agent, and an assessment of the toxicity of the chemical at the likely level of exposure. The problem regulators face with deca-BDE is precisely a lack of consistent data in either area.
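The two-factor logic described above is often expressed as a margin of exposure: the ratio of the highest dose showing no adverse effect in toxicity studies to the estimated human exposure. The sketch below is purely illustrative – the function name, the safety factor, and all numbers are hypothetical and are not drawn from any real deca-BDE assessment.

```python
# Illustrative sketch of the two-factor risk assessment logic:
# one number for toxicity, one for exposure, compared as a ratio.
# All values below are hypothetical, for demonstration only.

def margin_of_exposure(no_effect_level_mg_kg_day, estimated_exposure_mg_kg_day):
    """Ratio of the dose showing no adverse effect to the estimated exposure."""
    return no_effect_level_mg_kg_day / estimated_exposure_mg_kg_day

# A simplified decision rule: a margin below some safety factor
# (commonly 100, covering inter- and intra-species variability) flags concern.
SAFETY_FACTOR = 100

moe = margin_of_exposure(no_effect_level_mg_kg_day=1.0,
                         estimated_exposure_mg_kg_day=0.0005)
print(moe)                    # 2000.0
print(moe >= SAFETY_FACTOR)   # True -> no concern under this simplified rule
```

The sketch also makes the article's point concrete: if laboratories cannot agree on the exposure estimate, or on the no-effect level, the ratio – and hence the regulatory conclusion – is unstable.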
On exposure, different laboratories diverge greatly in their assessments of levels of deca-BDE in identical material samples – partly because the amounts are very small, and partly because deca-BDE breaks down rapidly under analysis, making it very difficult to obtain a reliable measure of its presence in the environment.
Understanding deca-BDE’s toxicity is similarly complex, with concern focusing on its ability to interfere with the healthy development of the brain. A substantial body of research produced by Professor Per Eriksson of Uppsala University (Sweden) shows strong evidence of neurotoxicity. However, doubts about his methodology have allowed enough uncertainty to persist that deca-BDE is not, from a regulatory perspective, considered neurotoxic.
Without consistent, reliable data regulators are loath to restrict the use of a substance, which is why legislators have yet to act unequivocally on deca-BDE, commissioning further assessments of exposure and toxicity without restricting its use.
The trouble is, where regulators want consistency and reliable protocols to give them definite answers about a substance’s potential for harm, researchers such as Eriksson need cost-effective methods for exploring new ways in which a substance may be harmful, for understanding why that is the case, and for improving the predictive capacity of their models. To put it another way, university scientists tend to be engaged in exploratory research, whereas regulators want confirmatory research.
Exploratory research methods are rarely static, with the science continuously evolving, producing new knowledge and triggering new research programmes. At the same time, this constant flux limits the opportunity to judge the overall reliability of results: two laboratories might produce the same results with different methods, leaving open the question of which method, if either, is reliable, or whether both produce a false positive. Worse still, two methods may produce two different results, leaving regulators to ponder which experimental result to believe.
There is therefore an obvious mismatch between the needs of regulators and the practices of researchers. If regulators are trying to use exploratory research for confirmatory purposes, it is little wonder that so many of their decisions attract controversy and are open to accusations from both environment groups and commercial interests of not being sufficiently grounded in scientific evidence.
Reliability of data
One thing regulators look to as a mark of reliable data is Good Laboratory Practice (GLP). GLP was established in 1978 by the US Food and Drug Administration after a series of fraudulent chemical safety tests at commercial laboratories showed the need for a standard for data reporting and management. The standards ensure that outside auditors can evaluate any particular piece of work a laboratory does.
Compliance with GLP carries a great deal of weight with regulators, to the extent that both the US Food and Drug Administration and the European Food Safety Authority (EFSA) have treated two GLP studies as providing definitive proof that the controversial chemical bisphenol-A is safe, dismissing a large body of peer-reviewed, non-GLP evidence that it is a reproductive toxicant (Myers et al. 2009).
Although it is true that detailed reporting means GLP tests are easily replicated, regulators’ acceptance of GLP is not uncontroversial and may ultimately rest on a misunderstanding of what the standard guarantees.
The quality of a study is a product of its reliability and validity. Reliability is the degree to which independent research teams can produce the same results using the study’s techniques (something currently lacking in deca-BDE detection studies). Validity comes from sound study design and competent execution. Both are essential if a study is to be taken to reveal facts.
In order to be repeated and proven reliable and valid, a study has to be sufficiently well documented to allow a second, independent laboratory to repeat the experiment and produce the same results. It is this sufficiency of documentation which GLP guarantees. Precise documentation, however, counts for nothing towards quality if a GLP study uses the wrong sort of animal, measures the wrong end-points for detecting an effect, or if technicians make errors in, for example, removing organs from animals for examination. In all these cases, the study would be invalid even though it meets GLP standards.
At least one of the two GLP studies taken by the FDA and EFSA as exonerating BPA, Tyl et al. 2008, was heavily criticised for, amongst other issues, implausibly heavy prostates retrieved from mice in the study, indicating either improper dissection or that the mice were much older than reported. Since this means another laboratory would likely produce different results despite following the same protocols, the reliability of the Tyl study is in doubt even though it meets GLP standards.
Is peer-review a viable alternative to GLP?
There is more to the peer-review process than one researcher submitting their research to the scrutiny of others. In order to secure research funding, researchers have to demonstrate competence in the proposed area of study, use state-of-the-art experimental techniques, and submit their work for evaluation by independent experts before publishing in journals. On top of this, independent efforts are made to replicate findings, with the possibility of refutation further encouraging honest and effective research practice.
Peer-review therefore functions as a set of safeguards which helps ensure that an overall body of research is more likely to reveal facts than fail to do so. Individual studies may be invalid and some research avenues may be red herrings, but these are normally identified and discarded by the system. The acceptance of invalid studies as valid, or of largely false bodies of research as true, is the aberration rather than the norm.
Peer-review does not, however, amount to a formal process of validation. As a system it works because the results are generally reliable – however, there is no guarantee that any particular study within the system is itself reliable. Since regulators seem to want individual studies, not just the system as a whole, to produce reliable results, peer-review may not be a viable alternative to the use of standardised protocols.
Peer review was never designed with risk assessment in mind and so will never produce studies reliable enough for the existing demands of risk assessors. Unfortunately, standardised protocols such as GLP are not a short-cut to determining reliability of a study. What, then, might be the best way to deal with the complex evidence base produced by academic researchers? There appear to be several choices.
1. Stop using peer-reviewed studies in risk assessment. Risk managers already receive a great deal of criticism for not using enough peer-reviewed data; making it policy not to use it at all could be seen as perverse, and would fail even to address the problems with how science feeds into policy.
2. Insist that academic laboratories become GLP-certified. GLP studies are 2-10 times more expensive to run than non-GLP studies while the extra cost adds little value to exploratory research (Apredica, retrieved 2011). The massive increase in cost of research funding which this would entail makes this financially unrealistic.
3. Encourage academic laboratories to do more corroborative studies. Few laboratories are equipped for corroborating another’s findings with new techniques and equipment: this has to be bought, staff have to be trained, and few laboratories will want to go to the expense of doing this while a technique is unproven. This option is also financially unrealistic.
4. Soften the demand for reliability of data in risk assessment. This would allow more peer-reviewed evidence to be introduced and would be consistent with a precautionary approach to risk management, effective for preventing potential harm but at the likely cost of some unnecessary restrictions being placed on some chemicals. From an environmental health perspective this would make sense, though in the current climate it is probably politically unrealistic.
5. Fund laboratories to replicate academic findings under standardised protocols. This amounts to a combination of (2) and (3): rather than pay for all laboratories to become GLP-certified and capable of corroborating new findings, the EU could fund the establishment of independent laboratories whose purpose is to replicate the findings of academic studies under standardised conditions.
Option (5) would secure corroborative studies to standards amenable to risk assessment. Furthermore, if the funding also supported the development and validation of new assays, then testing procedures could keep pace with scientific knowledge. It is analogous to how the pharmaceutical industry moves from exploratory to confirmatory research, and arguably the most realistic option because it is the cheapest means for meeting the established needs of an existing risk management process.
Cancer rise and sperm quality fall ‘due to chemicals’: “The best working theory we have to explain why sperm counts may be declining is that chemicals from food or the environment are affecting the development of testicles of boys in the womb or in their early years of life,” says Dr Allan Pacey, University of Sheffield (UK).
Scientists want to help regulators decide safety of chemicals: The NYT reports on groups representing 40,000 researchers and clinicians which are urging federal agencies responsible for the safety of chemicals to examine the subtle impact a chemical might have on the human body, rather than simply ask whether it is toxic.
Food sold in recycled cardboard packaging ‘poses risk’: Leading food manufacturers are changing their packaging because of health concerns about boxes made from recycled cardboard, reports the BBC. Recycled cardboard can be contaminated with toxic inks.
UCSF Team Shows How to Make Skinny Worms Fat and Fat Worms Skinny: Researchers exploring human metabolism at the University of California, San Francisco (UCSF) have uncovered a handful of chemical compounds that regulate fat storage in worms, offering a new tool for understanding obesity and finding future treatments for diseases.
Chemical-free pest management cuts rice waste: Science Daily describes a “novel way of bringing sustainable, pesticide-free processes to protect stored rice and other crops from insects and fungi can drastically cut losses of stored crops and help increase food security for up to 3 billion daily rice consumers” – not to mention reducing risks to health posed by pesticide use.
Combating Environmental Causes of Cancer: An important contribution in the New England Journal of Medicine from Harvard’s David Christiani, MD, MPH, emphasising the importance of improving our understanding of how chemicals in the environment may be contributing to cancer incidence.
Several current-use, non-PBDE brominated flame retardants are highly bioaccumulative: Study finding that non-PBDE BFRs have similar bioaccumulative properties as PBDEs. PBDEs are currently being phased out due to environmental concerns – it may be the case that their substitutes are no better.
Endocrine disruptors: from endocrine to metabolic disruption. A good review of endocrine disrupting compounds and metabolism, including diabetes, obesity, metabolic syndrome, and more. It includes both epidemiological and mechanistic studies, and is particularly helpful for not assuming detailed knowledge on the part of the reader.
Food Packaging and Bisphenol A and Bis(2-Ethylhexyl) Phthalate Exposure: Findings from a Dietary Intervention: Evidence that exposure to BPA and DEHP can be substantially reduced by restricting the consumption of packaged food.
Environmental pollutants and type 2 diabetes: a review of mechanisms that can disrupt beta cell function: A new review of the association between chemical exposure and diabetes, finding clear evidence that “some environmental pollutants affect pancreatic beta cell function.”