Letter raises concerns about “confused set of processes” and too high a burden of proof
A group of scientists investigating how to make best use of the best evidence for identifying and classifying endocrine disruptors has written to EU Health Commissioner Vytenis Andriukaitis to voice concerns about the EU’s newly-proposed criteria for the identification and regulation of endocrine disruptors.
The scientists are concerned about two main things:
- That the criteria place an under-defined and potentially unprecedentedly high burden of proof on identifying compounds as having endocrine-disrupting properties, with the result that the identification process will either be conducted inconsistently or classify only a very small proportion of actual EDCs as such.
- That the criteria present a confused set of processes for identifying, evaluating and integrating scientific evidence which unnecessarily privilege certain types of data, and cannot be adequately operationalised for regulatory identification of EDCs.
The concerns are summarised in an opinion piece published in Euractiv. They derive from the SYRINA framework, newly-published research outlining how to make best use of existing evidence for identifying and classifying EDCs, which is available here.
Similar concerns to those raised in the letter have been raised by other researchers, including in a letter to the Lancet Diabetes and Endocrinology which says the criteria “ensure that hardly any endocrine disruptors used as pesticides will be barred from commerce”, and a report from environment lawyers ClientEarth which concludes the proposed criteria are illegal “because they limit the identification of endocrine disruptors to those that are known to cause adverse effects”.
A “Matthew Effect” in the research agenda
Two papers published in the last month have argued that the way toxicology research is incentivised is actively countering the discipline’s ability to produce the sort of research which is useful for preventing harm to health from chemical pollutants.
In one of the papers (Sobek et al. 2016), researchers from Stockholm find that Swedish scientists engaged in environmental monitoring tend to look for chemicals they know they will find, most commonly look for legacy pollutants such as dioxins and PCBs, and have left 98% of REACH-registered chemicals uninvestigated.
Although monitoring legacy pollutants is important, there is a question as to how much of it needs to be done and how it ought to be organised. The bigger problem is that if nobody is looking into emerging substances, nobody will identify the next major pollution problem.
The second paper, “Paracelsus Revisited” (Grandjean 2016), laments how demands for documentation, replication and reinforcement of existing findings, coupled with other determinants of research priorities among academics (such as study feasibility, availability of funding and pace of publication), are contributing to inertia and inflexibility in toxicological research.
The worry expressed in both papers is that, while established hazards become ever better understood, new hazards are too rarely investigated: in effect, academic research spends too much time investigating the ground illuminated by the street lamps and not enough on increasing the amount of ground that is lit up.
To illustrate the problem, of the environmental chemicals identified as a top research priority by the U.S. Environmental Protection Agency in 2006, barely any have been covered by academic research even today, while the Swedish Research Council for the Environment has funded only three scientific research projects aimed at identifying emerging contaminants.
Somehow, the way research is incentivised needs to change, so that the determinants of research priorities stop militating against the fundamental objectives of toxicology. This is going to be difficult, as many of the drivers of research present catch-22s. For example, if researchers are not looking for a chemical or assessing its health effects, there is no data to justify regulatory action; yet, regulatory action is a significant driver of the research which produces this data in the first place.
Overcoming what both papers describe as a “Matthew Effect” in research will require careful investigation of the mechanisms by which research objectives are prioritised, and (above all) imaginative interventions to break the feedback loops that result in too much time being spent on activities which may keep a research unit a going concern but do not serve the big picture of toxicology.
April 2016 News Bulletin
How DuPont Concealed the Dangers of the New Teflon Toxin. An account of the apparently bizarre situation whereby US CBI rules relating to new chemicals brought to market obstruct investigation, even by government agencies, into their occurrence in the environment and the risks they might pose. (The Intercept)
Updating the Toxic Substances Control Act to Protect Human Health. Given the magnitude of human and economic burden associated with these conditions, it might be expected that the passage of bipartisan legislation in both houses of Congress to update the Toxic Substances Control Act (TSCA) for the first time in 40 years would meet with widespread approval by the public health and medical community. This Viewpoint endeavors to explain flaws in both bills that have dampened enthusiasm by medical and public health organizations. (JAMA)
Fire safety ignites rules rethink. (p32) The current fire-testing regime does not properly reflect how real furniture behaves in a fire, as fabric and fillings are assessed separately. An official familiar with the fire-testing process says: “quite often we find furniture with a label on it that does not pass. There is an element of fraud.” BIS has therefore opened a consultation in April 2014 on reforming the test procedures. (ENDS Report)