The uncertainty factor

April 29, 2015 at 6:50 pm | Posted in Feature Articles


For such a prominent feature of risk assessment, uncertainty factors attract surprisingly few publications in the peer-reviewed literature. According to Web of Science, there are only 79 toxicology citations with “uncertainty factor*” in the title, a number of which are conference abstracts rather than full journal papers. Of these publications, 50% have been cited 7 times or fewer, and only 25 have been published since 2003.

Of these, a critical review by Martin et al. (2013) is fairly typical, attracting only two citations and going unmentioned in a recent, purportedly comprehensive review by Felter et al. (2015). Here we ask if there is anything in the Martin et al. review which should give us cause for concern about the effectiveness of uncertainty factors in protecting health.


The use of uncertainty factors in chemical risk assessment

Quantitative risk assessment is the process by which chemicals are assessed for safety, with the aim of ensuring that chemical products do not unduly harm human or environmental health. When it comes to understanding the effect a chemical may have on health, human toxicity tests are obviously unethical. Most of our knowledge about the levels of exposure at which a chemical presents a health risk therefore comes from laboratory animal studies, from which safe (or at least, low-risk) levels of exposure for humans need to be extrapolated.

Determination of health risk begins (at least in theory, though most if not all chemicals lack a sufficiently comprehensive toxicity database) with a comprehensive battery of tests administered to determine the critical health hazard for a chemical, i.e. the adverse effect which occurs at a lower dose than any other. The highest dose at which this adverse effect is not observed (the No Observed Adverse Effect Level, NOAEL) then becomes the point of departure for determining the maximum acceptable dose for the human population. Ideally, we would then know the relative toxicity of the compound in the animal model as compared to humans, adjust the maximum allowable dose accordingly, and that would be sufficient for determining the level at which people can safely be exposed (the tolerable daily intake).

Of course, the real world does not work that way: there is a great deal which is unknown in interpreting experimental results from animals to tolerable daily intakes for humans. Some of these unknowns represent things we could know but do not know, such as when we are presented with a chemical with incomplete toxicological data for which the critical end-point is therefore unknown. Some of these represent things we cannot know, such as the relative sensitivity of a new-born rat to a toxicant as compared to a new-born human, because the human test cannot be conducted ethically.

To make allowance for these unknowns when extrapolating a maximum acceptable human exposure from animal data, quantitative risk assessment makes use of uncertainty factors. Uncertainty factors assume that, in the absence of information, a human is to some degree more sensitive to toxicants than laboratory animals. In the first incarnation of uncertainty factors, as proposed by Lehman & Fitzhugh (1954), two assumptions were made to give enough space between the NOAELs observed in toxicology testing and the unknown NOAELs in humans, such that the animal-derived human exposure limit could be assumed to be safe.

The first assumption was that humans are at most ten times more sensitive to a given toxicant than the test animals (accounting for interspecies differences); the second was that no human is more than ten times as sensitive to a given toxicant as any other human (accounting for intraspecies differences). This yields a default safety factor of 100, such that the NOAEL for the critical end-point in an animal toxicity test is divided by 100 to give the maximum tolerable dose presumed to present minimal risk in humans.

TDI = NOAEL / (interspecies UF x intraspecies UF x other UFs)
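To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The NOAEL and the extra database-deficiency factor are illustrative values only, not figures from any actual assessment.

    def tolerable_daily_intake(noael_mg_per_kg, interspecies_uf=10, intraspecies_uf=10, other_ufs=1):
        """Derive a TDI by dividing an animal NOAEL by the product of the uncertainty factors."""
        return noael_mg_per_kg / (interspecies_uf * intraspecies_uf * other_ufs)

    # Illustrative only: a NOAEL of 50 mg/kg bw/day from a rat study.
    print(tolerable_daily_intake(50))                 # default 100-fold factor: 0.5 mg/kg bw/day
    print(tolerable_daily_intake(50, other_ufs=10))   # extra 10x (e.g. for a poor database): 0.05 mg/kg bw/day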

Barely three years had elapsed before the same uncertainty factor was made to do more work, with the Joint FAO/WHO Expert Committee on Food Additives (JECFA) recommending the use of a default uncertainty factor of 100 to cover five basic areas of uncertainty (JECFA 1958): “In the extrapolation of this figure to man, some margin of safety is desirable to allow for any species differences in susceptibility, the numerical differences between the test animals and the human population exposed to the hazard, the greater variety of complicating disease processes in the human population, the difficulty of estimating the human intake and the possibility of synergistic action among food additives.”

Things have changed very little to this day, with the default uncertainty factor of 100 still considered largely protective unless there is evidence to the contrary. It may, however, seem a little odd that such a precise measure can be made of something that either has not been or cannot be measured; after all, why not an uncertainty factor of 12×12? Or 12×15? Or 11×9? Furthermore, if the list of uncertainties has expanded (as when JECFA added statistical issues and mixture effects into the UF mix without changing the UF itself), is there any reason for thinking UFs might now be insufficient, and if so, how would we go about finding out?

Holding assumptions

For default uncertainty factors to be protective, four conditions have to hold true:

  (a) no human is more than 10 times as sensitive to a toxicant as the test species;
  (b) no individual is more than 10 times as sensitive to a toxicant as any other individual;
  (c) an uncertainty factor of 100 absorbs all other potential factors which can make one individual more sensitive than another;
  (d) the application of uncertainty factors meets a suitable benchmark of protection.

We can test these assumptions by looking at empirical evidence of variance in sensitivity for (a) and (b), speculating reasonably about (c), and doing the maths on (d). For the sake of space, we will examine (a), (b) and (d) in detail.

Interspecies sensitivity

Assumption (a) is concerned with interspecies differences in toxicokinetics (broadly speaking, the route a toxicant takes through the body: the organs it goes to, the metabolites to which it is broken down, and the route and speed of its excretion) and toxicodynamics (broadly speaking, how the molecule and its metabolites interact with the body to exert toxic effects). Standard practice is to subdivide the interspecies UF of 10 into separate toxicokinetic and toxicodynamic subfactors (4.0 and 2.5 respectively under WHO/IPCS guidance, or roughly 3.2 each under an even split), which multiply back up to the total interspecies UF of 10.

While rats, mice, monkeys, guinea pigs, other animals and humans clear toxicants at different rates, it is possible to make surprisingly accurate predictions of the relative clearance rate of a molecule using allometric (caloric demand) scaling from the animal model to a human. This is because there is quite a neat predictive relationship between the rate at which a mammal can clear a compound from its body and the mammal’s metabolic rate relative to its body size. On this measure, the mean rate at which a rat clears many toxicants bears a consistent relationship to the rate at which a human can clear the same compounds, which in theory allows calculation of a relatively precise uncertainty factor.
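As a rough illustration of how such a prediction is made, the sketch below applies the common body-weight-to-the-3/4-power (caloric demand) scaling rule; the body weights are illustrative round numbers, and real assessments would use chemical-specific data where available.

    def allometric_scaling_factor(animal_bw_kg, human_bw_kg=70.0, exponent=0.75):
        """
        Predicted animal-to-human difference in clearance per kg of body weight,
        assuming whole-body clearance scales with body weight ** exponent.
        This is the factor by which a per-kg dose shrinks when extrapolated to humans.
        """
        return (human_bw_kg / animal_bw_kg) ** (1 - exponent)

    print(round(allometric_scaling_factor(0.25), 1))   # rat (0.25 kg) to human: ~4.1
    print(round(allometric_scaling_factor(0.03), 1))   # mouse (0.03 kg) to human: ~7.0

The rat-to-human value of roughly 4 lines up with the default toxicokinetic subfactor of 4 discussed below; the findings that follow suggest the true ratio is not always so well behaved.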

The problem is, this is not always the case. Martin et al. discuss research suggesting that, for compounds neutralised via glucuronidation following oral exposure, mean clearance is as much as 6.2 times faster in rats and 10.2 times faster in mice than in humans (Walton et al. 2001). This is higher than the default toxicokinetic subfactor of 4 used in this instance, and higher than the adjustment allowed for by allometric scaling. For toxicodynamics the differences are much less studied, but it is known that dioxin is about 10,000 times as toxic to guinea pigs as it is to rats, which is obviously far in excess of the default toxicodynamic subfactor (Kalberlah & Schneider, 1998).

Intraspecies sensitivity

Intraspecies sensitivities are a product of the differences between individuals in a population. These include: differences in genetic make-up, such as when the presence or absence of a particular group of genes might make a substantial difference to the rate at which a toxicant can be neutralised by the body; differences in age and gender of the exposed individual; and any acquired susceptibility factors such as disease state, additional exposures such as tobacco smoke, diet and lifestyle etc.

Martin et al. catalogue a number of studies which undermine confidence in the intraspecies UF. For example, one interpretation of available data in healthy rats has suggested that 8% of chemicals would require a UF of more than 10x to account for intraspecies variability (Dourson & Stara 1983); and an examination of intraspecies toxicodynamic data suggests that the corresponding subfactor of 3.2 would leave 19,000 individuals per million unprotected (Renwick & Lazarus, 1998).
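Estimates of that kind come from models of how sensitivity is distributed across a population. The sketch below illustrates the general form of such a calculation, assuming, purely for illustration, that individual sensitivity is log-normally distributed; the geometric standard deviations are made-up values, not parameters from Renwick & Lazarus.

    from math import log
    from statistics import NormalDist

    def fraction_unprotected(subfactor, geometric_sd):
        """
        Fraction of a log-normally distributed population that is more than
        'subfactor' times more sensitive than the median individual, and so
        would still respond at a dose reduced by that factor.
        """
        z = log(subfactor) / log(geometric_sd)
        return 1 - NormalDist().cdf(z)

    # Illustrative geometric standard deviations for human variability in sensitivity.
    for gsd in (1.6, 2.0, 2.5):
        print(gsd, round(fraction_unprotected(3.2, gsd) * 1_000_000), "per million beyond a 3.2x subfactor")

With these assumed values, anywhere from a few thousand to over a hundred thousand individuals per million fall more than 3.2 times below the median, which is the shape of the concern raised by Martin et al.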

Acquired sensitivities, such as exposure to other environmental factors including other chemicals (mixture effects), tobacco smoke and dietary differences, are almost impossible to test for and validate in the form of uncertainty factors. There are also the complicating disease factors in humans: not only might unwell humans be more sensitive to toxicants, it may not be possible to model their diseases in animals; at best, a surrogate outcome such as reduced organ weight might be observed instead of the effect which exposure would have in a human.

Numerical differences and meeting a protection benchmark

A NOAEL is not an observation of no health effect of an exposure to a potential toxicant in a population; rather, it is the absence of a statistically significant effect of exposure to the potential toxicant in a relatively small test population. This point was recognised by JECFA in its 1958 endorsement of UFs, whereby the UF allows for “numerical differences” between the size of the test population and the larger exposed human population.

Since the TDI is derived from the NOAEL via the application of uncertainty factors, there has to be an assumption that the true no-effect level in the human population lies no more than two orders of magnitude below the observed NOAEL in the animal study population (assuming a default UF; otherwise within whatever range the adjusted UF allows for), and that any remaining differences in sensitivity are absorbed by whatever margin is left over in the TDI calculation.

In determining whether we can expect those differences to be absorbed we need a target. Selecting the target is somewhat arbitrary, but here we can assume that, in order to be protective of a diverse population across all life stages, there should be no more than a 0.01% (1 in 10,000) increase in adverse health outcomes over background levels (i.e. if one can expect 10,000 cases of an outcome in a given population, exposure to a chemical should increase the number of cases to no more than 10,001). Note that this is an example value which should not be assumed to be sufficiently protective.

This sort of statistical power can in theory be designed into animal experiments; however, testing to within a 0.01% increase in adverse health outcomes over background levels, particularly for rare outcomes, would require an impractically large number of animals. Instead, relatively small groups of animals are given high doses, which are reduced until a dose is reached at which no statistically significant effect is observed in that group.

The problem is, there might be no statistically significant evidence of an effect in a group of 100 animals at the NOAEL if intraspecies differences mean the dose only affects 1% of the test animals. A cohort of 1,000 animals might, however, be able to show that effect (it is much easier to see 10 in 1,000 than 1 in 100). Whatever difference in dose that amounts to, the UF is supposed to be able to absorb it, all the way through to a response rate of 1 in 10,000. Can we expect it to do so?
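A back-of-the-envelope calculation shows why a small study can easily return a “no observed effect” result at a dose that genuinely affects 1% of animals. The calculation below assumes, for simplicity, a zero background incidence and independent animals.

    def prob_zero_responders(true_response_rate, n_animals):
        """Probability that a dose group of n_animals shows no responders at all,
        given the true per-animal response probability."""
        return (1 - true_response_rate) ** n_animals

    print(round(prob_zero_responders(0.01, 100), 2))    # ~0.37: a 100-animal group misses a 1% effect over a third of the time
    print(round(prob_zero_responders(0.01, 1000), 5))   # ~0.00004: a 1,000-animal group almost never misses it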

In an animal study, it is relatively straightforward to power an experiment to detect a dose to which only 5% of individuals respond. If that dose is divided by 10, to account for intraspecies differences, one might predict that almost no individual in a population of the same species would respond to it. The problem is, while this assumption might hold true for a single chemical, the degree of protection it offers a large population dwindles as the number of chemicals to which people can be exposed increases. Note this is not a mixture effect, but simply a consequence of treating chemicals as acting in isolation: while the risk from exposure to chemical A might be insignificant, if people can be exposed to A or B or C or D and so on, the probability that any given person responds to at least one of them increases.
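The arithmetic behind this point is simple: even if each chemical individually leaves only a tiny fraction of the population at risk, the chance that a given person responds to at least one chemical grows with the number of chemicals considered. The per-chemical risk used below is an illustrative figure, not a value from the Hattis et al. model discussed next.

    def prob_at_least_one_response(per_chemical_risk, n_chemicals):
        """Probability of responding to at least one of n chemicals, assuming each
        acts independently and carries the same small individual risk."""
        return 1 - (1 - per_chemical_risk) ** n_chemicals

    # Illustrative: a 1-in-100,000 risk per chemical, across increasing numbers of chemicals.
    for n in (1, 10, 100, 1000):
        print(n, round(prob_at_least_one_response(1e-5, n) * 10_000, 1), "per 10,000 people")

A risk that looks negligible chemical by chemical reaches the 1 in 10,000 benchmark after only about ten such chemicals, and exceeds it many times over after a hundred.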

According to the results of one model highlighted by Martin et al. (Hattis et al., 2002), if 50% of chemicals are considered then 2 people per 10,000 would respond to a chronic dose which is 10% of the dose eliciting a response in 5% of people. In other words, if one takes a realistic level of sensitivity to a toxicant as detected in laboratory studies and divides the corresponding dose by the intraspecies UF of 10, 2 people per 10,000 are left unprotected once half the chemicals in the Hattis et al. toxicity database are taken into consideration.

If 95% of the chemicals in the Hattis et al. database are considered, protection deteriorates further, with 3 people per 1,000 left unprotected. This makes the 10x intraspecies UF seem very unlikely to be sufficient, particularly in light of the model’s limitations: the toxicity database is not representative of all chemicals, and the model assumes that population sub-groups are all equally sensitive to the effects of toxicants.

Conclusion

Uncertainty factors are supposed to offer a conservative level of protection (SCHER/SCENIHR/SCCS, 2012); however, there are a number of challenges to the assumptions which underpin the claim to conservatism. Firstly, it is not clear that no human is more than 10x as sensitive to a toxicant as a test species; secondly, sensitivity between humans appears likely to vary by a factor of more than 10; and thirdly, the application of default UFs does not appear to meet a suitable benchmark of protection for the population as a whole.

It therefore appears that, rather than being conservative, the 100x UF cannot absorb even the observed variation in inter- and intraspecies response, let alone the extra complication of mixture effects, to anything approaching a 1 in 10,000 protection benchmark (assuming that benchmark is itself sufficiently protective as a target, which seems unlikely).

The problem is to some degree, perhaps a large degree, intractable, because validation of uncertainty factors in humans is ethically impossible. We could add extra safety factors and still be unable to determine whether they were sufficiently protective (though they would certainly be more protective than the current defaults). What is required, therefore, is a discussion of how to manage this problem, rather than bland assertions of the adequacy of the UF approach.

As with many things, uncertainty about uncertainty factors cuts two ways: a clearer understanding of what we do not know about chemical toxicity, and of the limits of uncertainty factors, can open up reasoned discussion of the circumstances in which a UF may be unnecessarily large. For now, though, reducing uncertainty factors seems premature: given how much uncertainty they have to absorb and how doubtful it is that they are protective, doing so would assume spare capacity in the system that may not exist.

Bibliography

Dourson ML, Stara JF: Regulatory history and experimental support of uncertainty (safety) factors. Regul Toxicol Pharmacol 1983, 3:224–238.

Felter SP, Daston GP, Euling SY, Piersma AH, Tassinari MS: Assessment of health risks resulting from early-life exposures: Are current chemical toxicity testing protocols and risk assessment methods adequate? Crit Rev Toxicol 2015, Early Online:1–26.

Hattis D, Baird S, Goble R: A straw man proposal for a quantitative definition of the RfD. Drug Chem Toxicol 2002, 25:403–436.

JECFA: Procedures for the testing of intentional food additives to establish their safety for use – second report of the joint FAO/WHO expert committee on food additives, Technical report series, Volume 144. Geneva: World Health Organization; 1958

Kalberlah F, Schneider K: Quantification of Extrapolation Factors No 1116 06 113. Dortmund/Berlin: Bundesanstalt für Arbeitsschutz und Arbeitsmedizin; 1998.

Lehman AJ, Fitzhugh OG: 100-Fold margin of safety. Q Bull – Assoc Food Drug Officials 1954, 18:33–35.

Martin OV, Scholze M, Kortenkamp A: Dispelling urban myths about default uncertainty factors in chemical risk assessment – sufficient protection against mixture effects? Environmental Health 2013, 12:53.

Renwick AG, Lazarus NR: Human variability and noncancer risk assessment – an analysis of the default uncertainty factor. Regul Toxicol Pharmacol 1998, 27:3–20.

SCHER/SCENIHR/SCCS: Opinion on the toxicity and assessment of chemical mixtures. Brussels: European Commission; 2012.

Walton K, Dorne JL, Renwick AG: Uncertainty factors for chemical risk assessment: interspecies differences in the in vivo pharmacokinetics and metabolism of human CYP1A2 substrates. Food and Chem Toxicol 2001, 39:667–680.

 

April 2015 News Bulletin: US pressures EU on pesticide rules; BPA is OK (if you ignore most studies); and more

April 13, 2015 at 4:44 pm | Posted in News and Science Bulletins

April 2015 News Bulletin

The US Government Is Pressuring Europe to Dial Back Its Pesticide Rules. There’s an important debate going on in Europe that could dramatically influence how pesticides are used on the United States’ 400 million acres of farmland. At the center of the debate are endocrine disruptors, a broad class of chemicals known for their ability to interfere with naturally occurring hormones, and the impact on US agriculture which an EU ban on endocrine-disrupting pesticides could have. (Mother Jones)

BPA Is Fine, If You Ignore Most Studies About It. Bisphenol-A (BPA) is either a harmless chemical that’s great for making plastic or one of modern society’s more dangerous problems, depending on whom you ask. “There’s too much data consistent across studies…time and time again…to ignore it and suggest BPA has no effect on humans,” says Gail Prins, a physiologist at the University of Illinois at Chicago. But the plastic industry, researchers it funds and, most importantly, many regulatory agencies, including the U.S. Food and Drug Administration (FDA) and the European Food Safety Authority (EFSA), say BPA is safe for humans at the levels people are exposed to. (Newsweek)

Chemical Exposure Linked to Billions in Health Care Costs. Exposure to hormone-disrupting chemicals is likely leading to an increased risk of serious health problems costing at least $175 billion (U.S.) per year in Europe alone, according to a new study. (National Geographic)

Hand-Me-Down Hazard: Flame Retardants in Discarded Foam Products. On 1 January 2015 California implemented the first U.S. rule mandating that certain products containing polyurethane foam be labeled to identify whether they contain chemical flame retardants. Furniture industry experts predict flame-retardant-free couches, chairs, and other padded furnishings and products will be popular with consumers and large purchasers, and the new labeling law, known as SB 1019, is expected to have influence beyond the state’s borders. Crate and Barrel, IKEA, and La-Z-Boy are among the manufacturers that reportedly offer or will offer furniture with no added flame retardants. (EHP)

Widely used herbicide linked to cancer. The cancer-research arm of the World Health Organization last week announced that glyphosate, the world’s most widely used herbicide, is probably carcinogenic to humans. But the assessment, by the International Agency for Research on Cancer (IARC) in Lyon, France, has been followed by an immediate backlash from industry groups. (Nature)

How Lab Rats Are Changing Our View of Obesity. Obesity stems primarily from the overconsumption of food paired with insufficient exercise. But this elementary formula cannot explain how quickly the obesity epidemic has spread globally in the past several decades nor why more than one third of adults in the U.S. are now obese. Many researchers believe that a more complex mix of environmental exposures, lifestyle, genetics and the microbiome’s makeup help explain that phenomenon. (Scientific American)

Doctors and academics call for ban on ‘inherently risky’ fracking. Fracking should be banned because of the impact it could have on public health, according to a prominent group of health professionals. In a letter published by the British Medical Journal on Monday, 20 high-profile doctors, pharmacists and public health academics said the “inherently risky” industry should be prohibited in the UK. (The Guardian)

April 2015 Science Bulletin #2: increasing the policy impact of research; biomonitoring and the concept of “toxic trespass”; and more

April 13, 2015 at 4:28 pm | Posted in News and Science Bulletins

April Science Bulletin #2: Non-human studies, research methods and reviews

Science and policy | Scientific contestations over “toxic trespass”: health and regulatory implications of chemical biomonitoring. Interesting examination of stakeholder interpretations of biomonitoring evidence through interviews with scientists from industry, environmental health organizations, academia, and regulatory agencies. Both social movements and industry stakeholders frame the meaning of scientific data in ways that advance their own interests; the ways in which they do so are mapped in a very revealing diagram.

Science and policy | How to increase the potential policy impact of environmental science research. This article highlights eight common issues that limit the policy impact of environmental science research. It also discusses what environmental scientists can do to resolve these issues, including optimising the directness of a study to policy-makers’ needs, using powerful study designs, and minimising the risk of bias.

Phthalates, fertility | Prenatal exposure to di-(2-ethylhexyl) phthalate (DEHP) affects reproductive outcomes in female mice. These results indicate that prenatal DEHP exposure increased the male-to-female ratio compared to controls. Further, 22.2% of the animals treated with 20 μg/kg/day took longer than 5 days to become pregnant at 3 months, and 28.6% of the animals treated with 750 mg/kg/day lost some of their pups at 6 months. Thus, prenatal DEHP exposure alters the F1 sex ratio, increases preantral follicle numbers, and causes some breeding abnormalities.

Phthalates, neurotoxicity | Phthalates and neurotoxic effects on hippocampal network plasticity. This review summarizes the effects of phthalate exposure on brain structure and function with particular emphasis on developmental aspects of hippocampal structural and functional plasticity. In general, it appears that widespread disruptions in hippocampal functional and structural plasticity occur following developmental (pre-, peri- and post-natal) exposure to phthalates. Whether these changes occur as a direct neurotoxic effect of phthalates or an indirect effect through disruption of endogenous endocrine functions is not fully understood.

Phthalates, fertility | Short term exposure to di-n-butyl phthalate (DBP) disrupts ovarian function in young CD-1 mice. DBP exposure decreased serum estradiol (E2) at all doses. At 0.1 mg/kg/day, DBP increased FSH, decreased antral follicle numbers, and increased mRNA encoding pro-apoptotic genes (Bax, Bad, Bid). These novel findings show that DBP can disrupt ovarian function in mice at doses relevant to humans.

BPA, autoimmunity | Environmental estrogen bisphenol A and autoimmunity. Autoimmunity development is influenced by multiple factors and is thought to be a result of interactions between genetic and environmental factors. Here, we review the role of a specific environmental factor, bisphenol A (BPA), in the pathogenesis of autoimmune diseases. BPA belongs to the group of environmental estrogens that have been identified as risk factors involved in the development of autoimmune diseases.
