Are TTCs the best way to reduce animal testing? (Part 2)

February 23, 2012 at 7:59 am | Posted in H&E Features

Limitations in the toxicological tests used to calculate the NOAELs on which TTCs are based could result in thresholds of effect being greatly over-estimated.

Part 2: Reasons for doubting that TTCs are adequately protective of health

Last month, we explained how thresholds of toxicological concern (TTCs) are a proposal to reduce the amount of data which needs to be generated in order to perform chemical risk assessments, by requiring toxicological testing of a substance of unknown toxicity (such as a food contaminant or pesticide metabolite) only in the event that humans are exposed to it above a certain threshold.

The claim is that TTCs are a viable alternative to standard chronic toxicity-based risk assessments because they set an exposure threshold low enough that risk to health posed by an unidentified substance at or below that level is negligible. Risk assessors should therefore have confidence in the safety of the substance, even though no chronic toxicity data on the substance is available.

Interest in substituting TTCs for toxicity testing is driven by many factors, including: the need for risk management decisions in the face of an overwhelming lack of toxicity data on the multitude of chemicals and their breakdown products present in the environment; the EU’s commitment to ending animal tests for cosmetics by 2013; understandable support from some animal welfare groups; and support from industry itself, which is happy to emphasise the animal welfare benefits of the proposal, yet must also see substantial financial benefits in facing reduced toxicological test requirements before bringing their products to market.

The European Food Safety Authority has published a draft opinion on the application of TTCs to food contaminants (EFSA 2011), while the EU non-food Scientific Committees have also drafted a recommendation on TTCs for cosmetics (SCHER/SCCP/SCENIHR 2008). A search of the published literature indicates they are being considered for use in a range of risk assessment and regulatory forums, including: prenatal developmental toxicity (van Ravenzwaay et al. 2011), substances regulated under REACH (Rowbotham & Gibson 2011, Marquart et al. 2011), tobacco smoke (Talhout et al. 2011), pesticide metabolites (Dekant et al. 2010), hormonally-active substances (Gross et al. 2010), aerosol ingredients (Carthew et al. 2010), household and personal care products (Blackburn et al. 2005) and food additives (Pratt et al. 2009).

Last month we explained that TTCs are calculated by creating a database of existing chronic toxicity data for a representative subset of a structural class of chemicals; ranking those chemicals by their lowest reported no observed adverse effect level (NOAEL); and identifying the 5th percentile NOAEL for the subset. This means that, in theory, there is only a 1 in 20 chance that a random chemical in the class is toxic at a dose equal to or less than this level. TTCs are then set by dividing this dose by an uncertainty factor of 100.
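The derivation described above can be sketched in a few lines of code. This is a minimal illustration with made-up NOAEL values, not real toxicity data, and the nearest-rank percentile convention used here is one simple choice among several:

```python
import math

def ttc_from_noaels(noaels, percentile=5, uncertainty_factor=100):
    """Derive a TTC as described above: rank the NOAELs (mg/kg bw/day),
    take the 5th percentile, and divide by an uncertainty factor of 100."""
    ranked = sorted(noaels)  # lowest NOAEL first
    # Nearest-rank index of the requested percentile (one simple convention).
    rank = max(1, math.ceil(len(ranked) * percentile / 100))
    p5_noael = ranked[rank - 1]
    return p5_noael / uncertainty_factor

# Illustrative only: 100 hypothetical NOAELs of 1..100 mg/kg bw/day.
noaels = list(range(1, 101))
print(ttc_from_noaels(noaels))  # 5th percentile NOAEL is 5, so TTC = 0.05
```

The probabilistic claim follows directly from this construction: by definition, roughly 95% of the chemicals in the database have NOAELs above the 5th percentile value, so a randomly chosen chemical of the same class is unlikely to be toxic below it.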

Because the method is probabilistic, it is always possible that a chemical is toxic at the TTC. The contention is that so few chemicals are toxic at this dose that the likelihood of harm is insignificant. Proponents of TTCs are therefore effectively offering policy-makers a trade-off: society foregoes the lesser advantages accrued from generating specific toxicological data on substances in order to reap the greater rewards of reduced cost and the use of fewer animals in toxicological testing.

The acceptability of this trade-off turns on two questions. Firstly, can risk assessors be confident that the health risk posed by a substance below the TTC exposure thresholds really is negligible? And secondly, are the benefits of detailed toxicological testing really so marginal that they are outweighed by the benefits of waiving specific data requirements for risk assessment?

We will address the second question next month. Regarding the first question, one would imagine risk assessors would lose confidence in TTCs if there were compelling evidence that the threshold doses are too high to reliably prevent harm to health, and/or if any substances which are toxic at the threshold dose could pose significant health threats, such that even if relatively few substances are toxic at the threshold dose, the potential consequences of exposure for population health are too severe to justify waiving toxicity data requirements.

Note on exposure. TTCs also require exposure levels for substances to be accurately determined, since one has to know the degree of exposure in order to know if an exposure threshold is exceeded. Biomonitoring programmes to measure actual exposures are rare in Europe, so exposure will likely be modelled rather than measured.

Exposure models, however, typically struggle both with multiple routes of exposure to single substances, and with the possibility that chemicals can have a combined, additive toxic effect. Determining whether thresholds are exceeded is therefore challenging. Proponents of TTCs will at least have to demonstrate that substances do not have cumulative toxic effects at the proposed threshold doses; the overall accuracy of exposure modelling is beyond the scope of this article.

There are many studies finding that chemicals have effects at doses below the NOAELs identified in the TTC databases.

Are the thresholds low enough?

The effectiveness of the TTC methodology depends entirely upon how few substances are toxic at the threshold doses. The more conservative the 5th percentile is, the fewer substances will be toxic below the threshold; the more consistently the TTC database over-estimates the NOAELs for a given class of substances, the more substances there will be which are toxic at the threshold dose.

Standard critiques of risk assessment give plenty of reason to hypothesise that the NOAELs on which TTCs are based may well be overestimated. We have covered these shortcomings a number of times in H&E (see e.g. #42, #34) and there are detailed critiques in the peer-reviewed literature (e.g. Myers et al. 2009). Current concerns with how NOAELs are determined which may lead to their over-estimation include: the failure to test full dose ranges; the use of insensitive assays inappropriate for detecting many toxic effects; the failure to test during specific developmental windows; and the sacrifice of animals before disease manifests.

The age of many of the studies used in the Munro databases on which TTCs are based (Munro et al. 1996) makes them especially susceptible to these concerns, although it has been claimed that TTCs derived from this data have been validated against newer studies and for a greater range of substances (Barlow 2005).

Additionally, there are doubts that thresholds of effect truly even exist, with the US National Research Council recommending a move away from risk assessments based on no-effect levels (National Research Council 2009). If there are no thresholds of effect, TTCs are necessarily over-estimated.

Direct evidence for the hypothesis that NOAELs are over-estimated would come from studies showing effects at doses lower than the 5th percentile NOAELs in the TTC databases. And such studies are straightforward enough to find.

For example, PFOA, deltamethrin and BPA would have TTCs based on a 5th percentile NOAEL of 0.15 mg/kg bw/day, yet there is evidence that PFOA has effects at 0.01 mg/kg bw/day (Macon et al. 2011), deltamethrin at 0.003 mg/kg bw/day (Issam et al. 2009), and BPA at doses from below 0.05 mg/kg bw/day down to below 0.025 mg/kg bw/day (Richter et al. 2007). The TTC for DEHP would be based on a 5th percentile NOAEL of 3 mg/kg bw/day, yet effects have been observed at doses as low as 0.045 mg/kg bw/day (Andrade et al. 2006).
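Simple arithmetic on the figures cited above shows how far below the 5th percentile NOAELs these reported effect doses fall. This is a rough sketch: the dose pairs are taken from the studies as cited in the text, not extracted from the TTC databases themselves.

```python
# (5th percentile NOAEL, lowest reported effect dose), both in mg/kg bw/day,
# using the figures cited in the text above.
examples = {
    "PFOA":         (0.15, 0.01),
    "deltamethrin": (0.15, 0.003),
    "BPA":          (0.15, 0.025),
    "DEHP":         (3.0,  0.045),
}

for substance, (p5_noael, effect_dose) in examples.items():
    factor = p5_noael / effect_dose
    print(f"{substance}: effects reported at a dose {factor:.0f}x "
          f"below the 5th percentile NOAEL")
```

On these figures the reported effect doses sit between roughly 6 and 67 times below the 5th percentile NOAELs from which the TTCs would be derived.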

A few counterexamples do not, of course, prove that a system which only claims to be right in most cases is broken. However, the more counterexamples there are, the more confidence in the accuracy of TTCs should be undermined, as each example is evidence that the databases of NOAELs set the TTC at too high a dose. At the very least, one would think this toxicity data ought to be incorporated into the TTC databases from which the 5th percentile NOAEL is calculated; if these data are not part of the databases, it would be interesting to know why not.

Note on uncertainty factors: Proponents of TTCs might argue that the uncertainty factors used to create a TTC from a 5th percentile NOAEL are sufficient to cover potential toxic effects from any substance which is toxic at or below the 5th percentile NOAEL, and that TTCs would therefore only be falsified by effects found below the TTCs themselves, not merely below the 5th percentile threshold.

The use of uncertainty factors in TTCs, however, is different from their use in risk assessment in a subtle but important way. How this is so, and what this means for the trade-off which regulators are being offered, we will explore in next month’s article.

What about the potential magnitude of harm from substances toxic at low doses?

Our second concern with TTCs is the assumption that all we need to know for risk assessment is the likelihood of harm at a threshold dose, not the potency of the substances which are harmful at the dose. Proponents of TTCs are effectively saying that the potential magnitude of harm posed by the unknown substances which are toxic below the threshold of toxicological concern does not need to be calculated.

The problem is that risk of harm is not only a function of the likelihood of harm; it is also a function of the magnitude of effect. Even a small effect from a dose below a TTC could have a substantial impact if a large population is exposed; for example, exposure to an unidentified substance which causes 10 instances of cancer per 1,000,000 lifetimes would cause 600 cancers in a population the size of the UK (around 60 million people).
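The population arithmetic in the example above is straightforward; the 60 million figure for the UK is an approximation.

```python
# Expected cases = lifetime risk x exposed population.
risk_per_lifetime = 10 / 1_000_000   # 10 cancers per million lifetimes
uk_population = 60_000_000           # approximate UK population assumed above
expected_cases = risk_per_lifetime * uk_population
print(f"{expected_cases:.0f} expected cancers")  # 600 expected cancers
```

The point is not the precision of the number but that a per-individual risk small enough to be called "negligible" can still translate into hundreds of cases at population scale.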

It is the need to estimate this potential impact which determines a risk assessor’s need for chemical-specific toxicological data. It is hard to understand, therefore, how a method such as TTCs, which provides only data on the probability of harm, can possibly be interpreted as a substitute for risk assessment based on chronic toxicity data: it is a completely different approach to managing risk from substances of unknown toxicity, which looks badly at odds with current standards in risk assessment.

In conclusion, it is far from clear that risk assessors should accept the TTC trade-off: not only is there substantial evidence that thresholds are set at a level which will fail to protect population health, but without specific chronic toxicity data the potential magnitude of effect of a chemical exposure on a population cannot be anticipated. TTCs therefore do not look like a viable substitute for risk assessment based on chronic toxicity data.
