Reported
Each year, millions of mice, bunnies and beagles are force-fed pharmaceuticals in toxicity tests, but there are alternatives.
Words by Sophie Kevany
Researchers from IBM are part of a team developing a new artificial intelligence model for testing novel drugs and other substances without the use of living animals. For millions of mice, rats and beagles, lab tests often mean being force-fed or made to vomit, or suffering paralysis, convulsions or internal bleeding. Much of that suffering is pointless: animals’ responses to drugs aren’t the same as humans’, and the tests themselves can be difficult to reproduce.
Scientists first developed toxicity tests on animals as a way to measure whether a substance would cause harm to humans. One of the most infamous is the LD50, in which scientists dose animals with a substance until half of them die; the test served as a standard measure of toxicity for decades, though it is used far less often now. Animals usually receive no pain relief, since it would skew the results, and many are forced to ingest substances through a tube running from throat to stomach.
Around 600,000 animals are used every year for these experiments in the European Union. In the U.S., the exact number is unclear, but one study puts the total of rats and mice used in all lab research at over 100 million.
For decades, researchers have been working on alternatives, everything from eliminating duplicative testing to growing engineered or natural human tissues on devices known as organs-on-chips. This latest AI model, built by researchers in the U.S. and India, both tests toxicity more reliably and replaces animals entirely, its developers say.
The model was trained on data from about 50,000 molecules, says Shiranee Pereira, one of the developers. That molecular data taught the model to recognize the difference between toxic and non-toxic structures.
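Pereira’s description matches a familiar machine-learning pattern: encode each molecule’s structure as numbers, then train a classifier to separate toxic from non-toxic examples. The team’s actual architecture and tooling aren’t detailed here, so the sketch below is purely hypothetical, using the open-source RDKit and scikit-learn libraries and a handful of invented molecules in place of the real training set.

```python
# Purely illustrative sketch: the team's actual model is not described in
# detail, so every molecule and label below is invented for demonstration.
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# Toy training set: SMILES strings labeled 1 (toxic) or 0 (non-toxic).
# The real model was reportedly trained on data from ~50,000 molecules.
smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "ClC(Cl)(Cl)Cl"]
labels = [0, 0, 0, 1]

def featurize(smi):
    """Encode a molecule's structure as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smi)
    return list(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

X = [featurize(s) for s in smiles]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

# Predict a toxicity probability for an unseen structure.
print(clf.predict_proba([featurize("CCCl")]))
```

A production model would, of course, be trained on the full set of tens of thousands of labeled molecules and validated far more rigorously than this toy example.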
The model’s analysis is more trustworthy than an animal model’s, Pereira argues, because it directly analyzes the molecule’s properties and how those might affect humans. Animal tests, by comparison, are indirect: they look at how an animal reacts to a substance, and then extrapolate that reaction to humans, she says.
Although AI modeling has been hailed by some as the future of scientific testing, and similar models exist, the model created by Pereira and colleagues is the first to use the presence or absence of certain molecular characteristics, called “pertinent positives” and “pertinent negatives” by researchers in the field, to help predict how toxic a substance might be for humans, Pereira says.
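In machine-learning terms, that means treating both what a molecule contains and what it lacks as predictive signal. A rough illustration of the idea follows; the substructure “alerts” below are made up for demonstration and are not drawn from the published model.

```python
# Illustrative only: these substructure "alerts" are invented for the
# sketch, not taken from the team's published model.
from rdkit import Chem

# Presence of some fragments can flag risk ("pertinent positives"), while
# the absence of others can itself be informative ("pertinent negatives").
POSITIVE_ALERTS = {"nitro group": "[N+](=O)[O-]", "epoxide": "C1OC1"}
NEGATIVE_ALERTS = {"carboxylic acid": "C(=O)[OH]"}

def pertinent_features(smi):
    """Build a feature dict from what a molecule has and what it lacks."""
    mol = Chem.MolFromSmiles(smi)
    features = {}
    for name, smarts in POSITIVE_ALERTS.items():
        features[f"has {name}"] = mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))
    for name, smarts in NEGATIVE_ALERTS.items():
        features[f"lacks {name}"] = not mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))
    return features

print(pertinent_features("c1ccccc1[N+](=O)[O-]"))  # nitrobenzene
```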
What also makes this model unique is that it doesn’t require new data from future animal experiments, says Pereira, who is based in India at the International Center for Alternatives in Research and Education, or ICARE. The rest of the team is based in the U.S., at the Rensselaer Polytechnic Institute and the IBM Watson Research Laboratory.
“We have used only historic data,” Pereira says, “data from experiments previously performed on animals.” Using that data, the model can “predict human clinical toxicity directly” without any need for new animal experiments. By contrast, she adds, other initiatives using AI to “create virtual animal models” are likely to need new animal test data in the future.
The new model, which is actually two models, also found that animal data was unimportant, and even unhelpful, in predicting the toxic effect a substance might have on a human. “Our very premise is that animals cannot ever provide correct information on human toxicity,” Pereira adds, which is why, she argues, animal models are not useful. Chocolate is a good example, she says. “It kills dogs but works like an antidepressant in human beings.”
Other infamous examples of animal tests producing misleading results include the oft-cited observation that aspirin can kill cats; had the common painkiller been tested on them, humans might never have benefited from it. One of the worst examples of a drug working in animals but killing humans was Vioxx, a painkiller launched by Merck in 1999. FDA researcher and whistleblower David Graham told Congress that Vioxx had caused 88,000 to 139,000 heart attacks, and that 30 to 40 percent of them were fatal.
Despite the recognized problems with animal data, Laura Rego Alvarez, head of science policy and regulation at the NGO Cruelty Free International, says it “is still viewed by many as the gold standard for human toxicity prediction.”
At the same time, digital technologies have already been shown to outperform animal testing on reproducibility, Rego Alvarez says. Many regulatory agencies, including the FDA, have announced they are scaling back demands for animal testing, and “they do not explicitly require data from new animal tests,” she says. Yet in practice, “conducting new animal tests remains the default option in most testing guidelines and regulations.”
The other problem Rego Alvarez points to is that regulators often fail to provide clear guidance on how non-animal testing alternatives can be used to satisfy the specific data requirements set forth in the rules. Applicants and companies may then be “reluctant to submit ‘non-standard’ data in case it is rejected and the process of getting their product to market is subsequently delayed.”
In response to these comments, the FDA told Sentient Media in an email that it “does not dictate the type or design of studies used” to determine whether drugs might be safe for investigation in humans, adding that the agency “explicitly considers information from non-animal methods.”
The consumer products giant Procter & Gamble has been working with academics to identify a path forward for regulators to make a wholesale switch to non-animal testing. In a recent sponsored post on Politico EU, the company’s scientific communications director, Harald Schlatter, and his co-authors highlighted chemical safety and registration legislation as key roadblocks.
One of the necessary shifts, according to the authors of a paper that resulted from Procter & Gamble’s work with academics, is for regulators to acknowledge “that there will not be a one-for-one replacement” of older animal tests with newer non-animal ones. Instead, the future of toxicity testing might be a range of assessment tools that are more specific and accurate for humans.
To help regulators make that transition, the authors argue, “extensive training” will be needed to enable a “paradigm” shift away from focusing on a substance’s adverse effects in animals and toward methods that focus on “modes of action,” which are more relevant for assessing human safety.
Based on a general description of the new model, Larry Carbone, a veterinarian and visiting fellow at the Harvard Animal Law & Policy Program, says it has promise. “[It looks] exciting in that it builds on three kinds of data”: existing animal findings, in vitro lab tests and human clinical data. He adds that if more companies contributed toxicity data to this or the FDA’s project, it would increase the power of such models.
But Carbone also points to another potential roadblock: before human trials are authorized, he would still expect regulators to require some animal data, even if only to reassure the people who volunteer for trials that the substance they are about to ingest “was safe in real animals, not just in the AI evaluation.”
In effect, that means that if people want a world without animal testing, they will have to get used to trusting AI.