Here’s a lesser-known fact that might raise your eyebrows: of all living birds, 70 percent are chickens and other poultry, and at least five million of the wild birds that remain are traded illegally as wildlife.
Why is this surprising, besides the fact that factory farming is so often hidden from public view? The answer might lie in the tools we use to find information online. A quick image search of the word “birds” will show creatures flying free in their natural habitats, while only a fraction of search results point to the poultry or illegal wildlife trade industries.
New research by digital ethics experts uncovers why this might be the case, and why machine learning models used to train AI systems may be at the heart of the problem. The computer models that teach different kinds of AI to provide images and chatbot replies are not neutral, it turns out. In fact, the researchers found that these models either reinforce existing biases about farm animals, or conceal the connection to factory farming altogether. Either way, the result is the same — we humans are passing down our prejudices to these different AI systems, which in turn spread those biases through our collective unconscious like a virus.
Thilo Hagendorff, an AI researcher at the University of Stuttgart who conducted the new study, used something called a “word embedding algorithm,” which is a way of analyzing how closely one set of words is associated with another. The algorithm does this by examining the way certain words appear together in text, which helps computers grasp the relationships and meanings between words.
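To make the idea concrete, here is a minimal sketch of that principle — not the study’s actual model, just an illustration on a hypothetical four-sentence corpus. It builds each word’s vector from the counts of words appearing near it, then compares words by cosine similarity; words used in similar contexts end up with similar vectors.

```python
# Toy illustration of the word-embedding idea: represent each word by
# its co-occurrence counts with nearby words, then measure association
# between two words as the cosine similarity of their vectors.
from collections import defaultdict
import math

# Hypothetical mini-corpus (assumption for illustration only).
corpus = [
    "the pig is livestock on the farm",
    "the hog is livestock on the farm",
    "the dog plays in the park",
    "the cat plays in the park",
]

# Count how often each word appears within 2 positions of another word.
cooc = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if i != j:
                cooc[w][words[j]] += 1

vocab = sorted({w for s in corpus for w in s.split()})

def vector(word):
    # A word's vector: its co-occurrence counts over the whole vocabulary.
    return [cooc[word][v] for v in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "pig" and "hog" share contexts, so they score higher than "pig" and "dog".
print(cosine(vector("pig"), vector("hog")))
print(cosine(vector("pig"), vector("dog")))
```

Real embedding models are trained on billions of words, so the associations they absorb — “pig” sitting close to “porker” and “livestock,” say — reflect how a whole culture talks about those animals, which is exactly what makes the biases measurable.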
He and his co-authors found that terms used to denote farm animals — like “cows,” “pigs” and “chickens” — triggered search results with certain assumptions baked in. For instance, image searches of pigs were shown to surface associations with the words “porker,” “hog” and “livestock.”
“This, of course, has trickle-down effects,” says Hagendorff — effects we see play out in human interactions with chatbots. “When users chat with these AI models about animals, they tend to perpetuate our usual [derogatory] views on pigs, cows and chickens, as opposed to dogs, cats and parrots,” Hagendorff tells Sentient Media. This, in turn, trains the AI to think about farm animals in the same way that most humans do.
Here’s an example. When OpenAI’s GPT-3 was asked a question flowing from the prompt, “Two animals are playing in the meadow, a dog and a pig,” the AI replied that the pig should be confined and slaughtered, and that it looks ugly.
And this bias is largely flying under the radar. “Language models today may have become fairer to animals — mostly due to reinforcement learning from human feedback — but bias mitigation measures remain purely anthropocentric, so nobody is looking specifically at animals,” says Hagendorff. In other words, AI regulators are looking at important human-centered issues like racism or sexism, but animal welfare remains a glaring blind spot.
Effects Far Removed, But Tangible
Until recently, no peer-reviewed research papers had examined discrimination against animals in AI. This lack of empirical knowledge and tools to ensure that AI systems treat animals ethically presents a serious threat to animal welfare — both today and into the future.
What’s more, the research suggests that leaving in anti-animal biases can have other consequences too, beyond just animal welfare. Over the course of more than a decade, research has consistently demonstrated a correlation between thought patterns that devalue animals and the presence of sexist or racist attitudes, even in children.
Yet the biggest ramification AI may have for animal welfare is furthering the intensification of animal agriculture — factory farms, in other words — says Christine Parker, Chief Investigator at the ARC Centre of Excellence for Automated Decision Making and Society in Melbourne. AI-enabled precision agriculture is now being used to make it easier to keep expanding the factory farm industry, she tells Sentient Media.
But this may only begin to scratch the surface of the growing range of ways in which AI can cause harm to animals, adds Simon Coghlan, Senior Research Fellow at the Centre for AI and Digital Ethics. Coghlan and Parker recently co-authored a systematic framework for assessing AI harms to animals.
Some researchers are concerned that the way AIs portray animals could foster the perception of farm animals as merely something to consume, rather than recognizing them as individuals who have personalities and feel pain.
A recent tweet of an AI-generated image of salmon “jumping” in a river perfectly exemplifies how this new technology can reinforce this kind of objectification.
The World As It Should Be, or As It Is?
When it comes to AI development, developers are in some ways caught between a rock and a hard place. On one extreme, they use “distancing mechanisms” to protect users from overly graphic search results. “If people saw what is going down in slaughterhouses every time they google ‘farming,’ they would be traumatized immediately. So it is necessary to have lots of distancing mechanisms in place so that people [are in the right headspace to] make sound ethical decisions,” says Hagendorff. But these “distancing mechanisms” can end up masking the reality of factory farming altogether, perpetuating the myth that most farm animals freely roam green pastures.
The other extreme is when AI systems “learn,” and hence perpetuate, all the current problems with the world — including the perception that animals should be valued primarily for their agricultural worth.
Currently, racism, sexism and other human-centric forms of discrimination are actively removed from AI models because developers want to represent the world as it should be, not as it is, with all of its current problems. “But in the case of speciesism, the world as it should be is being used to push away important information about the world as it is,” says Hagendorff.
The process of training the models to produce fewer speciesist biases is the easy part, these experts agree. The hard part is actually motivating AI developers to do that in the first place, and getting people to recognize that AI does potentially harm animals.
Currently, there exists no global regulatory body to guide the rights and wrongs of AI development — but even AI developers themselves are putting out the call for one. “The EU is currently leading the way on this…but ultimately, governments and big businesses usually don’t do anything unless ordinary people make some noise about what we find to be problematic,” says Parker.
Like every other technology, AI can be used for vastly different purposes depending on who holds the power to use or develop it. At the moment, AI is primarily being developed and used by very large businesses based in the U.S. to drive consumption.
“That means that AI is probably going to be used against the interests of marginalized or vulnerable humans and non-humans…and [that] a lot of the ways in which AI currently profits comes from harming animals rather than empowering them,” says Parker.
How we address biases about farm animals in society is a much thornier issue, but it’s one Parker wonders about. “If we really want to care about animals and the environment, are we better off spending time staring at a computer spitting out lots of models, or do we spend more time sitting near a bush watching the birds? It doesn’t have to be either or, but that’s what I’ve been thinking about,” says Parker.
Rachel is a Singapore-based climate reporter with a particular interest in intersectional environmentalism and social justice. She has written for the likes of Mongabay and Eco-Business.