It’s been under four years since ChatGPT was released, and in that time the Animal Welfare movement has had to grapple with what it means to face off with AI. This has not gone well. Not only does the Animal Welfare movement have no money, it also has no technical talent. While billions of dollars have (rightly) gone into questions about how to keep humans alive, extremely little has gone to the question of what about all the not-humans. In an ideal world, the animal movement would have put at least 10% of its funds into the equivalent of AI safety/capabilities work, but again there is no money and no expertise. It is hard to make speculative bets in areas you know nothing about when your counterfactual dollar can keep a hen out of a cage for 10 years. Fortunately, the animal movement did have Constance Li, who went hard on field building with Sentient Futures and was able to bring more people to the table. As a result, the field has led with community building instead of concrete projects, which has left people confused about the point.

That is all to say: the AIxAnimals field is very much behind.

It should be extremely clear that we want a lot of brain power on this problem. In any universe where AI doesn’t kill us all, we want to make sure the future it shapes goes well for animals too.

The big question is: what does AI mean for animals? Another way of putting it is: how do we navigate humanity’s biggest atrocity as we develop superintelligence?

We are currently at a course-correction stage. Manifund has set up the new Falcon Fund to get specific projects off the ground. I have donated to it. You should as well. It’s very important.

I am not part of any AIxAnimals organization, but I talk with pretty much everyone involved. I do fundraising for the broader Animal Welfare field, and my background is in machine learning research for alternative proteins. I am trying to help AIxAnimals get off the ground, which is why I am personally donating to Manifund’s new Falcon Fund.

What is AIxAnimals?

Yesterday’s criticism of the field actually offered a definition:

The AI x animals argument, as I understand it: AI systems are making decisions that affect how we use animals. Those systems don’t adequately represent animal welfare. If we can get welfare into the benchmarks/constitutions of AI labs, we can shift outcomes for animals at huge scale before they get locked in.

I agree with this; it’s well put.

I will address AI Animals Capabilities, such as enabling cultivated meat, later in the post.

Why AIxAnimals?

There are many theories of change here, but as a more general intuition pump, consider that it is the intersection between the worst thing that humanity does, and the technology that will determine the future. It’s an incredibly compelling moral drama. How are we going to navigate this one?

Theory 1: Direct impact on animals

The values that AI systems hold could be the difference between the practice ending entirely and factory farming being taken to the stars. We should be very afraid of lock-in. Solving cultivated meat is definitely not a guarantee, and you should not underestimate that some people want meat from an animal that lived and really suffered. We need to navigate this well. There are so many animals that even a marginal difference could have a huge effect, and the effect may be even larger when you consider wild animals. As a basic example, a difference in which insecticides an AI recommends to farmers could affect countless lives.

Theory 2: Impact on sentient life, including digital minds and humans

Animals are sentient. If AIs treat them poorly, it means we are not creating systems that care about sentient beings as such, and that bodes poorly for digital minds and humans alike.

The Survival and Flourishing Fund puts it well:

As AI capabilities continue to advance, humanity’s relationship to animal welfare takes on increased significance and urgency. The moral frameworks we develop and institutionalize now, including how we weigh the interests of non-human animals, have the potential to influence the values embedded in AI systems through the norms, laws, and training objectives that are set for them. Therefore, how humanity treats animals today may shape how AI systems treat all sentient life in the future.

Jaan Tallinn goes further to suggest this 

Theory 3: We shouldn’t build AI systems that commit moral atrocities

This is my own, more vibes-based argument, but I think it’s true: building AI systems that participate in moral atrocities seems likely to go badly in general.

Who is involved?

Main organizations

  • Sentient Futures is the fieldbuilding org for AIxAnimals. They have been very successful in bringing people together with conferences that frontier AI lab employees attend. They also do fieldbuilding for digital minds work. 

  • CAML: Compassion Aligned Machine Learning. They are a research organization that creates benchmarks and researches how AI labs might improve on them.

To be clear, there is exactly one small organization doing direct work in the space. CAML runs on a shoestring budget and had to move to Mexico to afford rent. That is not a great position from which to interact with an industry based in SF.

Academia and research

Other

On the funding side:

  • AIxAnimals RFP. It has $300k. The focus areas span upskilling animal orgs, fieldbuilding, and research. The funding came from Coefficient Giving, The Navigation Fund, and Stray Dog Institute, who all wanted to dip their toes in the water.

  • Survival and Flourishing Fund Animal Welfare Theme Round at $2-4M. They have a particular interest in AIxAnimals but are open to broader animal grants. Grants will be announced in November, and it may take longer for the money to go out.

So there is very little money immediately available for direct work. The Falcon Fund was set up to address this, and it is the only active grantmaker in the space.
