Manifund is launching a new animal welfare fund, led by regrantor Marcus Abramovitch. We make rapid (<1 week), early-stage ($25k–$150k) grants across animal welfare, with a particular interest in the intersection of animals and transformative AI.
Reach out to marcus.s.abramovitch@gmail.com if you’d like to donate!
Why AI x animals?
Many EAs take seriously both the welfare of animals and the possibility of short AI timelines. But EA funders currently consider these in isolation: AI safety grants mostly ignore potential outcomes for non-human beings, while animal welfare grants assume business as usual, i.e. that the world in 10 years will mostly look like the world today.
We don’t expect this to be the case. One major goal of the fund will be to identify and create opportunities so that transformative AI secures good outcomes for animals. Some example projects we’d like to fund:
Animal harm benchmarks: There are only a handful of animal harm benchmarks, none of which have been adopted by frontier labs. The benchmarks that are well known and widely used (SWE-bench, FrontierMath) earned that status by rising to the top of a marketplace of benchmarks. The same should happen with animal welfare benchmarks: many should be created, some by established ML engineers, with the goal that one or two gain enough traction for labs to “hill-climb” on.
Animal welfare constitutions: Recently, Anthropic published Claude’s constitution, which lists the “welfare of animals and of all sentient beings” among the values weighed when determining how to respond to a prompt. That is one line in an 84-page document from one frontier lab. There should be ready-made texts of various lengths for constitutions, system cards, etc. to improve model behaviours and consideration of animals.
Watchdog organization: As AI takes effect across industries, there is a good chance that the factory farming industry and others will start using AI in ways, beyond Precision Livestock Farming, that we should get ahead of. Monitoring industry practices, as well as effects on wild animals, will be important for identifying high-leverage, urgent interventions.
Animal welfare salience in AI labs: Assuming AI systems will have profound effects on the world, it is important that those shaping the technology are aware of and care about animal welfare issues, since they are developing a technology with potentially large lock-in effects.
(We also expect to place some bets on non-AI opportunities that are unusually strong.)
Why rapid?
One of the top complaints among grantees is the glacial pace of funding decisions. For a founder deciding whether to leave their job or make their first hire, a quick response can be make-or-break. In other domains, Tyler Cowen’s Fast Grants and Jueyan Zhang’s AISTOF show that multi-month reviews don’t have to be the default. In the for-profit world, VCs similarly make decisions incredibly quickly.
By having one directly responsible individual for this fund, we eschew the overhead of typical grantmaking. As a Manifund regrantor on AI safety, Marcus has turned around funding decisions in under a week; Manifund can then wire funds within 3 days. We’re bringing this speed to the animal welfare space to serve early-stage orgs.
Why Marcus?
This fund represents a bet on Marcus’s taste and execution. He’s already funded many successful early-stage projects, and is fluent in both animal welfare and AI/AI safety issues.
Marcus has been a hardcore earn-to-give EA. He’s personally donated ~$1.5m, representing >60% of his lifetime earnings, primarily to animal welfare. He earned this money through poker, cryptocurrency/quant trading, prediction markets, and advising a family office. (He was, until he quit, the #1 trader on Manifold by all-time profit.)
Animal track record. Marcus has been an early backer of many projects that are now considered standout animal welfare charities, including:
Shrimp Welfare Project — electrical stunner placements now spare ~3.3 billion shrimp/year
Society for the Protection of Insects — state-level bans on insect factory farming
Compassion Aligned Machine Learning — animal-welfare evals for frontier AI
AI safety regranting record. This highlights Marcus’s eye for talent and understanding of frontier AI development. From a $100k Manifund regranting budget in 2023, Marcus funded:
Marius Hobbhahn, then starting Apollo Research
Jesse Hoogland, then starting Timaeus
Joseph Bloom, who went on to lead Whitebox Interp at UK AISI
Lisa Thiergart, who went on to lead MIRI’s technical governance team
Marcus also nudged his friend Ege Erdil to start Mechanize, and offered them their first investment.
Compared to other funders
We’re fans of the EA Animal Welfare Fund, the Navigation Fund, CG Farmed Animal Welfare and others in this space. We’re starting this fund as an alternative, for several reasons:
First, AI x animals. Other funders don’t currently prioritize interventions premised on a transformative AI world. We’re much more AI-pilled and expect there’s a lot of low-hanging fruit for this reason. The AI x Animals RFP and SFF’s 2026 round seem good, but neither is currently fundraising.
Second, speed of deployment. We think that there is a need for much faster deployment of funds given our timelines for transformative AI. Especially when it comes to piloting new projects and starting new orgs, we need to move as fast as the AI landscape is moving to support effective interventions.
Third, transparency. As with other grants on Manifund, every grant and its rationale will be made public on our site, in real time. Donors and grantees will be able to evaluate our decisions for themselves. We think this public record benefits the ecosystem: it builds trust, shares information, and gives potential donors much better insight into what we are doing.
Fourth, active grantmaking. Marcus plans on reaching out to promising individuals rather than primarily taking inbound applications. He has a wide network to draw upon, across the animal welfare, AI, and AI safety ecosystems.
How to donate
Reach out to marcus.s.abramovitch@gmail.com if you’d like to donate, or book a call here.
We’re targeting an initial $2m raise by May 15. Marcus is taking no salary; Manifund runs ops and fiscal sponsorship with a 5% overhead.
Manifund is a 501(c)(3) registered charity (officially “Manifold for Charity Inc.”), EIN 88-3668801; we can accept donations through DAFs, direct wire/bank transfer, crypto, and credit card.
Mechanize’s mission is to build AGI to replace human labor. They actively deny that AI misalignment risk is a concern. I would not list this as a positive under AI safety track record.
I wouldn’t say I “nudged” him. He was doing it. I invested since I thought it was a good investment (it has been). They had no problem raising money, and my investment replaced (some of) one of the other investors.
I wouldn’t have included this, but Austin really wanted to.
I have donated a lot of money recently to animal welfare (~$450k in the last 5 months); I would have donated less if I did not have this investment.
I included this story as a short anecdote about Marcus’s ability to spot talent, make active investments, and convince founders to take the leap, all of which I expect to transfer into helping start great AI x Animal orgs. I understand that different people in EA/AI safety have different takes about whether Mechanize specifically is good or bad—I happen to think good or at least neutral.
(And I take responsibility for any factual errors with this specific anecdote. Talking to Marcus just now, it seems like his main nudge was to convince Ege/Matthew/Tamay that the nonprofit structure was wrong for what they wanted to accomplish.)
Curious why you think Mechanize might be positive impact?
Thanks for asking!
My strong default prior is that most for-profits are good for the world, per the standard arguments: gains from trade, Paul Graham on wealth, the finding that corporations only keep 2.2% of the value they create.
Moreover, I like when people who share my values start valuable companies, because they often spend that money on projects that are good. Mechanize doesn’t seem that different from Asana, or maybe Microsoft, in this regard. Tamay has taken the GWWC pledge; I find Matthew’s writing (eg on AI rights) very informative.
Object-level, it seems like Mechanize mostly sells good code RL environments to Anthropic. Across the community, opinions on accelerating Anthropic capabilities are also mixed, but on balance I lean pro.
I personally benefit a lot from the good coding capabilities that Claude Code provides. This stage of AI/LLM development seems broadly good to me.
(Nb, my view wouldn’t change if they were mostly selling to OpenAI or GDM or something.)
On a minor level, some people view leaving Epoch as somehow a betrayal of Epoch or of the funding they received; this seems quite fake to me. I strongly support individuals’ rights to branch out and start new orgs. In any case, Epoch seems to have continued to do well.
Kinda off topic, but I’d love to have a canonical post on why most for-profits are good for the world that lists the standard arguments, if you or anyone else knows one. For now it’s just your comment that I’ll link people to ’^^
I just wanted to give props to this.
Thanks
Love this! I think speed of deployment is a really big issue in current funding in the space, with projects often waiting 3–12 months before a funder can make a decision (plus many only have RFPs running once or twice a year, and it can take another month or so to receive funding). Given how urgent AIxAnimals projects are, we really need faster decisions and implementation. Thanks for starting this!
Strong agree. I’m doing a lot of writing over the next month, some of which will tackle ways I think the EA funding system needs fixing. I think speed is crucial to grantmaking both in the strict monetary sense (interest rates) and in building momentum.
Hi Marcus.
Are you open to funding research on the sentience of nematodes? This is one of the “Four Investigation Priorities” mentioned in section 13.4 of chapter 13 of the book The Edge of Sentience by Jonathan Birch.
How about funding research on the time trade-offs between the pains defined by the Welfare Footprint Institute (WFI) by surveying people who have recently experienced excruciating pain? I think people suffering from cluster headaches would be good candidates. Ambitious Impact (AIM) currently estimates suffering-adjusted days (SADs) assuming that excruciating pain is 48.0 (= 11.7/0.244) times as intense as hurtful pain (you can ask Vicky Cox for the sheet), which I believe is very off. It implies 16 h of “awareness of pain is likely to be present most of the time” (hurtful pain) is as bad as 20.0 min (= 16/48.0*60) of “severe burning in large areas of the body, dismemberment, or extreme torture” (excruciating pain). Here is a thread where I discussed AIM’s pain intensities with the person responsible for their last iteration.
How about funding research on welfare comparisons across species? In Bob Fischer’s book about comparing welfare across species, the tentative sentience-adjusted welfare range of shrimp is 8.0% of that of humans. However, if the sentience-adjusted welfare range is proportional to “individual number of neurons”^“exponent”, and “exponent” can range from 0 to 2, which I consider reasonable, the sentience-adjusted welfare range of shrimp can range from 10^-12 (= (10^-6)^2) to 1 times that of humans.
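As a quick sanity check, the arithmetic in the two questions above can be reproduced in a few lines. This is just a sketch of the quoted numbers: the ~10^-6 shrimp/human neuron ratio is my reading of the comment, not a figure taken from Fischer’s book.

```python
# AIM's implied intensity ratio of excruciating vs. hurtful pain,
# from the quoted disability weights.
intensity_ratio = 11.7 / 0.244
print(round(intensity_ratio, 1))  # ~48.0

# Implied trade-off: 16 h of hurtful pain equals this many minutes
# of excruciating pain.
equivalent_minutes = 16 / intensity_ratio * 60
print(round(equivalent_minutes, 1))  # ~20.0

# Welfare-range bounds if range is proportional to
# (neuron ratio)^exponent with exponent between 0 and 2,
# assuming a shrimp/human neuron ratio of ~1e-6.
neuron_ratio = 1e-6
bounds = [neuron_ratio ** e for e in (0, 2)]
print(bounds)  # spans twelve orders of magnitude, from 1 down to ~1e-12
```

The wide exponent range is doing all the work here: a twelve-order-of-magnitude spread in the welfare range swamps any plausible disagreement about the point estimate.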
I am very open to funding research on the sentience of nematodes.
Regarding intensities of pain, I’m open to it, but would be surprised.
Welfare comparisons across species are also in scope. I consider Bob Fischer one of our best people, with a strong knack for making his research useful, and as much as is practicable he should have free rein to do the work he finds most valuable. This talk in 2023 shaped a lot of my thinking around smaller animals and very much cemented the idea that helping non-human animals was going to be far more cost-effective.