I think the debate motion bundles together several distinct mechanisms by which human flourishing under AGI could translate into improved animal welfare, and I’m interested in which ones folks put the most weight on. I’ve tried to identify mechanisms that might connect human and animal welfare under AGI, each of which could hold in some possible worlds and fail in others. This list isn’t a claim about what I think is most probable, since I’m highly uncertain. Some mechanisms (a non-exhaustive list) might be:
Expanding moral circle: as AGI makes humans more secure and prosperous, humans may extend moral concern outward to more groups. I think this is possible, but wealthy societies have simultaneously industrialised animal agriculture and increased their reported concern for animal welfare, and that concern hasn’t prevented poor animal welfare.
How strong do you think the empirical relationship between prosperity and animal welfare concern actually is? And will concern translate to meaningful change?
More resources: AGI-driven wealth could let people direct more resources toward animal welfare. Global spending on improving animal welfare is currently tiny compared to the global meat industry, so more resources could make a meaningful difference.
Do you think rising incomes shift food consumption patterns toward higher-welfare products, or are attitudes to food sticky enough that the pattern persists even under significant income growth?
Do resources get distributed to those who would direct them to animal welfare, or do they get concentrated elsewhere?
Technological co-benefits: AGI solving human problems could also remove the barriers to replacing animal agriculture. I’m unsure how AGI-optimised factory farming plays out against other food systems that might emerge alongside AGI.
How do you expect AGI-optimised conventional farming to compete against AGI-optimised alternative proteins, or some other food system? Which gets there first, and does that create a lock-in dynamic?
How much can AGI help with non-technical barriers, like regulatory and political constraints?
Institutional improvement: AGI could create better, more rational institutions for humans, and the benefits could extend to animals.
Do you expect that new institutions reshaped by AGI would still need to explicitly include animal welfare in their objectives, or could animal welfare benefits emerge as a byproduct?
If AGI concentrates institutional power in the hands of a small number of actors, does that make pro-animal institutional reform more or less likely?
How sticky are today’s laws/regulations?
Moral AGI: a sufficiently capable AGI reasons from first principles, weights animal suffering heavily, and acts on it unprompted. Unlike moral circle expansion, which requires humans to change their values, this could bypass human values entirely. I think this is possible, but I worry that AGI could instead be well-aligned with today’s human values, which I don’t think would benefit animals.
Do you think rigorous moral reasoning from first principles tends toward weighting animal welfare heavily?
How much does it matter that animal welfare is underrepresented in AI alignment frameworks today?
Note: anything I say during the symposium is my personal view and not necessarily the views of my employer :)
Thanks for your comment. I agree it’s possible that ASI could come shortly after AGI, and I do caveat in the piece that if you believe this, most of the takeaways won’t hold.
What I wanted to do with this post wasn’t necessarily to persuade people of any one scenario, but to describe the actual bottlenecks that cultivated meat faces so that people can calibrate their own views, whatever those views are, against the real landscape. For example, if someone came away from reading this more optimistic about cultivated meat under AGI, but also better able to articulate why (in terms of how they think AGI solves the bottlenecks), I think that’s still a valuable outcome.
I used a narrow definition of AGI because I think that’s where actionable analysis is possible, but I agree it’s not necessarily enough. If you have recommendations for how to reason about worlds where current baselines genuinely don’t extrapolate at all, I’d really welcome them! It’s a problem I find really hard, and I think a lot of others do too, especially those coming from cause areas outside of AI safety.