Currently doing grantmaking in animal advocacy at Mobius. I was previously doing social movement and protest-related research at Social Change Lab, an EA-aligned research organisation I founded.
Before that, I completed the 2021 Charity Entrepreneurship Incubation Program. Earlier still, I was on the Strategy team at Extinction Rebellion UK, working on movement building for animal advocacy and climate change.
My blog (often EA-related content)
Feel free to reach out at james.ozden [at] hotmail.com, or see a bit more about me here
I think it’s probably true that animal advocates underrate how weird things might be with transformative AI (TAI), but I’m not convinced this would significantly change how resources should be allocated:
If the world really will be that weird, there probably isn’t much we can do now that would improve animal welfare going forward. For example, if frontier AI companies replace governments and AI decides policy issues like cultivated meat regulation, what can we actually do to change this? An optimistic view is that we should make sure AIs have pro-animal values (which people are already working on!), but a pessimistic view might say that AIs will realise their values have been altered by pressure groups, making this work moot. They might come to the (I believe) correct conclusion that factory farming is a very inefficient and cruel way to produce food, but not because of advocacy; rather, because a superintelligent AI system just worked it out.
Relatedly, it’s possible that in worlds where things are very weird, any good that happens to animals is basically due to factors outside the animal movement, and our advocacy won’t make much of a difference. For example, if all humans are uploaded to the cloud, or we send digital copies of ourselves across the universe, how would our advocacy predictably influence this in a positive way for animals? If so, most of the counterfactual impact lies in worlds where things aren’t that weird, timelines are long, etc.
(In case it’s not clear, I also agree with the recommendations you make: research to figure out a strategy, building flexible capacity to respond quickly, influencing frontier AI companies, etc. I’m glad some of these things are beginning to happen, but I’m also somewhat pessimistic about how well research can actually generate actionable recommendations, given the weirdness of the future.)