“The easiest pain to bear is someone else’s.”
My research can be found at:
This EA Forum profile page
About me:
Ren (they/them), living on Kaurna Land (South Australia)
Preferred form of communication is email: ren (dot) springlea (at) animalask (dot) org.
My work focuses on animal advocacy.
I have experience in ecology, fisheries science, and statistics from my time in academia and government. I enjoy thinking about politics and social justice.
I like soccer!
I recently changed my surname (formerly Springlea) 😊 My email address is unchanged.
Thank you for this post. I work in animal advocacy rather than AI, but I’ve been thinking about some similar effects of transformative AI on animal advocacy.
I’ve been shocked by the progress of AI, so I’ve been thinking it might be necessary to update how we think about the world in animal advocacy. Specifically, my thinking runs roughly along these lines: “There’s a decent chance that the world will be unrecognisable in ~15-20 years, so we should probably be less confident in our ability to reliably impact the future via policy, and so interventions that require ~15-20 years to pay off (e.g. cage-free campaigns, many legislative campaigns) may end up having zero impact.” This is still a hypothesis, and I might make a separate forum post about it.
It struck me that this is very similar to some of the points you make in this post.
In your post, you’ve said you’re planning to act as though there are 4 years of the “AI midgame” and 3 years of the “AI endgame”. Translated into animal advocacy terms, this could be equivalent to something like “we have ~7 years to deliver (that is, realise) as much good as we can for animals”. (The actual number of years isn’t so important; this is just for illustration.)
Would you agree with this? Or would you have some different recommendation for animal advocacy people who share your views about AI having the potential to pop off pretty soon?
(Some context as to my background views: I think preventing suffering is more important than generating happiness; I think the moral value of animals is comparable to that of humans, e.g. within 0-2 orders of magnitude depending on species; I don’t think creating lives is morally good; I think human extinction is bad because it could directly cause suffering and death, but not so much because of the loss of potential humans who do not yet exist; I think S-risks are very, very bad; I’m skeptical that humans will go extinct in the near future; I think society is very fragile and could be changed unrecognisably very easily; I’m concerned more about misuse of AI than about any deliberate actions/goals of an AI itself; and I have a great deal of experience in animal advocacy and zero experience in anything AI-related. The person reading this certainly doesn’t need to agree with any of these views, but I wanted to highlight my background views so that it’s clear why I believe both “AI might pop off really soon” and “I still think helping animals is the best thing I can do”, even if that latter belief isn’t common among the AI community.)