Thank you for this post. I work in animal advocacy rather than AI, but I’ve been thinking about some similar effects of transformative AI on animal advocacy.
I’ve been shocked by the progress of AI, and it’s made me think we may need to update how we think about the world in animal advocacy. Specifically, I’ve been thinking roughly along the lines of: “There’s a decent chance the world will be unrecognisable in ~15-20 years, so we should probably be less confident in our ability to reliably impact the future via policy, and interventions that take ~15-20 years to pay off (e.g. cage-free campaigns, many legislative campaigns) may end up having zero impact.” This is still a hypothesis, and I might make a separate forum post about it.
It struck me that this is very similar to some of the points you make in this post.
In your post, you’ve said you’re planning to act as though there are 4 years of the “AI midgame” and 3 years of the “AI endgame”. If I translated this into animal advocacy terms, this could be equivalent to something like “we have ~7 years to deliver (that is, realise) as much good as we can for animals”. (The actual number of years isn’t so important, this is just for illustration.)
Would you agree with this? Or would you have some different recommendation for animal advocacy people who share your views about AI having the potential to pop off pretty soon?
(Some context as to my background views: I think preventing suffering is more important than generating happiness; I think the moral value of animals is comparable to that of humans, e.g. within 0-2 orders of magnitude depending on species; I don’t think creating lives is morally good; I think human extinction is bad because it could directly cause suffering and death, but not so much because of the loss of potential humans who do not yet exist; I think S-risks are very, very bad; I’m skeptical that humans will go extinct in the near future; I think society is very fragile and could be changed unrecognisably very easily; I’m concerned more about misuse of AI than about any deliberate actions/goals of an AI itself; and I have a great deal of experience in animal advocacy and zero experience in anything AI-related. The person reading this certainly doesn’t need to agree with any of these views, but I wanted to highlight them so that it’s clear why I believe both “AI might pop off really soon” and “I still think helping animals is the best thing I can do”, even if that latter belief isn’t common among the AI community.)
I’m not Buck, but I can venture some thoughts as somebody who thinks it’s reasonably likely we don’t have much time.
Given that you’re skeptical that humans will go extinct in the near future, and that you prioritize preventing suffering over creating happiness, it seems reasonable for you to condition your plan on humanity surviving the creation of AGI. You might then back-chain from possible futures you want to steer toward or away from. For instance, if AGI enables space colonization, it sure would be terrible if we just had planets covered in factory farms. What is the path by which we would get there, and how can you change it so that we end up with, e.g., cultured-meat production planets instead? I think this is probably pretty hard to do; the term “singularity” has been used partly to describe the fact that we cannot predict what happens after it. That said, the stakes are astronomical enough that I think it would be pretty reasonable for >20% of animal advocacy effort to be aimed specifically at preventing AGI-enabled futures with mass animal suffering. This is almost the opposite of “we have ~7 years to deliver (that is, realise) as much good as we can for animals.” Instead it might be better to have an attitude like “what happens after 7 years is going to be a huge deal in some direction, so let’s shape it to prevent animal suffering.”
I don’t know what kind of actions this thinking would recommend. To venture a guess: trying to accelerate meat alternatives, and doing lots of polling on public opinion about the ethics of eating meat (with the goal of hopefully finding that humans think factory farming is wrong, so a friendly AI system might adopt such a goal as well; human behavior in this regard seems like a particularly bad basis on which to train AIs). I’m pretty uncertain about these two ideas and wouldn’t be surprised if they’re actually quite bad.
Thank you, I appreciate you taking the time to construct this convincing and high-quality comment. I’ll reflect on this in detail.
I did do some initial scoping work on longtermist animal stuff last year, of which AGI-enabled mass suffering was of course a major part, so it might be time to dust that off.