I’m not Buck, but I can venture some thoughts as somebody who thinks it’s reasonably likely we don’t have much time.
Given that “I’m skeptical that humans will go extinct in the near future” and that you prioritize preventing suffering over creating happiness, it seems reasonable for you to condition your plan on humanity surviving the creation of AGI. You might then back-chain from possible futures you want to steer toward or away from. For instance, if AGI enables space colonization, it sure would be terrible if we just had planets covered in factory farms. What is the path by which we would get there, and how can you change it so that we have, e.g., cultured-meat-production planets instead? I think this is probably pretty hard to do; the term “singularity” has been used partly to capture the fact that we cannot predict what happens after it. That said, the stakes are pretty astronomical, such that I think it would be pretty reasonable for >20% of animal advocacy effort to be aimed specifically at preventing AGI-enabled futures with mass animal suffering. This is almost the opposite of “we have ~7 years to deliver (that is, realise) as much good as we can for animals.” Instead it might be better to have an attitude like “what happens after 7 years is going to be a huge deal in some direction, let’s shape it to prevent animal suffering.”
I don’t know what kind of actions would be recommended by this thinking. To venture a guess: trying to accelerate meat alternatives, and doing lots of polling on public opinion about the moral questions around eating meat (with the goal of hopefully finding that humans think factory farming is wrong, so that a friendly AI system might adopt such a goal as well; human behavior in this regard seems like a particularly bad basis on which to train AIs). I’m pretty uncertain about these two ideas and wouldn’t be surprised if they’re actually quite bad.
Thank you, I appreciate you taking the time to construct this convincing and high-quality comment. I’ll reflect on this in detail.
I did do some initial scoping work on longtermist animal stuff last year, of which AGI-enabled mass suffering was of course a major part, so it might be time to dust that off.