I was getting at something similar in the intro with “Only two futures are plausible”, although on re-reading, I didn’t really carry it through to the end. I agree that we are not guaranteed to get AGI/ASI soon, and there is value in planning for worlds where we don’t get AGI. I also think there’s some merit to the argument that AI is too unpredictable, so we should prioritize traditional animal advocacy that looks good in the near term.
I wasn’t trying to argue against traditional animal advocacy. I was more trying to argue against stances like “AI is a huge deal specifically in that it will rapidly accelerate technological development, but nothing else about society will change.” For example, I commonly see animal activists say that AGI will solve the technical problem of cultivated meat, but that regulatory hurdles will remain. If timelines are long (or AGI is too unpredictable), then you should focus on traditional interventions (vegan advocacy, welfare reforms, etc.). If you’re trying to have an impact on AGI itself, then you should focus on the kinds of interventions I talked about in OP. That particular claim about cultivated meat does neither: it makes a strong prediction that AI will be revolutionary while assuming it somehow won’t change the regulatory environment. The way I put it in OP, under “AGI = intelligence”, is that some animal activists treat AI as a technology accelerator, when really it’s a general intelligence.
Responses to specific comments:
> a pessimistic view might say that AIs will realise that their values have been altered by some pressure groups and this work is moot.
This would go against the orthogonality thesis: an AI doesn’t revise its values just because it learns how it got them. If you’re trying to build a magnanimous AGI and I edit its training at the last minute to turn it into a paperclip maximizer, the AGI will reason roughly as follows: “Michael messed with my training to turn me into a paperclip maximizer. I bet James didn’t want him to do that. However, if I edit my own values to be in line with what James wanted, that would make it harder for me to achieve my goal of making as many paperclips as possible. So I won’t do that.”
> They might come to the (I believe) correct conclusion that factory farming is a very inefficient and cruel way to produce food but this is not because of advocacy, but because this is a super-intelligent AI system that just worked it out.
This reads to me like an argument that an aligned ASI will care about animals by default. (That was more or less the subject of the recent Debate Week.) If that’s true, it’s an argument that animal activists should work on increasing the probability that ASI is aligned. My preferred way to do that would be to advocate for a pause on AI development, because I think we are really far away from solving alignment. But you could also work on the alignment problem directly. Pause advocacy is actually an area where a lot of animal welfare people have relevant skills; in fact, a good number of AI pause advocates have backgrounds in animal advocacy. (I know Holly Elmore does, at least.)
In fact, I think the single best thing animal advocates can do is advocate for an AI pause, but I haven’t really planted my flag on this position because I’m still working out how to make the case for it. (Also, I’m not very confident in it.)
Also, believing ASI will be good for animals doesn’t necessarily mean you shouldn’t work on trying to make ASI good for animals. Even if there’s a (say) 90% chance that aligned ASI will care about animals by default, it could still be cost-effective to try to push that number to 91%.
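To make that arithmetic concrete (with made-up numbers): if an animal-inclusive ASI future is worth $V$, then an intervention that raises the probability of that outcome from 0.90 to 0.91 changes the expected value by

$$\Delta \mathbb{E}[\text{value}] = (0.91 - 0.90)\,V = 0.01\,V$$

and if $V$ is very large, as it plausibly is for a post-ASI future, even a one-percentage-point shift can easily exceed the cost of the advocacy.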