Hi James. This is a great and valuable analysis and I’ve learnt a lot from it. One thing I think would be useful is more crossover between this sort of medium-term (30-50 years) thinking and ideas from longtermism. I don’t know much about longtermism, but here is my attempt:
Holden Karnofsky (co-CEO of Open Philanthropy) says “I estimate that there is more than a 10% chance we’ll see transformative AI within 15 years (by 2036); a ~50% chance we’ll see it within 40 years (by 2060); and a ~2/3 chance we’ll see it this century (by 2100).” I would like this possibility to be incorporated into analyses like this. Most straightforwardly, transformative AI could accelerate cultured meat research a lot. But I imagine it would affect these scenarios in other ways too, since it would change the world so much, and I would like people who think about AI to comment on what those ways could be.
One potential scenario that is missing is human extinction. Toby Ord gave a 1 in 6 chance of humanity not surviving the next 100 years in his book The Precipice.
There could also be a global catastrophe (e.g., a very bad pandemic, or a large-scale nuclear war followed by nuclear winter) that makes humanity take a big step backwards. What is the fate of factory farming in those scenarios?
Scenarios like the ones above make me think that what factory farming looks like in 50 years is a bit less directly important. Even if we get rid of factory farming, the world is quite likely to change unrecognisably soon afterwards (if not before), perhaps into something where factory farming is not that relevant anyway. Such possibilities also make it harder to plan for the future. What we do in animal advocacy could also have effects on the far future, and those effects might be more important. But then it might be better to think about how we affect various far-future scenarios directly. However, I still think the analysis you wrote is very useful; I’d just like us to build on it with some input from longtermists.
I think it’s usually okay for an issue-based analysis of the medium-term future to disregard relatively unlikely (though still relevant!) AI / x-risk scenarios. By relatively unlikely, I just mean significantly less likely than business-as-usual within the particular time frame we’re thinking about. As you said, if the world becomes unrecognizably different in this time frame, factory farming probably stops being a major issue and this analysis is less important. But if it doesn’t, or in the potentially very long time before it does, we won’t gain much strategic clarity about decreasing farmed animal suffering by approaching it with a longtermist lens. There’s a lot of suffering that probably won’t affect the long-run future but is still worth thinking about effectively. In other words, I don’t think longtermism helps us think about how to be animal advocates today.
Hmm, maybe you are right. Maybe the business-as-usual scenario, in which humanity economically stagnates, is the only one we can predict with enough clarity to draw useful conclusions. I guess my only point then is that medium-term strategy like this is a bit less important, because the future will probably not be business-as-usual for very long.
Well, we could also think about which scenarios lead to the most moral circle expansion among the people who might be making decisions that impact the far future. So e.g., maybe expanding animal advocacy to developing countries is less important because of this consideration? I don’t know how strong this consideration is, though, because I don’t know how decision-making might look in the future, but maybe nobody does. I guess doing many different things (which is what the author suggests) can also be a good way to prepare for future scenarios we can’t predict.
Hi Saulius, thanks for your kind words! I agree the longer-term ideas would be good to incorporate, and I actually thought I had put something about AI timelines in the alternative protein section, but it seems I didn’t. I definitely agree that something like transformative AI within the next 50 years (which is plausible, as the links you reference say) could massively speed up the development of low-cost alternative proteins, so that should be a factor pushing that scenario towards being more likely. On the other ways AI would change the world and affect farmed animals, as you say, that definitely seems more complicated, so it would be interesting to get the take of someone who works on AI.
On the other considerations around human extinction, global catastrophes, and other events that could change the future of humanity in huge ways, I agree they definitely make it harder to plan, and it’s not obvious what we should do in those cases. I think they probably a) warrant a lot more thought and b) are much harder to design robustly good interventions for. As you and Martin discuss below, it seems extremely challenging to predict good solutions for potentially very different futures, whereas making the next 50 years go well for animals seems comparatively easier, and I generally believe making the next 50 years go well will be good for the next 500-5,000 years too (although this might not always be true).
I guess to clarify some of your points: is it that medium-term strategy may be unimportant because things could change very significantly, so we should instead try to find ways to steer these future scenarios in directions that are conducive to good animal welfare (e.g. make sure ALLFED isn’t proposing insects etc.)?