I think it’s usually okay for an issue-based analysis of the medium-term future to disregard relatively unlikely (though still relevant!) AI / x-risk scenarios. By relatively unlikely, I just mean significantly less likely than business-as-usual within the particular time frame we’re thinking about. As you said, if the world becomes unrecognizably different in this time frame, factory farming probably stops being a major issue and this analysis matters less. But if it doesn’t, or during the potentially very long stretch before it does, we won’t gain much strategic clarity about decreasing farmed animal suffering by approaching it with a longtermist lens. There’s a lot of suffering that probably won’t affect the long-run future but is still worth thinking about effectively. In other words, I don’t think longtermism helps us think about how to be animal advocates today.
Hmm, maybe you are right. Maybe the business-as-usual scenario for humanity, one of economic stagnation, is the only one we can predict with enough clarity to draw useful conclusions. I guess my only point, then, is that medium-term strategy like this is a bit less important, because the future probably won’t stay business-as-usual for very long.
Well, we could also think about which scenarios lead to the most moral circle expansion among the people who might be making decisions that impact the far future. So, for example, maybe expanding animal advocacy to developing countries is less important in light of this consideration? I don’t know how strong this consideration is, though, because I don’t know how decision-making might look in the future — but maybe nobody does. I guess doing many different things (which is what the author suggests) can also be a good way to prepare for future scenarios we can’t predict.