Could you expand on what effects eating meat would have on thinking about s-risks and other AI stuff? What kinds of scenarios are you thinking of?
My initial reaction is somewhat sceptical. I think these effects are hard to assess and could go either way. But it depends a bit on what mechanisms you have in mind.
Quickly written:
Nobody actively wants factory farming to happen, but it’s the cheapest way to get something we want (i.e. meat), and we’ve built a system where it’s really hard for altruists to stop it from happening. If a pattern like this extended into the long-term future, we might want to do something about it.
In the context of AI, suffering subroutines might be an example of that.
Regarding futures without strong AGI: Factory farming is arguably the most important example of a present-day ongoing atrocity. If you fully internalize just how bad this is, that something comparable to a genocide (in moral badness, not in evil intent) is happening right here, right now, before our eyes, in wealthy Western democracies often considered the most morally advanced places on earth, and that it's really hard for us to stop it, that might affect your general outlook on the long-term future. I still think the long-term future will be great in expectation, but it also makes me think that utopian visions which don't consider these downside risks seem pretty naïve.
Thanks.
Regarding the first point: yeah, we should do something about it, but that seems unrelated to the claim that eating meat leads to motivated reasoning about s-risks and AI.
Regarding the second point, it is not obvious to me that eating meat leads to worse reasoning about suffering subroutines; in principle the opposite might be true. It seems very hard to tell. I also think there is a risk that such arguments beg the question (e.g. by assuming that suffering subroutines are a major risk, which is precisely the issue under discussion).
Regarding the third point: I'm not quite sure I follow, but in any event I think that futures without strong AGI might be dominated, in expected-value terms, by futures with strong AGI. Future downside risks should certainly be considered, but the link between that and current meat-eating is non-obvious.