Regarding the first point, yeah we should do something about it, but that seems unrelated to the point about eating meat leading to motivated reasoning about s-risks and AI.
Regarding the second point, it is not obvious to me that eating meat leads to worse reasoning about suffering subroutines. In principle the opposite might be true; it seems very hard to tell. I think there is a risk that arguments about this beg the question (e.g. by assuming that suffering subroutines are a major risk, which is precisely the issue under discussion).
Regarding the third point, I'm not quite sure I follow, but in any event I think that futures without strong AGI might be dominated in expected value terms by futures with strong AGI. And certainly future downside risks should be considered, but the link between those risks and current meat-eating is non-obvious.
Thanks.