Objection: The longtermist idea makes quite strong, somewhat counterintuitive claims about how to do good, but the longtermist community has not yet demonstrated appropriately strong intellectual rigour (other than in the field of philosophy) about these claims and what they mean in practice. Individuals should therefore be sceptical of longtermists' claims about how to do good.
Do you think there are any counterexamples to this? For example, certain actions to reduce x-risk?
I guess some of the "AI will be transformative, therefore it deserves attention" arguments are among the oldest and most generally accepted within this space.
For various reasons I think the arguments for focusing on x-risk are much stronger than other longtermist arguments, but how best to do this, which x-risks to focus on, etc., is all still new and somewhat uncertain.