I'd guess the "AI will be transformative, therefore deserves attention" arguments are among the oldest and most generally accepted within this space.
For various reasons I think the arguments for focusing on x-risk are much stronger than other longtermist arguments, but how best to do this, which x-risks to focus on, and so on, is all still new and somewhat uncertain.