It’s true that existential risk from AI isn’t generally considered a ‘near-term’ or ‘current’ problem. The point I was trying to make is that a strong longtermist’s view that it is important to reduce the existential threat of AI doesn’t preclude them from also thinking it’s important to work on near-term issues, e.g. for the knowledge creation that would afford.
Granted, any focus on AI work necessarily reduces the amount of attention going towards near-term issues, which I suppose is your point.
Apologies, I do still need to read your blogpost!
Yep :)