Can you expand a bit on what you mean by these ideas applying better to near-termism?
E.g. out of ‘hey, it seems like machine learning systems are getting scarily powerful, maybe we should do something to make sure they’re aligned with humans’ vs. ‘you might think it’s most cost-effective to help extremely poor people or animals, but actually if you account for the far future it looks like existential risks are more important, and AI is one of the most credible existential risks, so maybe you should work on that’, the first one seems like a more scalable/legible message or something. Obviously I’ve strawmanned the second one a bit to make a point, but I’m curious what your perspective is!
Maybe I should have said global health and development, rather than near-termism.