Suffering-focused and downside-focused views, which you cover after strong longtermism, still support work to reduce certain x-risks, specifically s-risks. [...] Some AI safety work could look very good to both downside- and upside-focused views, so you might find you have more credence in working on that specifically.
FWIW, I agree with both of these points, and think they’re important. (Although it’s still the case that I’m not currently focused on s-risks or AI safety work, due to other considerations such as comparative advantage.)
I’m unsure of my stance on the other things you say in those first two paragraphs.