I appreciate the clarity and structure of this post, and I essentially agree with its conclusions (e.g., I’ve switched into a longtermism-aligned career). On the other hand, I think some of the arguments given don’t necessarily support the conclusions, and that there are some other “objections” some people hold which you haven’t noted (some of which other commenters have already raised). I’ll put separate points in separate comments.
If we care about all the effects of our actions, it’s not clear that near-term interventions are any less speculative than long-term interventions. This is because of the dominating but uncertain long-term effects of near-term interventions
To me, that’s perhaps the most important pair of sentences in this post. I think this is a key point that people often miss. I believe Rob Wiblin discusses similar matters in the second interview here.
For the same reason, I also agree with the following sentence:
We have a better idea of OpenAI’s long-term effects than AMF’s, just because we’ve thought more about the long-term effects of OpenAI, and it’s targeting a long-term problem
However, it may be worth noting that supporting longtermist interventions like AI safety is probably also more likely than supporting AMF to have substantial negative effects, even if it’s still better in expectation.
AMF’s bad effects would probably have to flow through very minor population increases or fertility declines or something like that, and then through a complex and probably weak causal chain from there to anything of major long-term significance. Whereas with AI safety work, which I think is typically very valuable in expectation, it also seems pretty easy to imagine it being quite bad.
E.g., extra support to it could create an attention hazard, highlighting the potential significance of AI, leading to more funding or government involvement, leading to faster development and less caution, etc. Or the safety research could “spill over” into capabilities development, which may not be negative at all, but plausibly could be substantially negative.
I don’t think this is an argument for avoiding longtermist interventions. This is because I think that, very roughly speaking, we should do what’s best in expectation, rather than worrying especially much about “keeping our hands clean” and avoiding any risk of causing harm. But this does seem a point worth noting, and I do think it’s an argument for thinking more about downside risks in relation to longtermist interventions than in relation to near-termist (e.g., global poverty) interventions.
(That’s perhaps more of a tangent than a response to your core claims.)