Finally, I expect the development of any new technology to be safe by default.
The argument you give in this paragraph only makes sense if “safe” is defined as “not killing everyone” or “avoiding risks that most people care about”. But what about “safe” in the sense of “not causing differential intellectual progress in the wrong direction, which could increase x-risk in the long run”, or “protecting against, or at least not causing, value drift, so that civilization ends up optimizing for the ‘right’ values in the long run (whatever the appropriate meaning of ‘right’ is)”?
If short-term extinction risk (and, more generally, the risks that most people care about) is small compared to other kinds of existential risk, it would seem to make sense for longtermists to focus their efforts more on the latter.
I agree regarding the value-drift and societal-trajectory worries, and I do think that work on AI is plausibly a good lever for affecting them positively.