It is not clear to me that taking action on non-extinction x-risks would be in conflict with neartermist goals:
Value lock-in → like an AI singleton locking in a scenario that would not be optimal for longtermist goals? Isn’t that akin to the alignment problem, and so directly intertwined with extinction risk?
Irreversible technological regression → wouldn’t this be incredibly bad for present humans and so coincide with neartermist goals?
Any discrete event that prevents us from reaching technological maturity → wouldn’t this essentially translate to reducing extinction risk as well as ensuring we have the freedom and wealth to pursue technological advancement, thus coinciding with neartermist goals?
Am I missing something?
I think it’s a question of priorities. Yes, irreversible technological regression would be incredibly bad for present humans, but so would lots of other things that deserve a lot of attention from a neartermist perspective. However, once you start assigning non-trivial importance to the long-term future, things like this start looking far worse still and so get bumped up the priority list.
Also, value lock-in could theoretically be caused by a totalitarian human regime with extremely high long-term stability.
I’d add s-risks as another longtermist priority not covered by either neartermist priorities or a focus on mitigating extinction risks (although one could argue that most s-risks are intimately entwined with AI alignment).