I think it’s a question of priorities. Yes, irreversible technological regression would be incredibly bad for present humans, but so would lots of other things that deserve a lot of attention from a neartermist perspective. However, once you start assigning non-trivial importance to the long-term future, things like this start looking extraordinarily bad and so get bumped up the priority list.
Also, value lock-in could theoretically be caused by a totalitarian human regime with extremely high long-term stability.
I’d add s-risks as another longtermist priority not covered by either neartermist priorities or a focus on mitigating extinction risks (although one could argue that most s-risks are intimately entwined with AI alignment).