One reason you might believe in a difference in terms of tractability is the stickiness of extinction, and the lack of stickiness attaching to things like societal values. Here’s very roughly what I have in mind, running roughshod over certain caveats and the like.
The case where we go extinct seems highly stable, of course. Extinction is forever. If you believe some kind of ‘time of perils’ hypothesis, surviving through such a time should also result in a scenario where non-extinction is highly stable. And the case for longtermism arguably hinges considerably on such a time of perils hypothesis being true, as David argues.
By contrast, I think it’s natural to worry that efforts to alter values and institutions so as to beneficially affect the very long run by nudging us closer to the very best possible outcomes are far more vulnerable to wash-out. The key exception would be if you suppose that there will be some kind of lock-in event.
So does the case for focusing on better futures work hinge crucially, in your view, on assigning significant confidence to lock-in events occurring within the near term?
Yeah, I think that lock-in this century is quite a bit more likely than extinction this century. (Especially if we’re talking about hitting a point of no return for total extinction.)
That’s via two pathways:
- AGI-enforced institutions (including AGI-enabled immortality of rulers).
- Defence-dominance of star systems.
I do think that “path dependence” (a broader idea than lock-in) is a big deal, but most of the long-term impact of that goes via a billiards dynamic: path-dependence on X, today, affects some lock-in event around X down the road. (Digital rights and space governance are plausible examples of X here.)
I think my gut reaction is to judge extinction this century as at least as likely as lock-in, though a lot might depend on what’s meant by lock-in. But I also haven’t thought about this much!