On (b): The first thing to note is that the Long Reflection doesn't require stopping all actions "that could have a long term impact", and certainly not stopping people considering such actions. (I assume by "consider" you meant "consider doing it this year", or something like that?)
It requires stopping people from taking actions that we're not yet confident won't turn out to have been major, irreversible mistakes. So people could still do things we're already very confident are good, or things that are relatively minor.
Some good stuff from The Precipice on this, mainly from footnotes:
The ultimate aim of the Long Reflection would be to achieve a final answer to the question of which is the best kind of future for humanity. [...]
We would not need to fully complete this process before moving forward. What is essential is to be sufficiently confident in the broad shape of what we are aiming at before taking each bold and potentially irreversible action – each action that could plausibly lock in substantial aspects of our future trajectory.
Also:
We might adopt the guiding principle of minimising lock-in. Or to avoid the double negative, of preserving our options.
[Endnote:] Note that even on this view options can be instrumentally bad if they would close off many other options. So there would be instrumental value to closing off such options (for example, the option of deliberately causing our own extinction). One might thus conclude that the only thing we should lock in is the minimisation of lock-in.
This is an elegant and reasonable principle, but could probably be improved upon by simply delaying our ability to choose such options, or making them require a large supermajority (techniques that are often used when setting up binding multiparty agreements such as constitutions and contracts). That way we help avoid going extinct by accident (a clear failing of wisdom in any society), while still allowing for the unlikely possibility that we later come to realise our extinction would be for the best.
Also:
There may yet be ethical questions about our longterm future which demand even more urgency than existential security, so that they can't be left until later. These would be important to find and should be explored concurrently with achieving existential security.
Somewhat less relevant:
Protecting our potential (and thus existential security more generally) involves locking in a commitment to avoid existential catastrophe. Seen in this light, there is an interesting tension with the idea of minimising lock-in (here [link]). What is happening is that we can best minimise overall lock-in (coming from existential risks) by locking in a small amount of other constraints.
But we should still be extremely careful locking anything in, as we might risk cutting off what would have turned out to be the best option. One option would be to not strictly lock in our commitment to avoid existential risk (e.g. by keeping total risk to a strict budget across all future centuries), but instead to make a slightly softer commitment that is merely very difficult to overturn. Constitutions are a good example, typically allowing for changes at later dates, but setting a very high bar to achieving this.
With this in mind, we can tweak your question to "Some actions that could turn out to be major, irreversible mistakes from the perspective of the long-term future could be taken unilaterally. How could people be stopped from doing that during the Long Reflection?"
This ends up being roughly equivalent to the question "How could we get existential risk per year low enough that we can be confident of maintaining our potential for the entire duration of the Long Reflection (without having to take actions like locking in our best guess to avoid being preempted by something worse)?"
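(To give a rough sense of what "low enough" might mean, here's a back-of-the-envelope calculation of my own, not anything from the book: if per-year existential risk r were roughly constant and independent across a reflection lasting T years, the chance of getting through would be about (1 − r)^T ≈ e^(−rT). Keeping cumulative risk below, say, 10% over T = 1,000 years would then require r ≲ 10⁻⁴, i.e. roughly a 1-in-10,000 chance of catastrophe per year. Both the 10% and the 1,000 years are just illustrative numbers.)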
I don't think anyone has a detailed answer to that. But one somewhat promising point is that we may have to develop decent answers to that question just to avoid existential catastrophe in the first place. I.e., conditional on humanity getting to a Long Reflection process at all, my credence that humanity has good answers to those sorts of problems is higher than my current unconditional credence.
(This is also something I plan to discuss a bit more in those upcoming(ish) drafts.)