An ecosystem of organizations to initiate a “Hasty Reflection”
Values and Reflective Processes, Epistemic Institutions, Effective Altruism, Research That Can Help Us Improve
The Long Reflection appears to me to be robustly desirable. It only suffers from being more or less unrealistic depending on how it is construed.
In particular, I feel that two aspects of it are in tension: (1) delaying important, risky, and irreversible decisions until after we’ve arrived at a Long Reflection–facilitated consensus on them, and (2) waiting with the Long Reflection itself until after we’ve achieved existential security.
I would expect, as a prior, that most things happen because of economic or political necessity, which is very hard to influence. Hence the Long Reflection either has to ramp up early enough that we can arrive at consensus conclusions and then engage in the advocacy efforts that’ll be necessary to improve over the default outcomes or else risk that flawed solutions get locked in forever. But the first comes at the risk of diverting resources from existential security. This indicates that there is some optimal trade-off point between existential security and timely conclusions. (From this 2020 blog post of mine.)
Michael Aird suggested not using the term “Long Reflection” for the institution that I’m aiming for because it doesn’t share enough features with Ord’s Long Reflection. I call it the “Hasty Reflection” for now. If AI allows, we’ll perhaps leave our solar system within the coming millennium, or even within just a few centuries. The communication delays between parts of human civilization will then quickly increase to years, which will prevent efficient conversations. Barring the invention of faster-than-light communication, we will need to solve ethics and coordination and resiliently install the solution in our civilization before that happens. That seems like a project that may well take more than a millennium, so it’s fairly urgent. (Though likely less urgent than AI safety.)
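As a back-of-the-envelope illustration of the delay argument (my own numbers, not from the original post): a light-speed signal already takes minutes between Earth and Mars, and years between star systems, so interactive deliberation breaks down almost as soon as settlement leaves the solar system.

```python
# Rough light-delay calculation: one-way signal delays jump from
# minutes within the solar system to years between star systems.
SPEED_OF_LIGHT_KM_S = 299_792.458
AU_KM = 149_597_870.7        # astronomical unit in km
LIGHT_YEAR_KM = 9.4607e12    # light-year in km

def one_way_delay_s(distance_km: float) -> float:
    """Seconds for a light-speed signal to cross distance_km."""
    return distance_km / SPEED_OF_LIGHT_KM_S

# Earth-Mars at closest approach (~0.38 AU): a conversation with
# minutes of lag is awkward but possible.
mars_delay_min = one_way_delay_s(0.38 * AU_KM) / 60

# Proxima Centauri (~4.25 light-years): the lag is measured in years,
# so back-and-forth conversation is impossible without FTL channels.
proxima_delay_yr = one_way_delay_s(4.25 * LIGHT_YEAR_KM) / (365.25 * 24 * 3600)

print(f"Earth-Mars (closest approach): ~{mars_delay_min:.1f} minutes one-way")
print(f"Proxima Centauri: ~{proxima_delay_yr:.2f} years one-way")
```

The distances here are standard approximations; the point is only the three-orders-of-magnitude gap between intra-system and interstellar delays.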
I envision that the Hasty Reflection will have the following components:
Organizations that aim to improve incentives in academia, and maybe differentially the branches most relevant for the Hasty Reflection.
Organizations that, in the meantime, create strong alternatives to academia for researchers in the relevant fields, e.g., through proportional prizes and impact markets.
Organizations and individuals that improve epistemics, such as QURI, Rational Animations, and Kelsey Piper.
Organizations that build coalitions with political parties and media.
Organizations that think about the strategy of it all and coordinate the other organizations.
Organizations that conduct research into faster-than-light communication, because it might buy us time if it’s at all conceivable.
Parts of the Effective Altruism community already form some sort of proto–Hasty Reflection, so it should be easier to bootstrap a proper Hasty Reflection out of EA than to start from scratch.