Right. How can we prevent a misaligned AI from locking in bad values?
A misaligned AI surviving takeover counts as “no extinction”; see this comment by MacAskill: https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ
Maxime Riché 🔸
Longtermist Implications of the Existence Neutrality Hypothesis
Intelligent-life extinction could be prevented by creating a misaligned AI that locks in bad moral values, no?
Maybe see the comment by MacAskill https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ
I am curious about the lower tractability.
Do you think that changing the moral values/goals of the ASIs Humanity would create is not a tractable way to influence the value of the future?
If yes, is that because we are not able to change them, or because we don’t know which moral values to input, or something else?
In the second case, what about inputting the goal of figuring out which goals to pursue (“long reflection”)?
There is a misunderstanding: “Increasing value of futures where we survive” is an X-risk reduction intervention.
See the comment by MacAskill https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=jbyvG8sHfeZzMqusJ which clarifies that the debate is between Extinction-Risks and Alignment-Risks (a.k.a. increasing the value of the future), both of which are X-risks. The debate is not between X-risks and Alignment-Risks.
One of the most impactful ways to “increase the value of futures where we survive” is to work on AI governance and technical AI alignment.
I want to very briefly argue that given the complexity of long-term trajectories, the lack of empirical evidence, and the difficulty of identifying robust interventions, efforts to improve future value are significantly less tractable than reducing existential risk.
[...]
And compared to existential risk, where specific interventions may have clear leverage points, such as biosecurity or AI safety, increasing the quality of long-term futures is a vast and nebulous goal.

I guess there is a misunderstanding in your analysis. Please correct me if I am wrong.
“Increasing the quality of long-term futures” reduces existential risks. When longtermists talk about “increasing the quality of long-term futures,” they include progress on aligning AIs as one of the best interventions they have in mind.
To compare their relative tractability, let’s look at the best interventions to reduce Extinction-Risks on the one hand, and the best interventions for “increasing the quality of long-term futures”, what I call reducing Alignment-Risks, on the other.

Illustrative best PTIs for Extinction-Risks reduction: improving AI control, reducing AI misuse. These reduce the chance of AI destroying future Earth-originating intelligent agents.
Illustrative best PTIs for Alignment-Risks reduction: Technical AI alignment, improving AI governance. These improve the quality of the long-term futures.
Now, let’s compare their tractability. It is not clear how these interventions differ in tractability; they actually overlap significantly. It is not obvious that reducing misuse risks is harder than improving alignment or improving AI governance.
Interestingly, this points to a plausible contradiction among arguments against prioritizing Alignment-Risks: some will say that the interventions to reduce Alignment-Risks and Extinction-Risks are the same, and some will say they have vastly different tractability. One of the two groups must be incorrect: interventions can’t be the same and have different tractability.
The Convergent Path to the Stars—Similar Utility Across Civilizations Challenges Extinction Prioritization
It would only reduce the value of extinction risk reduction by an OOM at most, though?
Right, at most one OOM. Higher updates would require us to learn that the universe is more Civ-Saturated than our current best guess. This could be the case if:
- humanity’s extinction would not prevent another intelligent civilization from appearing quickly on Earth, OR
- intelligent life in the universe is much more frequent than we think (e.g., we learn that intelligent life can appear around red dwarfs, whose lifespans are 100B to 1T years).
Suppose that Earth-originating civilisation’s value is V, and if we all worked on it we could increase that to V+ or to V-. If so, then which is the right value for the alien civilisation? Choosing V rather than V+ or V- (or V+++ or V--- etc.) seems pretty arbitrary.
I guess, as long as V ~ V+++ ~ V--- (i.e., the relative difference is less than ~1%), it is likely not a big issue. The relative difference may only become large when we become significantly more certain about the impact of our actions, e.g., if we are the operators choosing the moral values of the first ASI.
You can find a first evaluation of the Civ-Saturation hypothesis in Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization. It seems pretty accurate as long as you assume EDT.
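To make the size of that update concrete, here is a toy calculation (the 84% figure is the post’s headline number; the resource-proportional model is my simplifying assumption): if other civilizations would recover 84% of our cosmic resources, the marginal value of preventing human extinction scales with the 16% that would otherwise stay unclaimed, a discount of roughly 6x, i.e., under one OOM.

```python
import math

# Toy model: discount on extinction-risk reduction under Civ-Saturation.
# Assumption (illustrative): the value of preventing extinction scales with
# the fraction of resources that would otherwise stay unclaimed by aliens.
recovery_fraction = 0.84               # share of our resources aliens would recover
unclaimed_fraction = 1 - recovery_fraction

# Value of extinction-risk reduction relative to a Civ-Empty universe:
discount = unclaimed_fraction          # 0.16 -> value divided by ~6
ooms_lost = -math.log10(discount)      # ~0.8 orders of magnitude

print(f"discount factor: {discount:.2f}")
print(f"update size: {ooms_lost:.2f} OOM (< 1 OOM)")
```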
> Civ-Similarity seems implausible. I at least have some control over what humans do in the future
Maybe there is a misunderstanding here. The Civ-Similarity Hypothesis is not about having control; it is not about marginal utility. It says that the expected utility (not the marginal utility) produced by space-faring civilizations, given either human or alien ancestry, is similar. The single strongest argument in favour of this hypothesis is that we are too uncertain about how conditioning on human or alien ancestry changes the utility produced in the far future by a space-faring civilization; too uncertain to say that U(far future | human ancestry) significantly differs from U(far future | alien ancestry).
Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization
Thank you for organizing this debate!
Here are several questions. They relate to two hypotheses that could, if both significantly true, make impartial longtermists update the value of Extinction-Risk reduction downward (potentially by 75% to 90%).
Civ-Saturation Hypothesis: Most resources will be claimed by Space-Faring Civilizations (SFCs) regardless of whether humanity creates an SFC.
Civ-Similarity Hypothesis: Humanity’s Space-Faring Civilization would produce utility similar to other SFCs (per unit of resource controlled).
For context, I recently introduced these hypotheses here, and I will publish a few posts producing preliminary evaluations of those during the debate week.
General questions:
What are the best arguments against these hypotheses?
Is the AI Safety community already primarily working on reducing Alignment-Risks and not on reducing Extinction-Risks?
By Alignment-Risks, I mean “increasing the value of futures where Earth-originating intelligent-life survive”.
By Extinction-Risks, I mean “reducing the chance of Earth-originating intelligent-life extinction”.
What relative importance is currently given to Extinction-Risks and Alignment-Risks in the EA community? E.g., what are the relative grant allocations?
Should the EA community do more to study the relative priorities of Extinction-Risks and Alignment-Risks, or are we already allocating significant attention to this question?
Specific questions:
Should we prioritize interventions given EDT (or other evidential decision theories) or CDT? How should we deal with uncertainty there?
I am interested in this question because the Civ-Saturation Hypothesis may be significantly true when assuming EDT (and thus assuming we control our exact copies, and that they exist), but may be pretty incorrect assuming CDT.
We are strongly uncertain about how the characteristics of ancestors of space-faring civilizations (e.g., Humanity) would impact the value space-faring civilizations would produce in the far future. Given this uncertainty, should we expect it to be hard to argue that Humanity’s future space-faring civilization would produce significantly different value than other space-faring civilizations?
I am interested in this question because I believe we should use the Mediocrity Principle as a starting point when comparing our future potential impact with that of aliens, and because it is likely very hard in practice to find arguments robust enough to update significantly away from this principle, especially given that many arguments reinforce the mediocrity prior (e.g., selection pressures and convergence arguments).
What are our best arguments supporting that Humanity’s space-faring civilization would produce significantly more value than other space-faring civilizations?
How should we aggregate beliefs over possible worlds in which we could have OOMs of difference in impact?
Here is the formalized solution I use to study the above hypotheses: Decision-Relevance of worlds and ADT implementations
You may want to see a series I am currently publishing, which includes some preliminary investigation of this question: https://forum.effectivealtruism.org/s/hi2DyFuqHmt9ieoCi
Formalizing Space-Faring Civilizations Saturation concepts and metrics
Decision-Relevance of worlds and ADT implementations
Great news!
> If there are other posts you think more people should read, please comment them below. I might highlight them during the debate week, or before.

I am in the process of publishing a series of posts (“Evaluating the Existence Neutrality Hypothesis”) related to the theme of the debate (“Extinction risks” vs “Alignment risks / Future value”). The series evaluates how to update on those questions given our best knowledge about potential space-faring civilizations in the universe.
I will aim to publish several of the remaining posts during the debate week.
Space-Faring Civilization density estimates and models—Review
I somewhat agree with your points. Here are some contributions and pushbacks:
I get that there’s been a lot of work on this and that we can make progress on it (I know, I’m an astrobiologist), but I’m sure there are so many unknown unknowns associated with the origin of life, development of sentience, and spacefaring civilisation that we just aren’t there yet. The universe is so enormous and bonkers and our brains are so small—we can make numerical estimates sure, but creating a number doesn’t necessarily mean we have more certainty.
Something interesting about these hypotheses and their implications is that they get stronger the more uncertain we are, as long as one uses some form of EDT (e.g., CDT + exact copies). The less we know about how conditioning on human ancestry impacts utility production, the closer the Civ-Similarity Hypothesis is to correct. The broader our distribution over the density of SFCs in the universe, the closer the Civ-Saturation Hypothesis is to correct. This holds as long as you account for the impact of correlated agents (e.g., exact copies) and they exist. For the Civ-Similarity Hypothesis, this comes from applying the Mediocrity Principle. For the Civ-Saturation Hypothesis, it comes from the fact that we have orders of magnitude more exact copies in saturated worlds than in empty worlds.
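The last point can be sketched numerically. Under EDT-style reasoning, the decision-relevance of a world scales with how many exact copies of us act in it; a toy sketch with made-up priors and copy counts:

```python
# Toy EDT weighting: the decision-relevance of a world scales with the number
# of exact copies of the decision-maker acting in it (illustrative numbers).
prior = {"empty": 0.5, "saturated": 0.5}       # credence in each world
copies = {"empty": 1, "saturated": 1_000_000}  # exact copies of us per world

# Unnormalized decision weight: prior credence x number of copies acting.
raw = {w: prior[w] * copies[w] for w in prior}
total = sum(raw.values())
weight = {w: raw[w] / total for w in raw}

# Even with a 50/50 prior, saturated worlds dominate our
# decision-relevant probability mass.
print(weight)
```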
I think you’re posing a post-understanding-of-consciousness question. Consciousness might be very special or it might be an emergent property of anything that synthesises information; we just don’t know. But it’s possible to imagine aliens with complex behaviour similar to us, but without evolving the consciousness aspect, as superintelligent AI probably will be. For now, the safe assumption is that we’re the only conscious life, and I think it’s very important that we act like it until proven otherwise.
Consciousness is indeed one of the arguments pushing the Civ-Similarity Hypothesis toward lower values (humanity being more important), and I am eager to discuss its potential impact. Here are several reasons why the update from consciousness may not be that large:
Consciousness may not be binary. In that case, we don’t know whether humans are low, medium, or high in consciousness; I only know that I am not at zero. We should then likely assume we are average. The relevant comparison is then no longer between P(humanity is “conscious”) and P(aliens creating SFCs are “conscious”), but between P(humanity’s consciousness > 0) and P(aliens-creating-SFCs’ consciousness > 0).
If human consciousness is a random fluke with no impact on behavior, then we have no reason to think that aliens would create more or less conscious descendants than us. Consciousness needs to have a significant impact on behavior to change the chance that (artificial) descendants are conscious. But the larger the effect of consciousness on behavior, the more likely consciousness is to be a result of evolution/selection (selected in or out).
We don’t understand much about how the consciousness of SFC creators would influence the consciousness of (artificial) SFC descendants. Even if Humans are abnormal in being conscious, it is very uncertain how much that changes how likely our (artificial) descendants are to be conscious.
I am very happy to get pushback and to debate the strength of the “consciousness argument” on Humanity’s expected utility.
What’s the difference between “P(Alignment | Humanity creates an SFC)” and “P(Alignment AND Humanity creates an SFC)”?
I will try to explain it more clearly. Thanks for asking.
P(Alignment AND Humanity creates an SFC) = P(Alignment | Humanity creates an SFC) x P(Humanity creates an SFC)
So the difference is that when you optimize for P(Alignment | Humanity creates an SFC), you no longer optimize for the term P(Humanity creates an SFC), which was included in the conjunctive probability.
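A tiny numeric sketch (probabilities made up for illustration) of why the two optimization targets come apart: an intervention that only raises P(Humanity creates an SFC) raises the conjunction but leaves the conditional unchanged.

```python
# Illustrative numbers only: optimizing the joint probability vs the conditional.
p_sfc = 0.5                      # P(Humanity creates an SFC)
p_align_given_sfc = 0.2          # P(Alignment | Humanity creates an SFC)

p_joint = p_align_given_sfc * p_sfc                  # P(Alignment AND SFC) = 0.10

# Intervention A: only makes an SFC more likely (e.g., survival-focused work).
p_sfc_after_a = 0.8
joint_after_a = p_align_given_sfc * p_sfc_after_a    # 0.16: joint went up...
cond_after_a = p_align_given_sfc                     # ...conditional unchanged

# Intervention B: only improves alignment conditional on an SFC existing.
p_align_after_b = 0.4
joint_after_b = p_align_after_b * p_sfc              # 0.20
cond_after_b = p_align_after_b                       # conditional doubled
```

If the Civ-Saturation and Civ-Similarity hypotheses hold, the term P(Humanity creates an SFC) matters much less, so intervention A’s gain mostly washes out while B’s does not.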
Can you maybe run us through 2 worked examples for bullet point 2? Like what is someone currently doing (or planning to do) that you think should be deprioritised? And presumably, there might be something that you think should be prioritised instead?
Bullet point 2 is: (ii) Deprioritizing to some degree AI Safety agendas mostly increasing P(Humanity creates an SFC) but not increasing much P(Alignment | Humanity creates an SFC).
Here are speculative examples. The degree to which their priorities should be updated is to be debated. I only claim that they may need to be updated conditional on the hypotheses being significantly correct.
AI Misuse reduction: If the PTIs are (a) to prevent extinction through misuse and chaos, (b) to prevent the loss of alignment power resulting from a more chaotic world, and (c) to provide more time for Alignment research, then it is plausible that the PTI (a) would become less impactful.
Misaligned AI control: If the PTIs are (c) as above, (d) to prevent extinction by controlling early misaligned AIs trying to take over, (e) to control misaligned early AIs so they work on alignment research, and (f) to create fire alarms (note: this somewhat contradicts path (b) above), then it is plausible that PTI (d) would be less impactful, since these early misaligned AIs may have a higher chance of not creating an SFC after taking over (e.g., they don’t survive destroying humanity, or don’t care about space colonization).
Here is another, vaguer dilution effect: if an intervention, like AI control, increases P(Humanity creates an SFC | Early Misalignment), then it may need to be discounted more than if it only increased P(Humanity creates an SFC). Increasing P(Humanity creates an SFC) may have no impact when the hypotheses are significantly correct, but increasing P(Humanity creates an SFC | Misalignment) is net negative, and Early Misalignment and (Late) Misalignment may be strongly correlated.
AI evaluations: The reduction of the impact of (a) and (d) may also impact the overall importance of this agenda.
These updates are, at the moment, speculative.
I think it’s a better model to think of humanity’s and aliens’ ICs as randomly sampled from among Intelligent Civilizations (ICs) with the potential to create a space-faring civilization. Alien civilizations also have a chance of succeeding at aligning their ASI with positive moral values. Thus, by the Mediocrity Principle, the expected value produced by both is similar (as long as we gain no information that we are different).
Misaligned AIs are not sampled from this distribution, so letting loose a misaligned AGI does not produce the same expected value. I.e., letting loose humanity is equivalent to letting loose an alien IC (if we can’t predict their differences and the impact of such differences), but letting loose a misaligned AGI is not.
I hope that makes sense. You can also see the comment by MacAskill just below.
For clarity, I think that letting loose a misaligned AGI is strongly negative, as argued in posts I published.