> Civ-Similarity seems implausible. I at least have some control over what humans do in the future

Maybe there is a misunderstanding here. The Civ-Similarity hypothesis is not about having control; it is not about marginal utility. It is that the expected utility (not the marginal utility) produced by a space-faring civilization, given either human ancestry or alien ancestry, is similar. The single strongest argument in favour of this hypothesis is that we are too uncertain about how conditioning on human ancestry or alien ancestry changes the utility produced in the far future by a space-faring civilization. We are too uncertain to say that U(far future | human ancestry) significantly differs from U(far future | alien ancestry).
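In expectation notation (just restating the claim above, not a new result), the hypothesis is roughly:

$$\mathbb{E}\big[\,U(\text{far future}) \mid \text{human ancestry}\,\big] \;\approx\; \mathbb{E}\big[\,U(\text{far future}) \mid \text{alien ancestry}\,\big]$$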
No, I don’t think there’s a misunderstanding. It’s more that I think the future could go many different ways with wide variance in expected value, and I can shape the direction the human future goes but I cannot shape the direction that alien futures go.
What do you think about just building and letting misaligned AGI loose? That seems fairly similar to letting other civilisations take over. (Apologies that I haven’t read your evaluation.)
I think it is a better model to think of humanity and alien civilizations as randomly sampled from among the Intelligent Civilizations (ICs) with the potential to create a space-faring civilization. Alien civilizations also have a chance of succeeding at aligning their ASI with positive moral values. Thus, by assuming the Mediocrity Principle, we can say that the expected value produced by both is similar (as long as we don't gain information showing that we are different).
Misaligned AIs are not sampled from this distribution, so letting a misaligned AGI loose does not produce the same expected value. In other words, letting humanity loose is equivalent in expectation to letting an alien IC loose (if we can't predict their differences and the impact of such differences), but letting a misaligned AGI loose is not.
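Here is a minimal toy Monte Carlo sketch of that structural point (the success probabilities and payoff values are illustrative assumptions, not estimates): human-originated and alien-originated ICs are drawn from the same value distribution, so their expected values coincide, while a misaligned AGI is drawn from a different distribution and so has a different expected value.

```python
import random

random.seed(0)

def sample_ic_value():
    # Toy model: any IC (human- or alien-originated) succeeds at aligning
    # its ASI with positive values with some probability, else produces ~0.
    # The 0.3 success probability and the payoffs are illustrative only.
    return 1.0 if random.random() < 0.3 else 0.0

def sample_misaligned_agi_value():
    # A misaligned AGI is not drawn from the IC distribution; here it is
    # modeled (purely as an assumption) as producing zero or negative value.
    return -0.2 if random.random() < 0.5 else 0.0

N = 100_000
ev_human = sum(sample_ic_value() for _ in range(N)) / N
ev_alien = sum(sample_ic_value() for _ in range(N)) / N   # same distribution
ev_agi   = sum(sample_misaligned_agi_value() for _ in range(N)) / N

print(f"E[value | human ancestry]  ~ {ev_human:.3f}")
print(f"E[value | alien ancestry]  ~ {ev_alien:.3f}")
print(f"E[value | misaligned AGI]  ~ {ev_agi:.3f}")
```

The numbers themselves mean nothing; the point is only that the first two estimates converge to the same value because they are draws from the same distribution, while the third does not.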
I hope that makes sense. You can also see the comment by MacAskill just below. For clarity, I think that letting a misaligned AGI loose is strongly negative, as argued in posts I have published.
You can find a first evaluation of the Civ-Saturation hypothesis in *Other Civilizations Would Recover 84+% of Our Cosmic Resources—A Challenge to Extinction Risk Prioritization*. The hypothesis seems pretty accurate as long as you assume EDT.