What do you think about just building misaligned AGI and letting it loose? That seems fairly similar to letting other civilisations take over. (Apologies that I haven’t read your evaluation.)
I think a better model is to treat humanity and alien civilizations as randomly sampled from among Intelligent Civilizations (ICs) with the potential to create a space-faring civilization. Alien ICs also have a chance of succeeding at aligning their ASI with positive moral values. Thus, under the Mediocrity Principle, the expected value produced by humanity and by a random alien IC is similar (as long as we don’t gain information that we are different).
Misaligned AIs are not sampled from this distribution, so letting one loose does not produce the same expected value. That is, letting loose humanity is equivalent in expectation to letting loose an alien IC (if we can’t predict their differences and the impact of those differences), but the same does not hold for a misaligned AGI.
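Concretely, writing $V(x)$ for the long-run value of letting $x$ loose and $D_{\text{IC}}$ for the distribution of ICs with space-faring potential (illustrative notation of mine, not from the original posts), the claim is:

$$\mathbb{E}[V(\text{humanity})] \approx \mathbb{E}_{c \sim D_{\text{IC}}}[V(c)] \approx \mathbb{E}[V(\text{alien IC})],$$

whereas a misaligned AGI is not a draw from $D_{\text{IC}}$, so nothing forces $\mathbb{E}[V(\text{misaligned AGI})]$ to match $\mathbb{E}_{c \sim D_{\text{IC}}}[V(c)]$.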
I hope that makes sense. You can also see the comment by MacAskill just below. For clarity, I think that letting loose a misaligned AGI is strongly negative, as argued in posts I’ve published.