Several background variables give rise to worldviews/outlooks about how to make the transition to a world with AGIs go well. Answering this question requires assigning values to those background variables, or placing weights on the various worldviews, and then thinking about how likely "Disneyland with no children" scenarios are under each worldview, e.g. by looking at how each one handles philosophical problems (particularly deliberation) and how likely obvious vs. non-obvious failures are under it.
That is to say, I think answering questions like this is pretty difficult, and I don’t think there are any deep public analyses about it. I expect most EAs who don’t specialize in AI alignment to do something on the order of “under MIRI’s views the main difficulty is getting any sort of alignment, so this kind of failure mode isn’t the main concern, at least until we’ve solved alignment; under Paul’s views we will sort of have control over AI systems, at least in the beginning, so this kind of failure seems like one of the many things to be worried about; overall I’m not sure how much weight I place on each view, and don’t know what to think so I’ll just wait for the AI alignment field to produce more insights”.