FYI—subsamples of that survey were asked about this in other ways, which gave some evidence that “extremely bad outcome” was ~equivalent to extinction.
Explicit P(doom) = 5–10%

The level of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions asking about human extinction explicitly. The median respondent's probability of x-risk from humans failing to control AI[1] was 10%, weirdly higher than the median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these questions, and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that both numbers are very high: it seems the 'extremely bad outcome' numbers in the old question were not just catastrophizing merely disastrous AI outcomes.
Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’
That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’