A survey of ML researchers (not necessarily AI Safety or EA) gave the following:
That seems much higher than the corresponding estimate from any other field I can think of.
I think that an “extremely bad outcome” is probably equivalent to 1 billion or more people dying.
Do a near-majority of those who work in green technology (which feels like the right comparison class) think that climate change has a 10% chance of causing 1 billion deaths?
Personally, I think there is something like a 7% chance of extinction before 2050, which is way higher than my estimate for any other risk.
FYI—subsamples of that survey were asked about this in other ways, which gave some evidence that “extremely bad outcome” was ~equivalent to extinction.
Explicit P(doom)=5-10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high—it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcomes.
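The “median quite near the divide” point in the quoted passage is easy to illustrate with a toy simulation. The Python sketch below uses made-up numbers (an assumed 50/50 split between respondents answering 5% and 10%, not the survey’s actual data) to show that two random subsamples drawn from the same pool will land on different medians roughly half the time.

```python
import random
import statistics

# Toy model (assumed, not the survey's data): each respondent answers either
# 5% or 10% with equal probability, so the population median sits right at
# the divide between the two values.
def subsample_medians(n_per_group=101):
    """Draw two random subsamples (mimicking the two question framings)
    and return the median answer of each."""
    group_a = [random.choice([0.05, 0.10]) for _ in range(n_per_group)]
    group_b = [random.choice([0.05, 0.10]) for _ in range(n_per_group)]
    return statistics.median(group_a), statistics.median(group_b)

random.seed(0)
trials = 10_000
disagree = sum(1 for _ in range(trials) if len(set(subsample_medians())) == 2)
print(f"subsample medians disagree in {disagree / trials:.0%} of trials")
# Prints roughly 50%: one subsample's median lands on 5% and the other's on
# 10% about half the time, purely from who happened to get which question.
```

So the 5% vs. 10% gap between the two questions could easily be sampling noise rather than a real difference in views.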
[1] Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’
[2] That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’