Reasonable people think it has the highest chance of killing all of us and ending future conscious life. Compared to other risks it is bigger; compared to other cause areas it would extinguish more lives.
“Reasonable people think”—this sounds like a very weak way to start an argument. Who are those people—would be the next question. So let’s skip the deferring-to-authority argument. Then we have “the most chance”—what are the probabilities, and how soon in the future? Because when we talk about deprioritizing other cause areas for the next X years, we need to have pretty good probabilities and timelines, right? So yeah, I would not consider deferring to authorities a strong argument. But thanks for taking the time to reply.
A survey of ML researchers (not necessarily AI safety or EA researchers) gave the following:
That seems much higher than the corresponding estimate from any other field I can think of.
I think that an “extremely bad outcome” is probably equivalent to 1bn or more people dying.
Do a near majority of those who work in green technology (which feels like the right comparison class) believe that climate change has a 10% chance of causing 1bn deaths?
Personally, I think there is something like a 7% chance of extinction before 2050, which is waaay higher than for any other risk.
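For scale, here is a rough expected-death calculation from the two figures above (purely illustrative; the ~8bn world population is the only number not taken from the thread, and neither figure counts the loss of future lives, which is the larger stake being debated here):

```python
# Back-of-the-envelope expected deaths implied by the figures above.
# Illustrative only; ~8bn world population is an outside assumption.
WORLD_POPULATION = 8_000_000_000

# "10% chance of an extremely bad outcome", read as >= 1bn deaths:
ev_extremely_bad = 0.10 * 1_000_000_000    # >= 100M expected deaths

# "7% chance of extinction before 2050", i.e. everyone dies:
ev_extinction = 0.07 * WORLD_POPULATION    # ~560M expected deaths

print(f"extremely bad outcome: >= {ev_extremely_bad:,.0f} expected deaths")
print(f"extinction by 2050:    ~ {ev_extinction:,.0f} expected deaths")
```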
FYI—subsamples of that survey were asked about this in other ways, which gave some evidence that “extremely bad outcome” was ~equivalent to extinction.
Explicit P(doom) = 5–10%

The levels of badness involved in that last question seemed ambiguous in retrospect, so I added two new questions about human extinction explicitly. The median respondent’s probability of x-risk from humans failing to control AI[1] was 10%, weirdly more than the median chance of human extinction from AI in general,[2] at 5%. This might just be because different people got these questions and the median is quite near the divide between 5% and 10%. The most interesting thing here is probably that these are both very high—it seems the ‘extremely bad outcome’ numbers in the old question were not just catastrophizing merely disastrous AI outcomes.
[1] Or, ‘human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species’
[2] That is, ‘future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species’
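As a quick sanity check on the “different people got these questions” explanation, here is a small simulation (a sketch with made-up numbers, not the survey’s actual responses) showing how two random halves of the same respondent pool can easily report medians on opposite sides of the 5%/10% divide when the pooled median sits right between them:

```python
import random
import statistics

# Hypothetical response distribution: answers cluster at round numbers,
# putting the pooled median (7.5%) right between 5% and 10%.
# These are made-up values, not the survey's actual data.
population = [0.01] * 20 + [0.05] * 30 + [0.10] * 30 + [0.25] * 20

random.seed(0)
trials = 10_000
diverged = 0
for _ in range(trials):
    random.shuffle(population)
    half_a, half_b = population[:50], population[50:]
    # Count random splits where the two halves report different medians.
    if statistics.median(half_a) != statistics.median(half_b):
        diverged += 1

print(f"medians of the two halves differ in {diverged / trials:.0%} of splits")
```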
There is a big gap between killing all of us and ending future conscious life (on Earth? in our galaxy? the entire universe/multiverse?).
Yes, but that gap is much smaller for AI than for any other cause.
You’re right, conscious life will probably be fine. But it might not be.