Speaking about AI Risk particularly, I haven't bought into the idea that there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had, many of my interlocutors haven't either. There are two key points to get across to bring the average interlocutor on the street or at a party up to an Eliezer Yudkowsky level of worrying:
Transformative AI will likely happen within 10 years, or 30
There’s a significant chance it will kill us all, or at least a catastrophic number of people (e.g. >100m)
It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.
If I think AI has maybe a 1% chance of being a catastrophic disaster, rather than, say, the 1/10 that Toby Ord gives it over the next 100 years or the higher risk that Yud gives it (>50%? I haven't seen him put a number to it)... then I have to go through the additional step of explaining to someone why they should care about a 1% risk of something. After the pandemic, where the average person had roughly a 1% chance of dying from covid, it has been difficult to convince something like a third of the population to give a shit about it. The problem with small numbers like 1%, or even 10%, is that a lot of people just shrug and dismiss them; cognitively they round to zero. But the conversation "convince me 1% matters" can look a lot like just explaining longtermism to someone.
The way I like to describe it to my Intro to EA cohorts in the Existential Risk week is to ask “How many people, probabilistically, would die each year from this?”
So, if I think there's a 10% chance AI kills us in the next 100 years, that's 1 in 1,000 people "killed" by AI each year, or 7 million per year—roughly 17x more than malaria kills annually.
If I think there's a 1% chance, AI kills 700,000 per year - still just as important as malaria prevention, and much more neglected.
If I think there's a 0.1% chance, AI kills 70,000 per year - a non-trivial problem, but not worth spending as many resources on as more likely concerns.
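Here's a minimal back-of-the-envelope sketch of that framing in Python. The population and malaria figures are placeholder assumptions of mine (7 billion people, roughly 400,000 malaria deaths per year), chosen to reproduce the numbers above:

```python
# Back-of-the-envelope: spread a century-scale risk evenly over 100 years
# and multiply by world population to get expected deaths per year.
# Placeholder assumptions: 7 billion people, ~400,000 malaria deaths/year.

WORLD_POPULATION = 7_000_000_000
ANNUAL_MALARIA_DEATHS = 400_000

def expected_annual_deaths(p_catastrophe_per_century: float) -> float:
    """Expected deaths per year if the risk is spread evenly over a century."""
    return (p_catastrophe_per_century / 100) * WORLD_POPULATION

for p in (0.10, 0.01, 0.001):
    deaths = expected_annual_deaths(p)
    ratio = deaths / ANNUAL_MALARIA_DEATHS
    print(f"{p:.1%} per century -> ~{deaths:,.0f} expected deaths/year "
          f"({ratio:.1f}x annual malaria deaths)")
```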
That said, this only covers part of the inferential distance—people in Week 5 of the Intro to EA cohort are already used to reasoning quantitatively about things and analysing cost-effectiveness.