The way I like to describe it to my Intro to EA cohorts in the Existential Risk week is to ask “How many people, probabilistically, would die each year from this?”
So, if I think there’s a 10% chance AI kills us in the next 100 years, that’s 1 in 1,000 people “killed” by AI each year, which at a world population of roughly 7 billion works out to about 7 million per year, roughly 17x annual malaria deaths.
If I think there’s a 1% chance, AI “kills” 700,000 per year, which is still just as important as malaria prevention, and much more neglected.
If I think there’s a 0.1% chance, AI “kills” 70,000 per year: a non-trivial problem, but not one worth as many resources as more likely concerns.
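For anyone who wants the arithmetic spelled out, here’s a minimal sketch of the calculation. The ~7 billion world population and ~400,000 annual malaria deaths are rough figures I’m plugging in for illustration, not exact numbers:

```python
WORLD_POPULATION = 7_000_000_000        # rough figure, for illustration
MALARIA_DEATHS_PER_YEAR = 400_000       # approximate recent annual toll

def expected_annual_deaths(p_extinction_per_century: float) -> float:
    """Expected deaths per year, spreading the risk evenly over 100 years."""
    return (p_extinction_per_century / 100) * WORLD_POPULATION

for p in (0.10, 0.01, 0.001):
    deaths = expected_annual_deaths(p)
    print(f"{p:.1%} per century -> {deaths:,.0f} expected deaths/year "
          f"({deaths / MALARIA_DEATHS_PER_YEAR:.1f}x malaria)")
```

Running it gives 7 million, 700,000, and 70,000 expected deaths per year for the three probabilities above.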
That said, this only covers part of the inferential distance: by Week 5 of the Intro to EA cohort, people are already used to reasoning quantitatively and analysing cost-effectiveness.