Will MacAskill, 80,000 Hours Podcast May 2022:

Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
I’m flagging this as something that I’m personally unsure about and tentatively disagree with.
It’s unclear how much more likely MacAskill means by “much”. My interpretation was that he probably meant something like 2-10x more likely.
My tentative view is that catastrophes that kill 99% of people are probably <2x as likely as catastrophes that kill 100% of people.
Full excerpt for those curious:
Will MacAskill: — most of the literature. I really wanted to just come in and be like, “Look, this is of huge importance” — because if it’s 50⁄50 when you lose 99% of the population whether you come back to modern levels of technology, that potentially radically changes how we should do longtermist prioritization. Because catastrophes that kill 99% of people are much more likely, I think, than catastrophes that kill 100%.
Will MacAskill: And that’s just one of very many particular issues that just hadn’t had this sufficient investigation. I mean, the ideal for me is if people reading this book go away and take one little chunk of it — that might be a paragraph in the book or a chapter of it — and then really do 10 years of research perhaps on the question.
I just asked Will about this at EAG and he clarified that (1) he’s talking about non-AI risk, (2) by “much” more likely he means something like 8x as likely, (3) most of the non-AI risk is biorisk, and his estimate of biorisk is lower than Toby’s; Will said he puts bio x-risk at something like 0.5% by 2100.
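To make the quantitative stakes concrete, here is a minimal back-of-envelope sketch (my own illustration, using the figures mentioned above; the function name and variable names are mine, not anything from the podcast):

```python
# Rough back-of-envelope sketch (illustrative, not from the podcast).
# If catastrophes that kill 99% of people are k times as likely as ones that
# kill 100%, and civilisation permanently fails to recover with probability p
# after a 99%-kill catastrophe, then the "extinction-equivalent" risk routed
# through near-extinction catastrophes is roughly k * p times the direct
# extinction risk.

def indirect_to_direct_ratio(k: float, p_no_recovery: float) -> float:
    """Ratio of extinction-equivalent risk from 99%-kill catastrophes
    (followed by permanent failure to recover) to direct 100%-kill risk."""
    return k * p_no_recovery

k = 8.0              # "much more likely" clarified as roughly 8x
p_no_recovery = 0.5  # the 50/50 recovery figure from the excerpt

print(indirect_to_direct_ratio(k, p_no_recovery))    # -> 4.0
print(indirect_to_direct_ratio(2.0, p_no_recovery))  # -> 1.0 (my <2x view)
```

On these numbers, whether the likelihood ratio is closer to 8x or to under 2x determines whether most of the extinction-equivalent risk comes from failing to recover after a near-extinction catastrophe or from outright extinction, which is why the 50/50 recovery question matters so much for longtermist prioritization.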