I just asked Will about this at EAG and he clarified that (1) he’s talking about non-AI risk, (2) by “much” more he means something like 8x as likely, (3) most of the non-AI risk is biorisk, and he puts biorisk lower than Toby does; Will said he puts bio xrisk at something like 0.5% by 2100.