> In aggregate, the group places 38% on AI existential catastrophe, conditional on AGI being developed by 2070, and 25% on existential catastrophe via misaligned AI takeover by 2100 (**suggesting that takeover is roughly two-thirds of the overall AI risk** [emphasis mine]).
The bolded portion isn’t correct, since the 38% is conditional on development of AGI while the 25% is unconditional. The probability of takeover conditional on TAI is estimated as 0.25/0.81 ≈ 31%, so a more accurate estimate for the portion of AI risk coming from misaligned takeover is 0.31/0.38 ≈ 82%.
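Spelling that arithmetic out (a sketch assuming takeover entails TAI, so the unconditional 25% is converted to a conditional probability by dividing by the 0.81 probability of TAI; the mismatch between conditioning on AGI by 2070 and TAI by 2100 is ignored here):

$$
P(\text{takeover} \mid \text{TAI}) \approx \frac{0.25}{0.81} \approx 0.31,
\qquad
\frac{P(\text{takeover} \mid \text{TAI})}{P(\text{AI x-catastrophe} \mid \text{AGI})} \approx \frac{0.31}{0.38} \approx 0.82 .
$$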
Apologies for any lack of clarity in the post.
Hi Eli — this was my mistake; thanks for flagging. We’ll correct the post.