I think arguing P(doom|AGI) >> 10% is a decent strategy. So far no one has given me good enough reasons to update in the other direction. The CEOs at the vanguard of AGI development need to really think about this. If they have good reasons for thinking that P(doom|AGI) ≤ 10%, I want to hear them! To give a worrying example: LeCun, frankly, sounds like he has no idea what the problem even is. OpenAI might think they can solve alignment, but their progress on alignment to date isn’t encouraging (it is a long way from the 100% watertight, zero-failure-modes solution we need). And Google DeepMind are throwing caution to the wind (despite safetywashing their statement with seven mentions of the word “responsible”/“responsibly”).
The above also shifts the public framing so that the burden is on the AI companies to prove their products are safe (in the sense of not causing global catastrophe). I’m unsure whether the public at large would tolerate a 1% risk. Maybe they would, given the potential upside. But we are not in that world. The risk is at least 50%, and probably closer to 99%, imo.