Chris—this is all quite reasonable.
However, one could dispute ‘Premise 2: AGI has a reasonable chance of arriving in the next 30 or 40 years.’
Yes, without any organized resistance to the AI industry, the AI industry will develop AGI (if AGI is possible) -- probably fairly quickly.
But, if enough people accept Premise 5 (likely catastrophe) and Premise 6 (we can make a difference), then we can prevent AGI from arriving.
In other words, the best way to make ‘AI go well’ may be to prevent AGI (or ASI) from happening at all.
Good point. I added in “by default”.
Also, I'd be keen to hear whether you think I should restructure this argument in any other way.