I really appreciate you writing this. Getting clear on one’s own reasoning about AI seems really valuable, but for many people, myself included, it’s too daunting to actually do.
If you think it’s relevant to your overall point, I would suggest moving the first two footnotes (clarifying what you mean by short timelines and high risk) into the main text, since ‘short timelines’ sometimes means <10 years and ‘high risk’ sometimes means >95%.
I think you’re expressing your attitude to the general cluster of EA/rationalist views around AI risk typified by eg. Holden and Ajeya’s views (and maybe Paul Christiano’s, I don’t know) rather than a subset of those views typified by eg. Eliezer (and maybe other MIRI people and Daniel Kokotajlo, I don’t know). To me, the main text implies you’re thinking about the second kind of view, but the footnotes are about the first.
And different arguments in the post apply more strongly to different views. Eg
Fewer ‘smart people disagree’ about the numbers in your footnote than about the more extreme view.
I’m not sure that Eliezer’s having occasionally been overconfident, while getting the general shape of things right, is any evidence at all against >50% AGI in 30 years or a >15% chance of catastrophe this century (though it could be evidence against Eliezer’s very-high-risk view).
The Carlsmith post you say you roughly endorse seems to put 65% on AGI within 50 years, with a 10% chance of existential catastrophe overall. So I’m not sure whether your conclusion is
‘I agree with this view I’ve been critically examining’
‘I’m still skeptical of 30 year timelines with >15% risk, but I roughly endorse 50 year timelines with 10% risk’
‘I’m skeptical of 10 year timelines with >50% risk, but I roughly endorse 30-50 year timelines with 5-20% risk’
or something else.
How do you think people should do this?