Which of David’s posts would you recommend as a particularly good example and starting point?
Imo it would be his Existential Risk Pessimism and the Time of Perils series (it’s based on a GPI paper of his that he also links to).
Clearly written, well-argued, and up there amongst his best work; I think it’s also one of the better criticisms of xRisk/longtermist EA that I’ve seen.
I think he’s pointed out a fundamental tension in utilitarian calculus here, and identified the additional assumption that xRisk-focused EAs need to make their case work: the “time of perils” hypothesis. He plausibly argues that this assumption is more difficult to defend than the initial two (Existential Risk Pessimism and the Astronomical Value Thesis).[1]
I think it’s a rich vein of criticism that I’d like to see more xRisk-inclined EAs respond to (myself included!)
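To gesture at the core tension (this is my rough reconstruction of the paper’s simple constant-risk model, not Thorstad’s exact presentation): let $r$ be the probability of existential catastrophe in each century and $v$ the value of each century we survive. Then the expected value of the future is

$$\mathbb{E}[V] \;=\; \sum_{t=1}^{\infty} (1-r)^t \, v \;=\; \frac{(1-r)\,v}{r},$$

and the gain from cutting this century’s risk by a fraction $f$ (from $r$ to $(1-f)r$) works out to exactly $f v$: at most one century’s worth of value, however large $r$ is. So Existential Risk Pessimism (high $r$) actually shrinks the expected value of the future, and you only recover astronomical stakes by assuming $r$ falls to near zero after the present era, i.e. the time of perils.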
I don’t want to spell the whole thing out here, go read those posts :)
Thanks! I read it, it’s an interesting post, but it’s not “about reasons for his AI skepticism”. Browsing the blog, I assume I should read this?
Depends entirely on your interests! They are sorted thematically: https://ineffectivealtruismblog.com/post-series/
Specific recommendations if your interests overlap with Aaron_mai’s: 1(a) on a tension between thinking X-risks are likely and thinking reducing X-risk has astronomical value; 1(b) on the expected value calculation in X-risk; 6(a) as a critical review of the Carlsmith report on AI risk.