Clearly written, well-argued, and up there amongst both his best work and I think one of the better criticisms of xRisk/longtermist EA that I’ve seen.
I think he’s pointed out a fundamental tension in the utilitarian calculus here, and identified the additional assumption that xRisk-focused EAs have to make for this to work (“the Time of Perils”), and he plausibly argues that this assumption is more difficult to argue for than the initial two (Existential Risk Pessimism and the Astronomical Value Thesis)[1]
I think it’s a rich vein of criticism that I’d like to see more xRisk-inclined EAs respond to further (myself included!)
Imo it would be his Existential Risk Pessimism and the Time of Perils series (it’s based on a GPI paper of his, which he also links to)
I don’t want to spell the whole thing out here, go read those posts :)
Thanks! I read it, and it’s an interesting post, but it’s not “about reasons for his AI skepticism”. Browsing the blog, I assume I should read this?