It seems like some discussion of s-risks is called for: they seem to be assumed away here, even though many longtermists are concerned about them.
It would be totally reasonable for the author to discuss s-risks. But only some s-risks are very concerning to utilitarians—for example, utilitarians don’t worry much about the s-risk of 10^30 suffering people in a universe with 10^40 flourishing people. And it’s not clear that utilitarian catastrophes are anywhere near as likely as the possible outcomes the author discusses. This post is written for utilitarians, and I’m not aware of arguments that it’s reasonably likely that the future is bad on a scale comparable to the goodness of “utilitarian AGI” (from a utilitarian perspective).
Utilitarianism =/= classical utilitarianism. I’m a utilitarian who would think that outcome is extremely awful. It depends on the axiology.
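A rough sketch of why the axiology matters (the weight w is my own illustrative parameter, not something either commenter uses): count each flourishing life as +1 and each suffering life as -w. The total value of the example outcome is then

V = 10^40 - w × 10^30.

With w around 1 (roughly the classical-utilitarian picture), the suffering term is a 10^-10 correction and V ≈ 10^40, so the outcome looks overwhelmingly good. Under a strongly suffering-focused axiology where w is on the order of 10^10 or more, the sign flips and the same outcome reads as catastrophic.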