Interesting! Thanks for writing this. Seems like a helpful summary of ideas related to s-risks from AI.
Another important normative reason for dedicating some attention to s-risks is that the future (conditional on humanity's survival) is underappreciatedly likely to be negative (or at least not very positive) from whatever plausible moral perspective, e.g., classical utilitarianism (see DiGiovanni 2021; Anthis 2022).
While this does not speak in favor of prioritizing s-risks per se, it obviously speaks against prioritizing X-risks, which seem to be their biggest longtermist "competitors" at the moment.
(I have two unrelated remarks I’ll make in separate comments.)
"[U]nderappreciatedly likely to be negative [...] from whatever plausible moral perspective" could mean many things. I tentatively agree with the spirit behind this claim, but I want to flag that, personally, I think it's <10% likely that, if the wisest minds of the EA community researched and discussed this question for a full year, they'd conclude that the future is net negative in expectation under symmetric or nearly symmetric classical utilitarianism. At the same time, I expect the median future not to be great (partly because I already think the current world is rather bad), and I think symmetric classical utilitarianism sits at the furthest end of the spectrum of what seems defensible.