“[U]nderappreciatedly likely to be negative [...] from whatever plausible moral perspective” could mean many things. I may agree with the spirit behind this claim, but I want to flag that, personally, I think it’s <10% likely that, if the wisest minds of the EA community researched and discussed this question for a full year, they’d conclude that the future is net negative in expectation for symmetric or nearly-symmetric classical utilitarianism. At the same time, I expect the median future not to be great (partly because I already think the current world is rather bad), and I think symmetric classical utilitarianism is at the furthest end of the spectrum of what seems defensible.