I have a question I would like some thoughts on:

As a utilitarian, I personally believe alignment to be the most important cause area. Weirdly enough, though, even though I believe x-risk reduction to be positive in expectation, I also believe the future is most likely to be net negative.

I believe, though without a high level of certainty, that current utility on Earth is net negative due to wild animal suffering. If we therefore give the current world a utility value of −1, I would describe my beliefs about future scenarios like this:
~5% likelihood: ~10^1000 (very good future, e.g. hedonium shockwave)
~90% likelihood: ~−1 (the future is likely good for humans, but this is still negligible compared to wild animal suffering, which will remain)
~5% likelihood: ~−10^100 (s-risk-like scenarios)
My reasoning for thinking "scenario 2" is more likely than "scenario 1" is based on what the values of the general public currently seem to be. Most people seem to care about nature conservation, but hardly anyone seems interested in mass-producing (artificial) happiness. And while the Earth is only expected to remain habitable for about two billion years (whereas humans, assuming we avoid any x-risks, are likely to persist for much longer), I think that, when it comes to it, we'll find a way to keep the Earth habitable, and with it wild animal suffering.
Based on these three scenarios, you don't have to be a great mathematician to see that the future is most likely to be net negative, yet positive in expectation. While I still, based on this, find alignment to be the most important cause area, I find it quite demotivating to spend so much of my time (and donations) preserving a future that I find unlikely to be positive. But the thing is, most longtermists (and EAs in general) don't seem to share these beliefs. Even someone like Brian Tomasik has said that if you're a classical utilitarian, the future is likely to be good.
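For concreteness, here is a minimal sketch of that expected-value arithmetic, just taking the rough probabilities and utilities above literally (the numbers are illustrative, not precise estimates):

```python
from fractions import Fraction

# Rough sketch of the expected-value arithmetic, taking the stated
# probabilities and utilities literally (current world = -1).
# Exact fractions are used because 10**1000 overflows a float.
scenarios = [
    (Fraction(5, 100), 10**1000),    # very good future (e.g. hedonium shockwave)
    (Fraction(90, 100), -1),         # wild animal suffering remains
    (Fraction(5, 100), -(10**100)),  # s-risk-like scenarios
]

expected_value = sum(p * u for p, u in scenarios)
most_likely_outcome = max(scenarios, key=lambda s: s[0])[1]

print(expected_value > 0)    # True: positive in expectation (dominated by the 10**1000 scenario)
print(most_likely_outcome)   # -1: yet the single most likely outcome is net negative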
So now I'm asking: what am I getting wrong? Why is the future likely to be net positive?
Hey Jens, it's worth posting this one as a quick take; you're likely to get more engagement.
One thing I'd add here is that values are very weird, the future is very weird (and always has been), and the 90% likelihood of −1 therefore seems way too confident to me.