I guess to me, the part of the future with 10^25 unhappy individuals sounds like an s-risk. I would imagine an s-outcome could take place in a universe that’s still net positive. Just because the universe may be net positive, though, doesn’t mean we shouldn’t be concerned with large s-outcomes that may happen.
Yeah. I put it the following way in another post:

Especially when it comes to the prevention of s-risks affecting futures that otherwise contain a lot of happiness, it matters a great deal how the risk in question is being prevented. For instance, if we envision a future that is utopian in many respects except for a small portion of the population suffering because of problem x, it is in the interest of virtually all value systems to solve problem x in highly targeted ways that move probability mass towards even better futures. By contrast, only a few value systems (ones that are strongly or exclusively about reducing suffering/bad things) would consider it overall good if problem x was “solved” in a way that not only prevented the suffering due to problem x, but also prevented all the happiness from the future scenario this suffering was embedded in.
So it’d be totally fine to address all sources of unnecessary suffering (and even “small” s-risks embedded in an otherwise positive future) if there are targeted ways to bring about uncontroversial improvements. :) In practice, it’s sometimes hard to find interventions that are targeted enough, because affecting the future is very difficult and we only have crude levers. Having said that, I think many of the things we’re going to support with the fund are actually quite positive for positive-future-oriented value systems as well. So there certainly are some more targeted levers.
There are instances where it does feel justified to me to also move some probability mass away from s-risks towards extinction (or paperclip scenarios), but that should be reserved either for uncontroversially terrible futures, or for those futures where most of the disvalue for downside-focused value systems comes from. I doubt this includes futures where 10^10 times more people are happy than unhappy.
And of course positive-future-oriented EAs face analogous tradeoffs of cooperation with other value systems.