Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. In terms of the long-term future, reducing suffering in the far future may be more important than reducing existential risk. If life in the far future is significantly bad on average, space colonization could potentially create and spread a large amount of suffering.
My understanding is that Brian Tomasik has a suffering-focused view of ethics, in that he sees reducing suffering as inherently more important than increasing happiness, even if the ‘magnitude’ of the happiness and suffering is the same.
If one holds a more symmetric view, where suffering and happiness are equally important, it isn’t clear how useful his donation recommendations are.
Even if you value reducing suffering and increasing happiness equally, reducing S-risks would likely still greatly increase the expected value of the far future. Efforts to reduce S-risks would almost certainly reduce the risk of extreme suffering being created in the far future, but it’s not clear that they would reduce happiness much.
I’m not saying that reducing S-risks isn’t a great thing to do, nor that it would reduce happiness; I’m just saying that it isn’t clear that a focus on reducing S-risks, rather than on reducing existential risk, is justified if one values reducing suffering and increasing happiness equally.
I think robustness (or ambiguity aversion) favours reducing extinction risks without increasing s-risks and reducing s-risks without increasing extinction risks, or overall reducing both, perhaps with a portfolio of interventions. I think this would favour AI safety, especially work focused on cooperation, possibly other work on governance and conflict, and most other work to reduce s-risks (since it does not increase extinction risks), at least if we believe CRS and/or CLR that this work does in fact reduce s-risks. I think Brian Tomasik comes to an overall positive view of MIRI on his recommendations page, and Raising for Effective Giving (like CLR, a project of the Effective Altruism Foundation) recommends MIRI in part because “MIRI’s work has the ability to prevent vast amounts of future suffering.”
Some work to reduce extinction risks, like biosecurity and nuclear risk reduction, seems reasonably likely to me to increase s-risks on its own. There may also be arguments in its favour related to improving cooperation, but I’m skeptical of those.
For what it’s worth, I’m not personally convinced that any particular AI safety work reduces s-risks overall, because it’s not clear that the direct reduction in s-risks outweighs the increase that comes from reducing extinction risks; that said, I would expect CLR and CRS to be better donation opportunities for this, given their priorities. I haven’t spent a lot of time thinking about this, though.
If one values reducing suffering and increasing happiness equally, it isn’t clear that reducing existential risk is justified either. Existential risk reduction and space colonization mean that the far future can be expected to contain both more happiness and more suffering, which would seem to roughly cancel out in expected utility. More happiness + more suffering isn’t necessarily better than less happiness + less suffering. Focusing on reducing existential risks would only seem to be justified if either A) you believe in positive utilitarianism, i.e. that increasing happiness is more important than reducing suffering, B) the far future can reasonably be expected to have significantly more happiness than suffering, or C) reducing existential risk is a terminal value in and of itself.
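To make the structure of this argument explicit, here is a toy formalisation (my own notation and simplification, not something from the original comment): under symmetric weighting, the value of extra survival probability is roughly proportional to the expected happiness-suffering balance of the future it preserves.

\[
V_{\text{future}} \;=\; \mathbb{E}[H] - \mathbb{E}[S],
\qquad
\Delta\mathrm{EV} \;\approx\; \Delta p_{\text{survival}} \cdot V_{\text{future}},
\]

where \(H\) and \(S\) are total future happiness and suffering conditional on survival, and \(\Delta p_{\text{survival}}\) is the extra survival probability an intervention buys. With equal weights, \(\Delta\mathrm{EV} > 0\) only if \(\mathbb{E}[H] > \mathbb{E}[S]\), which is essentially condition B); options A) and C) instead change the weighting or the objective.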
I think EAs who want to reduce x-risk generally do expect the future to have more happiness than suffering, conditional on no existential catastrophe occurring. I think these people generally argue that quality of life has improved over time and believe that this trend should continue (e.g. Steven Pinker’s The Better Angels of Our Nature). Of course, life for farmed animals has got worse, but I think people expect that cultivated meat will eventually render factory farming redundant.
Also, considering extinction specifically, Will MacAskill has made the argument that we should avert human extinction based on option value, even if we think extinction might be best. Basically, even if we avert extinction now, we can in theory go extinct later on if we judge that to be the best option. In the meantime it makes sense to reduce existential risk if we are uncertain about the sign of the value of the future, to leave open the possibility of an amazing future.
I think there has recently been more skepticism about cultured meat (see here; although I still expect factory farming to be phased out eventually, regardless), but either way, it’s not clear a similar argument would work for artificial sentience, which could be used as tools, run in simulations, or even intentionally tortured. There’s also some risk that nonhuman animals themselves will be used in space colonization, but that may not be where most of the risk is.
It seems unlikely to me that we would go extinct, even conditional on “us” deciding it would be best. Who are “we”? There will probably be very divergent views, especially after space colonization, both within and between colonies; these colonies may be spatially distant and self-sufficient, so influencing them becomes much more difficult. You would need a sufficiently large coalition to agree and then to force the rest into extinction, and both steps seem unlikely, even conditional on “our” judgement that extinction would be better; actively attempting to force groups into extinction may itself be an s-risk. In this way, an option value argument may go the other way, too: once transformative AI (TAI) arrives in a scenario with multiple powers, or space colonization goes sufficiently far, going extinct effectively stops being an option.
I’m not really sure what to think about digital sentience. We could in theory create astronomical levels of happiness, astronomical levels of suffering, or both. Digital sentience could easily dominate all other forms of sentience so it’s certainly an important consideration.
This is a fair point to be honest!
Note that this post (written by people who agree that reducing extinction risk is good) provides a critique of the option value argument.
There is still the possibility that the Pinkerites are wrong, though, and that quality of life is not improving. Even though poverty is far lower and medical care is far better than in the past, there may also be more mental illness and loneliness than in the past. The mutational load within the human population may also be increasing. Taking the hedonic treadmill into account, happiness levels in general should be roughly stable in the long run regardless of life circumstances. One may object to this by saying that wireheading may become feasible in the far future. Yet wireheading may be evolutionarily maladaptive, and pure replicators may dominate the future instead. Andrés Gómez Emilsson has also talked about this in A Universal Plot—Consciousness vs. Pure Replicators.
Regarding averting extinction and option value, deciding to go extinct is far easier said than done. You can’t just convince everyone that life ought to go extinct. Collectively deciding to go extinct would likely require a singleton, as in Thomas Metzinger’s BAAN scenario. Even if you could convince a sizable portion of the population that extinction is desirable, these people would simply be removed by natural selection, and the remaining portion of the population would continue existing and reproducing. Thus, if extinction turns out to be desirable, engineered extinction would most likely have to be done without the consent of the majority of the population. In any case, it is probably far easier to go extinct now, while we are confined to a single planet, than it would be during the age of galaxy-wide colonization.
Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community expect a future that has more happiness than suffering.
Maybe, but if we can’t make people happier, we can always just make more happy people. This would be highly desirable if you have a total view of population ethics.
This is a fair point. What I would say, though, is that extinction risk is only a very small subset of existential risk, so desiring extinction doesn’t necessarily mean you shouldn’t want to reduce most forms of existential risk.
Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is that life now is better than it was thousands of years ago, this is, in my opinion, a weak argument. Even if people like Steven Pinker are right, you are extrapolating billions of years into the future from the past few thousand years. To say that this is wild extrapolation is an understatement. I know Jacy Reese talks about it in this post, yet he admits that the expected value of the far future could be close to zero. Brian Tomasik also wrote this article about how a “near miss” in AI alignment could create astronomical amounts of suffering.
Sure, it’s possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point of the population and make everyone have hyperthymia. But you must remember that millions of years of evolution put our hedonic set-points where they are for a reason. It’s possible that genetically engineered hyperthymia would be evolutionarily maladaptive, and the “super happy people” would die out in the long run.
You’re right to question this, as it is an important consideration. The Global Priorities Institute has highlighted “The value of the future of humanity” in their research agenda (pages 10-13). Have a look at the “existing informal discussion” on pages 12 and 13, some of which argues that the expected value of the future is positive.
I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.
Thanks. Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, both the number of hedons and the number of dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you’re a total utilitarian, the net hedons will be higher with more people, so adding more people is rational. If you’re an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn’t necessarily increase expected utility.
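As a minimal numerical sketch of this distinction (my own illustrative example, with made-up per-capita figures): holding per-capita hedons and dolors fixed, adding people increases total utility but leaves average utility unchanged.

```python
# Toy model: per-capita hedons/dolors are held fixed (illustrative numbers only).
def total_utility(n_people: int, hedons_per_capita: float, dolors_per_capita: float) -> float:
    # Total view: sum of net welfare across everyone, so it scales with population size.
    return n_people * (hedons_per_capita - dolors_per_capita)

def average_utility(n_people: int, hedons_per_capita: float, dolors_per_capita: float) -> float:
    # Average view: per-person net welfare, independent of how many people there are.
    return hedons_per_capita - dolors_per_capita

for n in (1_000, 2_000):
    print(n, total_utility(n, 10.0, 4.0), average_utility(n, 10.0, 4.0))
# 1000 -> total 6000.0, average 6.0; 2000 -> total 12000.0, average 6.0
```

So a total utilitarian counts the doubling as a large improvement, while an average utilitarian is indifferent.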
Yes, that is true. For what it’s worth, most people who have looked into population ethics at all reject average utilitarianism, as it has some extremely unintuitive implications, like the “sadistic conclusion”: one can make things better by bringing into existence people with terrible lives, as long as doing so still raises the average wellbeing level, i.e. if existing people have even worse lives.
The most direct (positive) answer to this question I remember reading is here.
Toby Ord discusses it briefly in chapter 2 of The Precipice.
Some brief podcast discussion here.
I suspect that many of the writings by people associated with the Future of Humanity Institute address this in some form or other. One reading of just about anything by transhumanists / Humanity+ people (Bostrom included) is that the value of the future seems pretty likely to be positive. Similarly, I expect that a lot of the interviewees other than Christiano on the 80k (and Future of Life?) podcasts express this view in some form or other and defend it at least a little bit, but I don’t remember specific references other than the Christiano one.
And there’s suffering-focused stuff too, but it seemed like you were looking for arguments pointing in the opposite direction.
I give some credence to the trendlines argument, but mostly think that humans are more likely to want to optimize for extreme happiness (or other positive moral goods) than extreme suffering (or other negatives/moral bads), and any additive account of moral goods will, in expectation, shake out to have a lot more positive moral goods than moral bads, unless you hold really extreme inside views on which optimizing for extreme moral bads is as likely as (or more likely than) optimizing for extreme moral goods.
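One crude way to write that intuition down (my framing, with symbols introduced purely for illustration): let \(p_{+}\) and \(p_{-}\) be the probabilities that the future gets optimized for extreme moral goods versus extreme moral bads, each with achievable magnitude of roughly \(M\). An additive account then gives

\[
\mathbb{E}[V] \;\approx\; p_{+} M - p_{-} M \;=\; (p_{+} - p_{-})\,M,
\]

which is positive whenever \(p_{+} > p_{-}\); you would need the “really extreme inside view” that \(p_{-} \ge p_{+}\) for these optimized outcomes alone to make the expected value negative.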
I do think there is a nontrivial P(s-risk | singularity), e.g. if a) our descendants are badly mistaken, or b) other agents follow through on credible pre-commitments to torture, but I think it ought to be surprising for classical utilitarians to believe that the EV of the far future is negative.
This is easy for me to say as someone who agrees with these donation recommendations, but I find it disappointing that this comment apparently has gotten several downvotes. The comment calls attention to a neglected segment of longtermist causes, and briefly discusses what sorts of considerations would lead you to prioritize those causes. Seems like a useful contribution.
Note that s-risks are existential risks (or at least some s-risks are, depending on the definition). Extinction risks are specific existential risks, too.