There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving. Even though poverty is far lower and medical care is far better than in the past, there may also be more mental illness and loneliness. The mutational load within the human population may also be increasing. Taking the hedonic treadmill into account, happiness levels in general should be roughly stable in the long run regardless of life circumstances. One may object that wireheading could become feasible in the far future, yet wireheading may be evolutionarily maladaptive, and pure replicators may dominate the future instead. Andrés Gómez Emilsson has also talked about this in A Universal Plot—Consciousness vs. Pure Replicators.
Regarding averting extinction and option value, deciding to go extinct is far easier said than done. You can’t just convince everyone that life ought to go extinct. Collectively deciding to go extinct would likely require a singleton, as in Thomas Metzinger’s BAAN (benevolent artificial anti-natalism) scenario. Even if you could convince a sizable portion of the population that extinction is desirable, those people would simply be removed by natural selection, and the remaining portion of the population would continue existing and reproducing. Thus, if extinction turns out to be desirable, engineered extinction would most likely have to be done without the consent of the majority of the population. In any case, it is probably far easier to go extinct now, while we are confined to a single planet, than it would be during an age of galaxy-wide colonization.
There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving.
Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community side with a future that has more happiness than suffering.
happiness levels in general should be roughly stable in the long run regardless of life circumstances.
Maybe, but if we can’t make people happier, we can always just make more happy people. This would be highly desirable if you have a total view of population ethics.
Regarding averting extinction and option value, deciding to go extinct is far easier said than done.
This is a fair point. What I would say, though, is that extinction risk is only a very small subset of existential risk, so desiring extinction doesn’t necessarily mean you shouldn’t want to reduce most forms of existential risk.
Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community side with a future that has more happiness than suffering.
Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is that life now is better than it was thousands of years ago, then this is, in my opinion, a weak argument. Even if people like Steven Pinker are right, you are extrapolating billions of years into the future from the past few thousand years. To say that this is wild extrapolation is an understatement. I know Jacy Reese talks about it in this post, yet he admits that the expected value of the far future could be close to zero. Brian Tomasik also wrote this article about how a “near miss” in AI alignment could create astronomical amounts of suffering.
Maybe, but if we can’t make people happier, we can always just make more happy people. This would be highly desirable if you have a total view of population ethics.
Sure, it’s possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point of the population and make everyone have hyperthymia. But you must remember that millions of years of evolution put our hedonic set-points where they are for a reason. It’s possible that genetically engineered hyperthymia would be evolutionarily maladaptive, and the “super happy people” would die out in the long run.
Would you mind linking some posts or articles assessing the expected value of the long-term future?
You’re right to question this, as it is an important consideration. The Global Priorities Institute has highlighted “The value of the future of humanity” in its research agenda (pages 10-13). Have a look at the “existing informal discussion” on pages 12 and 13, some of which argues that the expected value of the future is positive.
Sure, it’s possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point
I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.
Thanks, although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, both the number of hedons and the number of dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you’re a total utilitarian, net hedons will be higher with more people, so adding more people is rational. If you’re an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn’t necessarily increase expected utility.
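To make that arithmetic concrete, here is a minimal sketch with made-up per-person welfare numbers (net hedons minus dolors); the specific values are purely illustrative assumptions, not claims about any actual population:

```python
# Toy comparison of total vs. average utilitarianism when adding net-positive lives.
existing = [5, 6, 7]    # current population: net-positive lives (hedons minus dolors)
newcomers = [2, 2]      # added lives: also net-positive, but below the current average

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

after = existing + newcomers

print(total(existing), total(after))      # 18 -> 22: the total view favors adding them
print(average(existing), average(after))  # 6.0 -> 4.4: the average view can count it as worse
```

On the total view, going from 18 to 22 net hedons is an improvement; on the average view, the drop from 6.0 to 4.4 is not, even though every added life is worth living.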
Yes, that is true. For what it’s worth, most people who have looked into population ethics at all reject average utilitarianism, as it has some extremely unintuitive implications, such as the “sadistic conclusion”: one can make things better by bringing into existence people with terrible lives, as long as doing so still raises the average wellbeing level, i.e. if existing people have even worse lives.
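To see how that can happen, here are toy numbers (again, purely illustrative assumptions) where adding people whose lives are not worth living still raises the average:

```python
# Toy numbers for the "sadistic conclusion" under average utilitarianism.
existing = [-10, -10, -10]   # existing people with very bad lives
added = [-1, -1]             # new people whose lives are also bad, just less bad

def average(pop):
    return sum(pop) / len(pop)

print(average(existing))          # -10.0
print(average(existing + added))  # -6.4: the average improves despite only adding negative lives
```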
The most direct (positive) answer to this question I remember reading is here.
Toby Ord discusses it briefly in chapter 2 of The Precipice.
Some brief podcast discussion here.
I suspect that many of the writings by people associated with the Future of Humanity Institute address this in some form or other. One reading of anything and everything by transhumanists / Humanity+ people (Bostrom included) is that the value of the future seems pretty likely to be positive. Similarly, I expect that a lot of the interviewees other than just Christiano on the 80k (and Future of Life?) podcasts express this view in some form or other and defend it at least a little bit, but I don’t remember specific references other than the Christiano one.
And there’s suffering-focused stuff too, but it seemed like you were looking for arguments pointing in the opposite direction.