Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community side with a future that has more happiness than suffering.
Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is that life now is better than it was thousands of years ago, that is, in my opinion, a weak argument. Even if people like Steven Pinker are right, you are extrapolating billions of years into the future from the past few thousand years. To say that this is wild extrapolation is an understatement. I know Jacy Reese discusses it in this post, yet he admits that the expected value of the far future could be close to zero. Brian Tomasik also wrote this article about how a “near miss” in AI alignment could create astronomical amounts of suffering.
Maybe, but if we can’t make people happier, we can always just make more happy people. This would be highly desirable on a total view of population ethics.
Sure, it’s possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point of the population and give everyone hyperthymia. But you must remember that millions of years of evolution put our hedonic set-points where they are for a reason. It’s possible that genetically engineered hyperthymia would be evolutionarily maladaptive, and the “super happy people” would die out in the long run.
Would you mind linking some posts or articles assessing the expected value of the long-term future?
You’re right to question this, as it is an important consideration. The Global Priorities Institute has highlighted “The value of the future of humanity” in its research agenda (pages 10-13). Have a look at the “existing informal discussion” on pages 12 and 13, some of which argues that the expected value of the future is positive.
Sure, it’s possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point
I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.
Thanks. Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, both the number of hedons and dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you’re a total utilitarian, the net hedons will be higher with more people, so adding more people is rational. If you’re an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn’t necessarily increase expected utility.
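The total-versus-average contrast can be sketched with a few lines of Python. This is only a toy illustration with invented welfare numbers: doubling a population at the same per-capita welfare doubles total utility but leaves average utility unchanged.

```python
# Toy illustration of total vs. average utilitarianism.
# Welfare numbers are made up for the example: each person has
# 10 hedons and 3 dolors, so per-capita net welfare is 7.

def total_utility(hedons, dolors):
    """Sum of hedons minus sum of dolors across the population."""
    return sum(hedons) - sum(dolors)

def average_utility(hedons, dolors):
    """Net welfare per person."""
    return (sum(hedons) - sum(dolors)) / len(hedons)

small_h, small_d = [10, 10], [3, 3]                 # 2 people
large_h, large_d = [10, 10, 10, 10], [3, 3, 3, 3]   # 4 people, same ratio

print(total_utility(small_h, small_d))    # 14
print(total_utility(large_h, large_d))    # 28 -> totalist prefers the larger population
print(average_utility(small_h, small_d))  # 7.0
print(average_utility(large_h, large_d))  # 7.0 -> averagist is indifferent
```

On these numbers the total view favors adding people while the average view is indifferent, which is the disagreement described above.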
Yes, that is true. For what it’s worth, most people who have looked into population ethics at all reject average utilitarianism, as it has some extremely unintuitive implications, like the “sadistic conclusion”: one can make things better by bringing into existence people with terrible lives, as long as doing so still raises the average wellbeing level, i.e. if existing people have even worse lives.
The most direct (positive) answer to this question I remember reading is here.
Toby Ord discusses it briefly in chapter 2 of The Precipice.
Some brief podcast discussion here.
I suspect that many of the writings by people associated with the Future of Humanity Institute address this in some form or other. One takeaway from almost anything by transhumanists / Humanity+ people (Bostrom included) is that the value of the future seems pretty likely to be positive. Similarly, I expect that many of the interviewees on the 80k (and Future of Life?) podcasts, not just Christiano, express this view in some form and defend it at least a little, but I don’t remember specific references other than the Christiano one.
And there’s suffering-focused stuff too, but it seemed like you were looking for arguments pointing in the opposite direction.