I think I disagree with several arguments here, and one of the main arguments could actually be read as an argument for longtermism. I have to add, this post is really well-written and the arguments/intuitions are expressed really clearly! Also, the epistemic status of my last paragraph is quite speculative.
First of all, most longtermist causes and projects aim to increase the chance of survival for existing humans (misaligned AI, engineered pandemics, and nuclear war are catastrophes that could take place within this century, if you don’t have a reason to completely disregard forecasters and experts), or to reduce the chance of global catastrophic events for the generation that is already alive. Again, biorisk and pandemics can be thought of as longtermist causes, but if more people had been working on these issues pre-2020, their work would have been impactful not only for future generations but also for already existing people who suffered through COVID-19.
If I’m not misunderstanding, one of the main ideas/intuitions that forms the basis of this review is: “It is uncertain whether future people will exist, so we should give more weight to the possibility that humanity may cease to exist, and donating to or working on longtermist causes may be less impactful compared to neartermist causes.” But if we ought to give more weight to the idea that future people may not exist, isn’t that an argument for working on x-risk reduction? Even if you hold a person-affecting view of population ethics, the world could be destroyed tomorrow, next week, or within this year/decade/century. S-risks that could result from a misaligned AI or from stable totalitarianism are events that could impact people who are already alive and cause them to suffer at an astronomical level, or, if we’re optimistic, merely curtail humanity’s potential in a way that renders the lives of already existing people more unbearable and prevents us from coordinating to reduce suffering.
Thirdly, I think it wouldn’t be wrong to say that “excited altruism” rather than “obligatory altruism” has been emphasized more and more as EAs started focusing on scaling and community-building. Peter Singer does think we have an obligation to help those who suffer, as long as it doesn’t cost us astronomically. Most variants of utilitarianism and Kantian-ish moral views would use the word “obligation” in a non-trivial way to frame our responsibility to help those who suffer and are worse off. Should I buy a yacht or save 100 children in Africa? Even though a lot of EAs wouldn’t say “you are obligated not to buy the yacht and to donate to GiveWell”, some EAs, including me, would probably agree that this is a moral dilemma where we could say that billionaire kind of has an obligation to help. You may disagree with this, and I would totally understand; you may even be right, because maybe there are no moral truths! That said, I’d say longtermism too can be, and often is, framed within a paradigm of excited altruism: the stakes are very high, and longtermism is usually targeted at audiences who are already EAs, so people use the word “should” because the conversation takes place between people who already agree that we should do good. So even if you’re not a moral realist and don’t believe in moral obligations, you can be a longtermist.
As a final point, I do agree that we don’t care about humanity in the abstract; usually people care about existing people because of intuitions/sentiments. But most people, with the exception of a few cultures, didn’t care about animals at all throughout humanity’s history. So when it comes to who we should care about and how we should think about that question, our hunches and intuitions usually don’t work very well. We also tend not to think about the welfare of insects and shrimp (I personally don’t at a sentimental level), but is there some chance that we should include these beings in our moral circles and care about them? I definitely wouldn’t say no. A lot of people’s hunch is also that we should care about the people around us, but that is incompatible with the idea that certain people aren’t more worthy of saving and caring for just because they are closer to us. By that hunch, a Brit should save one British person instead of 180 people from Malawi, and almost everyone (in the literal sense) acted that way until Peter Singer, because they had that hunch; but that hunch is unfortunately probably inaccurate if we want to do good. Likewise, we may have the intuition that when we’re doing good we should think more about people who already exist, but we may have to disregard that intuition and think about uncertainty more seriously and rationally, rather than simply disregard future people’s welfare because those people may not exist.
As a final-final point, coming up with a decision theory that prevents us from caring about posterity and future people is really, really hard. Even if you are very skeptical under uncertainty, and even if you don’t completely believe Toby Ord, MacAskill, or top forecasters like Eli Lifland (who published a magnificent critique of this book) and think the probability of x-risk is greatly overestimated, I think arguments based on the intuition that “it’s uncertain whether future people will exist” aren’t a counterargument against weak longtermism, or even against strong longtermism. This argument should instead lead us to think about which decision theory is best for navigating the uncertainty we face, rather than to prioritize people who already exist.
Btw, if you’re from Turkey and would like to connect with the community in Turkey, feel free to dm!
“Imagine that we spend the entire next century prioritizing the very long-term future, only to find out that humanity is sure to go extinct the next day. Would it not feel like we have made a big blunder?”
No more than we would feel we had made a big blunder if we had invested in a pension, only to find out that we have terminal cancer and won’t live to draw it. Uncertainty still requires action.
“Imagine that we spend the entire next century prioritizing the very long-term future, only to find out that humanity is sure to go extinct the next day. Would it not feel like we have made a big blunder?”
I guess if an asteroid were to hit us tomorrow, we in the EA community would be able to make peace with ourselves about the time we spent trying, despite the outcome. Making sacrifices for the future is one of the best ways I have found to justify the life we have been gifted with...