Response to recent criticisms of EA “longtermist” thinking

This is a response to some recent criticisms of “longtermist” EA thinking, organized as an FAQ addressing each concern in turn.

Does the Bostromian paradigm rely on transhumanism and an impersonal, totalist utilitarianism?

Some object that the long-term paradigm stems from two axiological positions: utilitarianism and transhumanism.

Bostrom’s views do not rely on utilitarianism. They do require that the future be considered potentially extremely valuable relative to the present, based on quality and/or quantity of life, so some sort of value aggregation is required. However, intrinsic discounting, as well as a variety of nonconsequentialist views about present-day duties (against lying, killing, and so on), is fully compatible with Bostrom’s paradigm.

Bostrom’s paradigm doesn’t quite require transhumanism. If humanity reaches a stable state of Earthly affairs, we might theoretically continue for hundreds of millions of years, being born and dying in happy 100-year cycles, which would be sufficient for an extremely valuable long-run future. Existential risks may be a big problem over this timeframe, however. Conscious simulations or human space colonization would be required for a reliably super-valuable far future.

Conscious simulations might not technically count as transhumanism. The idea that we can upload our current brains onto computers is generally considered transhumanism, but that is not the only way of having conscious simulations or computations. Of course, conscious intelligent simulations are still a pretty “out there” sci-fi scenario.

Space travel may require major changes to humans in order to be successful. We could, in theory, focus entirely on terraforming and travel with Earthlike space arks; this would theoretically enable major space travel with no transhumanism, but it would be hard, and our descendants would almost certainly choose a different route. If we made minor genetic changes to make humans more resilient to radiation and low-gravity environments, that could greatly reduce the difficulty of space travel, though it’s unclear whether this should be considered transhumanism. Proper transhumanism to make us smarter, longer-lived and more cooperative would broadly help, however. Another option is to have space travel and terraforming done by automated systems, with the first humans very similar to us except for being conceived, born and raised de novo by robots. Again, I don’t know whether this is technically transhumanism, although it is certainly ‘out there.’

Finally, you could believe that transhumanism will only be adopted for key purposes like space travel. Just because we can train astronauts does not mean we all want to become astronauts. Transhumanism could be like astronaut training: something clunky and unpleasant that is authorized for a few, but not done by ordinary people on Earth or on post-terraformation worlds.

In summary, while longtermism shares some ideas with utilitarianism and transhumanism, neither utilitarian moral theory nor the aspiration to broadly re-engineer humanity is really required for a long-term view.

If someone objects to axiological utilitarianism or axiological transhumanism, it’s best for them to think carefully about what their particular objections are, and then see whether those objections actually pose a problem for the longtermist view.

Are long-term priorities distracting?

One worry about long-term priorities is that they can distract us from short-term problems. This is easily identified as a spurious complaint. Every cause area distracts us from some other cause area. Short-term priorities distract us from long-term priorities. That is the very nature of Effective Altruism and, indeed, of the modern resource-limited world. It is not a serious criticism.

Do long-term priorities imply short-term sacrifices?

Another worry is that long-term views imply we might tolerate doing bad things in the short term if they help the long term. For instance, starting a war could be justified if it reduced existential risk.

This seems like a basically moral complaint: “longtermists will achieve their goal of maximizing human well-being, but the process may involve things I cannot tolerate, given my moral views.”

Again, this objection applies to any kind of priority. If you are very concerned with a short-term problem like global disease and poverty, you might similarly decide that some actions to harm people in the long-run future are justified to assist your own cause. Furthermore, you might also decide that actions to harm some people in the short run are justified to save others in the short run. This is just the regular trolley problem. An act-consequentialist view can compel you to make such tradeoffs regardless of whether you prioritize the short run or the long run. Meanwhile, if you reject the idea of harming a few to save the many, you will not accept the idea of harming people in the short run to help people in the long run, even if you generally prioritize the long run. So in theory, this is not about short-term versus long-term priorities, it is just about consequentialism versus nonconsequentialism.

You might say that some people take a more nuanced position between the hard consequentialist and the hard nonconsequentialist view. Suppose that someone does not believe in killing 1 to save 5, but does believe in killing 1 to save 10,000. This person might see ways that small short-term harms could be offset by major long-term benefits, without seeing ways that small short-term harms could be offset by other, more modest short-term benefits. But of course this is a contingent fact. If they ever do encounter a situation where they could kill 1 to save 10,000 in the short run, they will be obliged to take that opportunity. So there is still the same moral reductio ad absurdum (assuming that you do in fact think it’s absurd to make such sacrifices, which is dubious).

One could make a practical argument instead of a moral one: that longtermist priorities are so compelling that they make it too easy for politicians and others to justify harmful, aggressive actions against their enemies. On this view, long-term priorities are a perfectly good idea for us to believe and to share with each other, but not something to promote in more public political and military contexts.

Speculating about how policymakers will act on the basis of a philosophy is a very dubious approach. I have my own speculations – I think they will act well, or at least much better than the likely alternatives. But a better methodology is to look at what people’s military and political views actually are when they subscribe to Bostrom’s long-term priorities. See the views of the Candidate Scoring System under “long run issues”, or see what other EAs have written about politics and international relations. They are quite conventional.

Moreover, Bostrom’s long-term priorities are a very marginal view in the political sphere, and it will be a long time before they become the dominant paradigm, if ever.

In summary, the moral argument does not work. Pragmatically speaking, it may be good to think hard about how long-term views should be packaged and sold to governments, but that’s no reason to reject the idea, especially not at this early stage.

Do long-term views place a perverse priority on saving people in wealthy countries?

Another objection to long-term views is that they could be interpreted as putting a higher priority on saving the lives of people in wealthy rather than poor countries, because such people contribute more to long-run progress. This is not unique to Bostrom’s priorities; it is shared by many other views. Common parochial views in the West – giving to one’s own university or hometown – similarly put a higher priority on local people. Nationalism puts a higher priority on one’s own country. Animal-focused views can come to this kind of conclusion too, not for lifesaving but for increasing people’s wealth, based on differing rates of meat consumption. A regular short-term human-focused utilitarian view could also come to the same conclusion, based on international differences in life expectancy and average happiness. In fact, the same basic argument that people in the West contribute more to the global economy can be used to argue for differing priorities even on a short-run worldview.

The fact that so many views are vulnerable to this objection doesn’t mean the objection is wrong. But it’s still not clear what the objection even is. Assuming that saving people in wealthier countries is the best thing for global welfare, why should anyone object to it?

One could worry that sharing such an ideology will cause people to become white or Asian supremacists. On this worry, whenever you give people a reason to prefer saving a life in an advanced country (the USA, France, Japan, South Korea, etc.) over saving a life in a poor country, you risk turning them into a white or Asian supremacist, because the richer countries happen, on average, to have people of different races than the poorer countries. But hundreds of millions of people believe in one of the various ideologies that place a higher priority on saving people in their own countries, yet only a tiny minority become racial supremacists. Therefore, even if these ideologies do cause racial supremacism, the effect size is extremely small – not enough to pose a meaningful argument here. I also suspect that if you actually look at how racial supremacists become radicalized, the real causes will turn out to be something other than rational arguments about the long-term collective progress of humanity.

One might say that it’s still useful for Effective Altruists to insert language in relevant papers disavowing racial supremacism, because there is still a tiny risk of radicalizing someone, and isn’t it very cheap and easy to insert such language and make sure that no one gets the wrong idea? But any reasonable reader will already know that Effective Altruists are not racial supremacists and don’t like the ideology one bit.

Far-right people generally believe that a strong liberal bias afflicts Effective Altruism, the mainstream media and academia, so even if Effective Altruists said we disavowed racial supremacism, far-right readers would view it as a meaningless and predictable political line. To a reader who is centrist or conservative but not far-right, such a statement may seem ridiculous, suggesting that the author is paranoid or possessed of a very ‘woke’ ideology, and this would harm the reputation of the author and of Effective Altruists more generally. To anyone who isn’t already thinking about these issues, the insertion of a statement against racial supremacism may seem jarring – a signal that the author is in fact associated with racial supremacism and is trying to deny it. If someone denies alleged connections to racial supremacism, their denial can be quoted and treated as evidence that the allegations against them are not spurious after all.

Finally, such statements take up space and make the document take longer to read. When asked, you should definitely respond directly with “I oppose white supremacism,” but preemptively inserting disclaimers for every reader seems like a bad policy.

So much for the racial supremacism worries. Still, one could say that it’s morally wrong to give money to save the lives of wealthier people, even if that is actually the most effective and beneficial thing to do. But this argument only makes sense if you hold an egalitarian moral framework, like that of Rawls, and you don’t believe that broadly improving humanity’s progress will help some extremely badly-off people in the future.

In that case, you will have a valid moral disagreement with the longtermist rich-country-productivity argument. However, the objection is superfluous, because your egalitarian view simply rejects the long-term priorities in the first place: it already implies that we should give money to save the worst-off people now, not happy people in the far future and not even people in 2040 or 2080 who will be harmed by climate change. (Also note that Rawls’ strict egalitarianism is wrong anyway, as his “original position” argument should ultimately be interpreted as supporting utilitarianism.)

Do long-term views prioritize people in the future over people today?

They do in the same sense that they prioritize the people of Russia over the people of Finland. There are more Russians than Finns. There is nothing wrong with this.

On an individual basis, the prioritization will be roughly similar, except that future people may live longer and be happier (making them a higher priority to save) and may be harder to understand and reliably help (making them a lower priority to save).

Again, there is nothing wrong with this.

Will long-term EAs ignore short-term harms?

No, for three reasons. First, short-term harms are generally slight probabilistic long-term harms as well. If someone dies today, that makes humanity grow more slowly and makes the world a more volatile place. So sacrificing many people immediately in order to obtain speculative long-run benefits does not make sense in the real world, even under a fanatical long-term view.

Second, EAs recognize some of the issues with long-term planning and, given general uncertainty about our ability to predict and change the future, will incorporate some caution about incurring short-run costs.

Third, in the real world, these are all speculative philosophical trolley problems. We live in a lawful, ordered society where causing short-term harms results in legal and social punishments, which makes it irrational for people with long-term priorities to try to take harmful actions.

Following the previous discussion of racial supremacism, one might wonder whether being associated with white supremacism is good or bad for public relations in the West these days. The evidence clearly shows that it is bad for PR.

A 2017 Reuters poll asked people whether they favored white nationalism: 8% supported it and 65% opposed it. When asked about the alt-right, 6% supported it and 52% opposed it. When asked about neo-Nazism, 4% supported it and 77% opposed it. These results show a clear majority opposing white supremacism, and even the few who express support could be dismissed per Lizardman's Constant.

These proportions shift further when you look at elites in government, academia and wealthy corporate circles, among whom white supremacism is essentially nonexistent. Many who oppose it do not merely disagree with it; they actively abhor it.

Abhorrence of white supremacism extends to many concrete actions to suppress it and related views in intellectual circles. For examples, see the “Academia” section in the Candidate Scoring System, and this essay about infringements upon free speech in academia. And consider Freddie DeBoer’s observation that “for every one of these controversies that goes public, there are vastly more situations where someone self-censors, or is quietly bullied into acquiescing. For every odd example that goes viral, there is no doubt dozens more that occur behind closed doors.”

White supremacism is also generally banned on social media, including Reddit and Twitter. And deplatforming works.

For the record, I think that deplatforming white supremacists – people like Richard Spencer – is often a good thing. But I am under no illusions about the way things work.

One could retort that being wrongly accused of white supremacism can earn one public sympathy from certain influential heterodox figures, like Peter Thiel and Sam Harris, who are often inclined to defend some of the people accused of white supremacism, such as Charles Murray, Noah Carl and others. However, this defense only happens as a partial pushback against broader ‘cancellation’ conducted by others. The defense usually focuses on academic freedom and behavior rather than on whether the actual ideas are correct. It can gain ground with some of the broader public, but elite corporate and academic circles remain opposed.

And even among the broader public and in political spheres, the Very Online IDW type who pays attention to these re-platformed people is actually pretty rare. Most people in the real world are rather politically disengaged, have no love for ‘political correctness’ or for those regarded as white supremacists, and don’t pay much attention to online drama. And far-right people are often excluded even from right-wing politics. For instance, the Heritage Foundation, a right-wing think tank, made someone resign following controversy over his argument for giving priority in immigration law to white people based on IQ.

All in all, it’s clear that being associated with white supremacism is bad for PR.

Summary: what are the good reasons to disagree with longtermism?

Reason 1: You don’t believe that very large numbers of people in the far future add up to a very big moral priority. For instance, you may reject aggregation. Alternatively, you may take a Rawlsian moral view combined with the assumption that the worst-off people whom we can help are alive today.

Reason 2: You predict that interstellar travel and conscious simulations will not be adopted and humanity will not expand.

Honorable mention 1: If you believe that future technologies like transhumanism will create a bad future, then you will still focus on the long run, but with a more pessimistic viewpoint that worries less about existential risk.

Honorable mention 2: If you don’t believe in making trolley-problem-type sacrifices, you will have a mildly different theoretical understanding of longtermism than some EA thinkers who have characterized it from a more consequentialist angle. In practice, it’s unclear whether there will be any difference.

Honorable mention 3: If you are extremely worried about the social consequences of giving people a strong motivation to fight for the general progress of humanity, you will want to keep longtermism a secret, private point of view.

Honorable mention 4: If you are extremely worried about the social consequences of giving people in wealthy countries a strong motivation to give aid to their neighbors and compatriots, you will want to keep longtermism a secret, private point of view.

There are other reasons to disagree with long-term priorities (mainly, uncertainty in predicting and changing the far future), but these are the takeaways from the ideas I’ve discussed here.

A broad plea: let’s keep Effective Altruism grounded

Many people came into Effective Altruism from moral philosophy, or at least think about it in very rigorous philosophical terms. This is great for giving us rigorous, clear views on a variety of issues. However, there is a downside. The urge to systematize everything to its logical conclusions inevitably leads to cases where the consequences are counterintuitive. Moral philosophy has tried for thousands of years to come up with a single moral theory, and it has failed, largely because any consistent moral theory will have counterintuitive or absurd conclusions in edge cases. Why would Effective Altruism want to be like a moral theory, burdened by edge cases that don’t matter in the real world? And if you are a critic of Effective Altruism, why would you want to insert yourself into the kind of debate where your own views can be shown to have similar problems?

Effective Altruism can instead be a more grounded point of view, a practical philosophy of living like Stoicism. Stoics don’t worry about what they would do if they had to destroy an innocent country in order to save Stoic philosophy, or other nonsense like that. And the critics of Stoicism don’t make those kinds of objections. Instead, everything revolves around a simple question whose answers are inevitably acceptable: how can I realistically live the good life? (Or something like that. I don’t actually know much about Stoicism.)

Effective Altruism certainly should not give up formal rigor in answering our main questions. However, we should be careful about which questions we seek to answer, and about which questions we use as the basis for criticizing other Effective Altruists. We should focus on the questions that really matter for deciding practical things like where we will work, where we will donate and whom we will vote for. If you have in mind some unrealistic, fantastical scenario about how utility could be maximized in a moral dilemma, (a) don’t talk about it, and (b) don’t complain about what other Effective Altruists say or might have to say about it. It’s pointless on both sides.