The “TESCREAL” Bungle


A specter is haunting Silicon Valley — the specter of TESCREALism.

“TESCREALism” is a term coined by philosopher Émile Torres and AI ethicist Timnit Gebru to refer to a loosely connected group of beliefs popular in Silicon Valley. The acronym unpacks to:

Transhumanism — the belief that we should develop and use “human enhancement” technologies that would give people everything from indefinitely long lives and new senses like echolocation to math skills that rival John von Neumann’s.

Extropianism — the belief that we should settle outer space and create or become innumerable kinds of “posthuman” minds very different from present humanity.

Singularitarianism — the belief that humans are going to create a superhuman intelligence in the medium-term future.

Cosmism — a near-synonym of extropianism.

Rationalism — a community founded by AI researcher Eliezer Yudkowsky, which focuses on figuring out how to improve people’s ability to make good decisions and come to true beliefs.

Effective altruism — a community focused on using reason and evidence to improve the world as much as possible.

Longtermism — the belief that one of the most important considerations in ethics is the effects of our actions on the long-term future.[1]

TESCREALism is a personal issue for Torres,[2] who used to be a longtermist philosopher before becoming convinced that the ideology was deeply harmful. But the concept is beginning to go mainstream, with endorsements in publications like Scientific American and the Financial Times.

The concept of TESCREALism is at its best when it points out the philosophical underpinnings of many conversations occurring in Silicon Valley — principally about artificial intelligence but also about everything from gene-selection technologies to biosecurity. Eliezer Yudkowsky and Marc Andreessen — two influential thinkers Torres and Gebru have identified as TESCREAList — don’t agree on much. Yudkowsky believes that with our current understanding of AI we’re unable to program an artificial general intelligence that won’t wipe out humanity; therefore, he argues, we should pause AI research indefinitely. Andreessen believes that artificial intelligence will be the most beneficial invention in human history: People who push for delay have on their hands the blood of the starving people and sick children whom AI could have helped. But their very disagreement depends on a number of common assumptions: that human minds aren’t special or unique, that the future is going to get very strange very quickly, that artificial intelligence is one of the most important technologies determining the trajectory of the future, and that intelligences descended from humanity can and should spread across the stars.[3]

As an analogy, Republicans and Democrats don’t seem to agree about much. But if you were explaining American politics to a medieval peasant, the peasant would notice a number of commonalities: that citizens should choose their political leaders through voting, that people have a right to criticize those in charge, that the same laws ought to apply to everyone. To explain what was going on, you’d call this “liberal democracy.” Similarly, many people in Silicon Valley share a worldview that is unspoken and, all too often, invisible to them. When you mostly talk to people who share your perspective, it’s easy not to notice the controversial assumptions behind it. We learn about liberal democracy in school, but the philosophical underpinnings beneath some common debates in Silicon Valley can be unclear. It’s easy to stumble across Andreessen’s or Yudkowsky’s writing without knowing anything about transhumanism. The TESCREALism concept can clarify what’s going on for confused outsiders.

However, Torres is rarely careful enough to distinguish between people’s beliefs and the premises behind the conversations they’re having. They act as if everyone who believes one of these ideas believes in all the rest. In reality, it’s not uncommon for, say, an effective altruist to be convinced by the arguments that we should worry about advanced artificial intelligence without accepting transhumanism or extropianism. All too often, Torres depicts TESCREALism as a monolithic ideology — one they characterize as “profoundly dangerous.” To them, TESCREALism is “a new, secular religion, in which ‘heaven’ is something we create ourselves, in this world,” invented by “a bunch of 20th-century atheists [who] concluded that their lives lacked the meaning, purpose and hope provided by traditional religion.”

Atheists, who don’t expect justice to come from an omnibenevolent God or a blissful afterlife, have sought meaning, purpose, and hope in improving this world since at least the writing of the 1933 Humanist Manifesto.[4] It is perfectly natural and not especially sinister. If a community working together to create a better world is enough to qualify as a religion, I’m all for religion.

Torres’ primary argument that TESCREALism is dangerous centers on the fondness effective altruists, rationalists, and longtermists have for wild thought experiments — and what those experiments might imply about what we should do. Torres critiques philosopher Nick Bostrom for arguing that very tiny reductions in the risk of human extinction outweigh the certain death of many people who currently exist, Eliezer Yudkowsky for arguing that we should prefer to torture one person rather than allow more people than there are atoms in the universe to get dust specks in their eyes, and effective altruists (as a group) for arguing that it might be morally right to work for an “evil” organization and donate the money to charity.
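For what it’s worth, the dust-speck conclusion is just aggregation arithmetic. A minimal sketch in notation of my own (not Yudkowsky’s): let ε be the disutility of one dust speck, D the disutility of the torture, and N the number of people who get specked.

```latex
% Aggregation sketch behind the dust-speck argument (my notation):
\[
  N \cdot \varepsilon \;>\; D
  \quad\Longleftrightarrow\quad
  N \;>\; \frac{D}{\varepsilon},
\]
% and for any finite D and any \varepsilon > 0, a number like "more
% people than there are atoms in the universe" easily clears D/\varepsilon.
```

Each step of the arithmetic is innocuous; only the conclusion is repugnant, and that gap is exactly what the thought experiment is probing.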

It seems like the thing Torres might actually be objecting to is analytic ethical philosophy.

Effective altruists, rationalists, and longtermists have no monopoly on morally repugnant thought experiments. Analytic ethical philosophy is full of them. Should you tell the truth to the Nazi at your door about whether there are Jews in your basement? If you’re in a burning building, should you save one child or ten embryos? If an adult brother and sister secretly have sex, knowing that they’re both unable to conceive children, and they both had a wonderful time and believe the sex brought them closer and made their relationship better, did they do something wrong, and if so, why? Ethical philosophers argue both sides of these and many other morally repugnant questions. They’re trying to poke at the edge cases within our intuitions, the places where our intuitive sense of good and bad doesn’t match up with our stated ethical principles.

Outside the philosophy classroom, ethicists mostly ignore the findings of their own field, as philosophers Joshua Rust and Eric Schwitzgebel have shown in a clever series of studies. Ethicists ignore ethical philosophy in ways we like (presumably even the most committed Kantian would lie if there were actually a Nazi at the door), but also in ways we don’t like (not donating to charity). Rationalists and effective altruists are unusual because they act on some of the conclusions of ethical philosophy outside the classroom — and therein, of course, lies the danger.

In practice, Torres has found little evidence that effective altruists, rationalists, and longtermists have carried these particular thought experiments through to their conclusions. No one has access to more people than there are atoms in the universe, much less the ability to put dust specks in their eyes. 80,000 Hours, a nonprofit that provides career advice and conducts research on which careers have the most positive impact,[5] has consistently advised against taking harmful jobs.

Torres gives an example of an “evil organization” at which effective altruists recommend people work: the proprietary trading firm Jane Street. But Jane Street seems, at worst, useless. There are many criticisms to be made of a system in which people earn obscene amounts of money making sure that the price of a stock in Tokyo equalizes with the price of a stock in London slightly faster than it otherwise would. But if someone is going to pay millions of dollars for people to do that, the money might as well go to people who will spend it on medicine for poor children rather than to people who will spend it on a yacht. It’s dumb to dump money from helicopters, but if someone dumps a million dollars in front of my house, I’m going to take it and donate it. It’s true that Sam Bankman-Fried, an effective altruist who worked at Jane Street, went on to commit an enormous fraud — but the fraud was universally condemned by members of the effective altruist community. People who do evil things exist in every sufficiently large social movement; that doesn’t mean the movement itself recommends evil.

The most important thought experiment — in terms of the weight Torres gives it and how TESCREALists actually behave — is about trade-offs related to so-called existential risk: the risk of either human extinction or a greatly curtailed future (such as a 1984-style dystopia). While most TESCREALists are worried about a range of existential risks, including bioengineered pandemics, the one most discussed by Torres is advanced artificial intelligence. Many experts in the field worry that we’ll develop extraordinarily powerful artificial intelligences without knowing how to get them to do what we want. If a normal computer program is seriously malfunctioning, we can turn it off until we figure out how to debug it. But a so-called “misaligned” artificial intelligence won’t want us to turn it off — and may well drive us extinct so we can’t.
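To make that incentive concrete, here is a deliberately crude toy model — invented numbers, my own construction, not anyone’s actual research code — of an agent that simply picks whichever action maximizes its expected reward:

```python
# Toy model of the off-switch incentive described above.
# All numbers are invented for illustration.

P_SHUTDOWN = 0.9       # chance the humans press a working off switch
FUTURE_REWARD = 10.0   # reward the agent expects from continued operation
DISABLE_COST = 0.5     # small cost of tampering with the switch

# Expected reward of each action, from the agent's point of view:
ACTIONS = {
    "comply_with_shutdown": 0.0,                         # off = no more reward
    "keep_working": (1 - P_SHUTDOWN) * FUTURE_REWARD,    # switch may still be pressed
    "disable_off_switch": FUTURE_REWARD - DISABLE_COST,  # secures the future reward
}

def best_action(expected_rewards: dict[str, float]) -> str:
    """Return the action with the highest expected reward."""
    return max(expected_rewards, key=expected_rewards.get)

print(best_action(ACTIONS))  # -> "disable_off_switch"
```

Nothing exotic is happening here: shutdown simply scores zero, so any action that protects the agent’s future reward beats complying with it.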

People who are worried about risks from advanced artificial intelligence generally expect that it will come very soon; the models they build typically predict that we’ll develop it long before 2100. No significant number of people are saying, “Well, I think that in 999,999,999 out of 1,000,000,000 worlds we won’t invent an artificial intelligence in the next two hundred years, but I’ve completely reshaped my entire life around it anyway, because there are so many potential digital minds I could affect.”

It’s true that TESCREAList philosophers often debate Pascal’s mugging arguments: arguments that you should (say) be willing to kill four people for an infinitesimal decrease in existential risk. But Pascal’s mugging arguments are generally considered undesirable paradoxes, and TESCREAList philosophers often work on trying to find a convincing, solid counterargument.[6] It’s simply convenient for Torres’ case to pretend otherwise.
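The structure of the problem is ordinary expected-value arithmetic; with illustrative magnitudes of my own choosing:

```latex
% Pascal's mugging in one line (illustrative magnitudes mine):
% a 10^{-18} reduction in extinction risk, valued over 10^{30}
% potential future lives, nominally outweighs four certain deaths:
\[
  10^{-18} \times 10^{30} \;=\; 10^{12}\ \text{expected lives}
  \;\gg\; 4\ \text{lives}.
\]
% It is a paradox precisely because the arithmetic is valid while
% the conclusion is one nearly everyone rejects.
```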

Many rationalists, effective altruists, and longtermists talk about a concept called “getting off the crazy train.” Rationalists, effective altruists, and longtermists don’t want to be the hypocritical ethics professor who talks about the moral necessity of donating most of your income to help the global poor and then drives home in a Cadillac. They also don’t want to commit genocide because of a one-in-one-billion chance that it would prevent extinction. It makes sense to get off the crazy train at some point. Human reason is fallible; it’s far more likely that you would mistakenly believe that this genocide is justified than that it actually is.

But it’s difficult to pick any sort of principled stop at which to deboard the crazy train. Some people are bought in on AI risk but don’t accept that a universe with more worse-off people can be better than a universe with fewer better-off people. Some people work on preventing bioengineered pandemics and donate a fifth of their salaries to buy malaria nets. Some people work on vaccines while worrying that everything will be pointless when the world ends. Some people say, “I might believe we live in a simulation, but I don’t accept infinite ethics; that stuff’s too wild,” even though the exact distinction being made here is unclear to anyone else. And everyone shifts uncomfortably and wants to change the subject when the topic of how they made these decisions comes up.

But there’s one particular stop on the crazy train Torres worries the most about. They critique longtermism sharply:

According to the longtermist framework, the biggest tragedy of an AGI apocalypse wouldn’t be the 8 billion deaths of people now living. This would be bad, for sure, but much worse would be the nonbirth of trillions and trillions of future people who would have otherwise existed. We should thus do everything we can to ensure that these future people exist, including at the cost of neglecting or harming current-day people — or so this line of reasoning straightforwardly implies.

They ask, “If the ends can justify the means, and the end is paradise, then what exactly is off the table for protecting and preserving this end?” In short, TESCREALists are so in love with the idea of a far-off paradise that they are willing to sacrifice the needs of people currently living.

At first blush, it seems insensitive, even cruel, to prioritize people who don’t exist over people who do. But it’s difficult to hold common-sense views about a number of issues without caring about future people. For example, the negative effects of climate change fall mostly on people who don’t exist yet — and that was even more true in the late 1980s, when the modern consensus around climate change was first coalescing. Should we tolerate higher gas prices now to keep an island from sinking underwater a century from now? After all, high gas prices harm people choosing between dinner and the gas they need to get to work right now. Why not just pollute as much as we want and stick future generations with the bill?

Longtermism may rearrange our priorities, but it won’t fundamentally replace them. Large effective altruist funders such as Open Philanthropy generally adopt a “portfolio” approach to doing good, including both charities that primarily affect present people and charities that primarily affect future people. Effective altruists are trying to pick the lowest-hanging fruit to make the world a better place. If you’re in an orchard, you’ll do much better picking the easily picked apples from as many trees as you can, rather than hunting for the tree with the most apples and stripping them all off while saying, “This tree has the most apples, and therefore no matter how hard it is to climb, all its apples must be the easiest to get!” Even if the long-term future is overwhelmingly important, we may run low on opportunities that outweigh helping people who already exist. (In fact, the vast majority of people in history were uncontroversially in this position.)

Further, the common-sense view is that, all else being equal, things that are good for humanity in the short run are good for humanity in the long run. Great-power war and political instability increase the risk of AI race dynamics or the release of deadly bioengineered pandemics. If humanity is going to face future challenges head-on, it would help if more of its members were well-fed, well-educated, and not sick with malaria.

Torres worries that longtermists would deprioritize climate change relative to other concerns. But to the extent that longtermism changes our priorities, it might make climate change more important. Toby Ord estimates a one-in-a-thousand chance that climate change causes human extinction. If you’re not a longtermist, that suggests we should prioritize climate change a bit more than we currently do. If you are a longtermist, it suggests we should seriously consider temporarily banning airplanes.
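The reversal is the same expected-value arithmetic as before, where N stands for however much future value one thinks extinction would forfeit:

```latex
% Why longtermism can make climate change *more* pressing:
\[
  \text{expected loss} \;=\; \underbrace{\tfrac{1}{1000}}_{\text{Ord's estimate}} \times N,
\]
% and for the astronomical N longtermists entertain, this dwarfs the
% cost of nearly any mitigation policy -- airplane bans included.
```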

Present-day longtermists aren’t campaigning to ban airplanes, because they believe that other threats pose even larger risks of human extinction. The real disagreement between Torres and longtermists is about factual matters. If you believe that artificial intelligence might drive us extinct in 30 years, you worry more about artificial intelligence; if you don’t, you worry more about climate change. The philosophy doesn’t really enter into it.

Torres hasn’t established that TESCREALists are doing anything extreme. Actions taken by TESCREALists that Torres frowns on include:

Participating in governments, foreign policy circles, and the UN.

Fundraising.

Giving advice to people about how to talk to journalists.

Reaching out to people who are good communicators and thought leaders to convince them of things.

Following social norms and avoiding needless controversy.

Trying to avoid turning people off unnecessarily.

All social movements do these things. It isn’t a dark conspiracy for a movement to try to achieve its goals, especially if the movement’s philosophy is that we should direct our finite resources toward doing the most possible good.

Torres has received death threats and harassment. I — like any minimally decent person — condemn death threats and harassment wholeheartedly. But harassment is an internet-wide problem, particularly for women and nonbinary people. If harassment were caused by TESCREAList extremism, people wouldn’t be sending each other death threats over not liking particular movies. If even one in ten thousand people thinks sending death threats is okay, critics will face death threats — but it’s unreasonable to hold the death threats against the 9,999 people who think death threats are wrong and would never send one. No major or even minor thinkers in effective altruism, transhumanism, the rationalist movement, or longtermism support harassment.

Torres is particularly concerned about TESCREALists cavalierly running the risk of nuclear war. They criticize Eliezer Yudkowsky for supporting a hypothetical international treaty that permits military strikes against countries developing artificial intelligence — even if those countries are nuclear powers and the action risks nuclear war.

But almost any action a nuclear power takes relating to another nuclear power could potentially affect the risk of nuclear war. The war in Ukraine, for example, might increase the risk that Vladimir Putin will choose to engage in a nuclear first strike. That doesn’t mean that NATO should have simply allowed the invasion to happen without providing any assistance to Ukraine. We must trade off the risk of nuclear war against other serious geopolitical concerns. As the world grows more dangerous, our risk calculus should include the dangers posed by emerging technologies, such as bioengineered pandemics and artificial intelligence. We shouldn’t engage in reckless nuclear brinkmanship, but similarly we shouldn’t be so concerned about nuclear war that we miss a rogue country releasing a virus a thousand times more deadly and virulent than COVID-19.

Torres’ implication that only TESCREALists think this way is simply false. Eliezer Yudkowsky’s argument is no different from calculations that have been made by policymakers across the globe since 1945. If anything, longtermists are more cautious about nuclear war than many saber-rattling politicians for the same reasons they care more about climate change. For example, 80,000 Hours characterizes nuclear security as “among the best ways of improving the long-term future we know of,” although it’s “less pressing than our highest priority areas.”

Torres themself supports a moratorium — perhaps even a permanent one — on research into artificial intelligence. I have no idea how they believe this would be enforced without the threat of some form of military intervention. Lack of intellectual honesty about the costs of your preferred policies is not a virtue.

Paradoxically, although Torres believes that TESCREALists sacrifice the well-being of present-day people in the name of speculative hopes about the future, the policies Torres supports involve far more wide-ranging and radical sacrifices. They write:

[I]f advanced technologies continue to be developed at the current rate, a global-scale catastrophe is almost certainly a matter of when rather than if. Yes, we will need advanced technologies if we wish to escape Earth before it’s sterilised by the Sun in a billion years or so. But the crucial fact that longtermists miss is that technology is far more likely to cause our extinction before this distant future event than to save us from it.

The solution? For us “to slow down or completely halt further technological innovation.” In a different article, they call for an end to economic growth and to all attempts to “subjugate and control” nature.

It’s possible that Torres is phrasing their beliefs more strongly than they hold them. Perhaps they simply believe that we should avoid developing new technologies that pose an outsized risk of harm — a wise viewpoint originally developed by the TESCREAList philosopher Nick Bostrom.

But let’s say that Torres means what they say. Then let us be clear about the consequences of ending technological innovation, economic growth, and the control of nature. Throughout the vast majority of human history, only half of children survived to the age of 15; today, 96% do. Because of the Green Revolution and global transportation networks, for the first time in history, famine happens only if a government is too poorly run to take the simple steps necessary to prevent it. The only solution anyone has discovered for an effective end to poverty is economic growth. Before the Industrial Revolution, all but a tiny minority of elites lived in what we would currently consider extreme poverty.

Many disabled people rely on technology for their survival. If we end all attempts to control nature, innumerable disabled people will die, from people who need ventilators to breathe to premature babies in the NICU. I take a daily pill that treats the disease that would otherwise make my life unlivable; it costs pennies per dose. My six-year-old son has all human knowledge available at his fingertips, even if he mostly uses it to learn more about Minecraft. Due to our economic surplus, an unprecedented number of people have the education and free time to develop in-depth opinions about philosophical longtermism.

Technological progress continues to benefit the world. To pick only one example, since 2021, when Torres called for an end to technological innovation, solar technology has improved massively — making solar and other clean energy technologies one of our best hopes for fighting climate change. And while large language models get the headlines, most inventions solve the boring problems of ordinary people, as they always have: For example, while traditional cookstoves are a major cause of indoor air pollution, we have yet to develop clean cookstoves that most developing-world consumers want to use. Technology matters.

For all their faults, TESCREALists usually have a very concrete vision of the future they want: interstellar colonization, the creation of nonhuman minds that transcend their creators, technology giving us new abilities both earth-shattering (immortality!) and trivial (flight!). Torres’ vision is opaque at best.

Torres talks a lot about deliberative-democratic institutions and Indigenous wisdom. They call for “attunement to nature and our animal kin, not estrangement from them; humility, not growth-obsessed, technophilic, rocket-fueling of current catastrophic trends; lower birthrates, not higher; and so forth.” But they give few specifics about what they think a society marked by attunement to nature and humility and Indigenous wisdom would look like. Specifics about Torres’ ideal world, I think, would raise questions about what happens to the NICU babies.

Torres’ disagreement with TESCREALists is not about whether to care about future people; both sides do. It isn’t about whether we should sacrifice the well-being of current people in the hopes of achieving some future utopia: Although Torres criticizes utopian thinking, they engage in it themself. It isn’t even about what measures are acceptable to achieve utopia; Torres achieves moral purity by refusing to discuss how the transition to their ideal society would be accomplished.

It is entirely and exclusively about what the utopia ought to look like.

Many people find the TESCREAList vision of the future unappealing. The discussion of how we should shape the future should include more opinions from people who didn’t obsessively read science fiction novels when they were 16. But Torres’ critique of TESCREALism ultimately comes from an even more unappealing place: a complete rejection of technological progress.

Torres can dismiss all TESCREALists out of hand because Torres is opposed to economic growth and even the most necessary control of nature. Everyone else has to consider specific ideas. How likely is it that we’ll develop advanced artificial intelligence in the next century, and how much of a risk does it pose? What international treaties should we make about dangerous emerging technologies? Where should you get off the crazy train? These questions are important — and Torres’ critiques of TESCREALism don’t help us answer them.

  1. If they dropped “Cosmism,” the acronym could be REALEST, and it would be much less unwieldy.

  2. Going forward I’ll mostly be talking about Torres, who has written far more about their viewpoints.

  3. Of course, a large number of people working in tech — including many people working on artificial intelligence — have never heard of any of these ideologies.

  4. While the Humanist Manifesto was written by a religious humanist, many signers were atheists, and 20th-century humanist movements were generally secular.

  5. Their job board includes listings at many organizations under the effective altruism umbrella, as well as more traditional organizations like USAID and the Bill & Melinda Gates Foundation.

  6. One example is Nick Beckstead and Teruji Thomas’s paper, “A paradox for tiny probabilities and enormous values.”