Longtermists Should Work on AI—There is No “AI Neutral” Scenario

Summary: If you’re a longtermist (i.e., you believe that most of the moral value lies in the future) and you want to prioritize impact in your career choice, you should strongly consider either working on AI directly or working on things that will positively influence the development of AI.

Epistemic Status: The claim is strong, but I’m fairly confident (>75%) in it. The main crux is how bad biorisks could be and how their risk profile compares with that of AI. I’ve spent at least a year thinking about advanced AI and its implications for everything, including much of today’s decision-making, and I’ve reoriented my career towards AI based on these thoughts.

The Case for Working on AI

If you care a lot about the very far future, you probably want two things to happen: first, you want to ensure that humanity survives at all; second, you want to increase the growth rate of good things that matter to humanity—for example, wealth, happiness, knowledge, or anything else that we value.

If we increase the growth rate earlier and by more, this will have massive ripple effects on the very longterm future. A minor increase in the growth rate now means a huge difference later. Consider the spread of COVID-19: minor differences in the R number had huge effects on how fast the virus could spread and how many people eventually caught it. So if you are a longtermist, you should want to increase the growth rate of whatever you care about as early as possible, and by as much as possible.
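To make the compounding point concrete, here is a minimal sketch in Python. The growth rates (2% vs. 3%) and the horizon (200 periods) are arbitrary illustrative assumptions, not estimates of anything real.

```python
# Purely illustrative: how a small difference in growth rate compounds over time.
# The rates (2% vs. 3%) and the horizon (200 periods) are arbitrary assumptions,
# not estimates of any real-world quantity.

def compound(initial: float, rate: float, periods: int) -> float:
    """Value after `periods` of steady exponential growth at `rate` per period."""
    return initial * (1 + rate) ** periods

baseline = compound(1.0, 0.02, 200)  # ~52x the starting value
boosted = compound(1.0, 0.03, 200)   # ~369x the starting value

print(f"2% growth for 200 periods: {baseline:.0f}x")
print(f"3% growth for 200 periods: {boosted:.0f}x")
print(f"Ratio between the two endpoints: {boosted / baseline:.1f}x")  # ~7x
```

A single extra percentage point of growth, sustained over the whole horizon, leaves the endpoint roughly seven times larger, and starting the boost earlier only widens the gap.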

For example, if you think that every additional happy life in the universe is good, then you should want the number of happy humans in the universe to grow as fast as possible. AGI is likely to be able to help with this, since it could create a state of abundance and enable humanity to quickly spread across the universe through much faster technological progress.

AI is directly relevant to both longterm survival and longterm growth. When we create a superintelligence, there are three possibilities:

  1. The superintelligence is misaligned and it kills us all;

  2. The superintelligence is misaligned with our objectives but is benign;

  3. The superintelligence is aligned, and can therefore help us increase the growth rate of whatever we care about.

Longtermists should, of course, be eager to prevent the development of a destructive misaligned superintelligence. But they should also be strongly motivated to bring about the development of an aligned, benevolent superintelligence, because increasing the growth rate of whatever we value (knowledge, wealth, resources…) will have huge effects into the longterm future.

Some AI researchers focus more on the ‘carrot’ of aligned benevolent AI, others on the ‘stick’ of existential risk. But the point is, AI will likely either be extremely good or extremely bad—it’s difficult to be AI-neutral.

I want to emphasize that my argument only applies to people who want to strongly prioritize impact. It’s fine for longtermists to choose not to work on AI for personal reasons. Most people value things other than impact, and big career transitions can be extremely costly. I just think that if longtermists really want to prioritize impact above everything else, then AI-related work is the best thing for (most of) them to do; and if they want to work on other things for personal reasons, they shouldn’t be tempted by motivated reasoning to believe that they are working on the most impactful thing.

Objections

Here are some reasons why you might be unconvinced by this argument, along with reasons why I find these objections unpersuasive or unlikely.

You might not buy this argument because you believe one of the following things:

You want to take a ‘portfolio approach’

Some EAs take a ‘portfolio approach’ to cause prioritization, thinking that since the most important cause is uncertain, we should divide our resources between many plausibly-important causes.

A portfolio approach makes sense when you have comparable causes, and/or when there are decreasing marginal returns on each additional resource spent on one cause. But in my opinion, this isn’t true for longtermists and AI. First, the causes here are not comparable: no other cause has such large upsides and downsides. Second, the altruistic returns on AI work are so immensely high that even with decreasing marginal returns, there is still a large difference between this opportunity and our second-biggest priority.

There’s a greater existential risk in the short term

You might think that something else currently poses an even greater existential risk than AI. I think this is unlikely, however. First, I’m confident that of the existential risks known to EAs, none is more serious than the risk from AI. Second, I think it’s unlikely that there is some existential risk that is known to a reader but not to most EAs, and that is more serious than AI risk.

In The Precipice, Toby Ord estimates that an existential catastrophe from unaligned AI is roughly three times more likely than one from engineered pandemics, which he ranks as the biggest risk after AI. Many people, including me, think that Ord vastly overestimates biorisk, and that our chances of going extinct from a biological disaster are actually very small.

One feature that seems crucial to extinction-level pandemic scenarios is whether a virus can spread stealthily, and for how long. I think metagenomic sequencing is likely to let us rule out the ‘stealth virus’ scenario within the next few years, which should make extinction from stealthy pathogens even less likely; I therefore believe the risk of extinction from pathogens over the next few decades is very low. If there is any X-risk from pathogens this century, I think it is heavily concentrated in the second half of the century. For those interested, I wrote a more detailed post on scenarios that could lead to X-risks via biorisks. The most likely way I could be wrong here is if the minimum viable population were not around 1,000 people but greater than 1% of the world population, or if an irrecoverable collapse were very likely even above these thresholds.

On the other hand, transformative AI (TAI) will probably be developed within the next few decades, according to Ajeya Cotra’s report on biological anchors (whose estimates arguably represent an upper bound on TAI timelines).

Others have argued that nuclear war and climate change, while they could have catastrophic consequences, are unlikely to cause human extinction.

A caveat: I’m less certain about the risks posed by nanotechnology. I don’t think it poses a risk comparable to AI, but I would expect it to be the second-biggest source of existential risk after AI.

See here for a database of various experts’ estimates of existential risk from various causes.

It’s not a good fit for you

I.e., you have skills or career capital that make it suboptimal for you to switch into AI. This is possible, but given that both AI governance and AI safety need a wide range of skills, I expect it to be pretty rare.

By wide range, I mean very wide. So wide that I think that even most longtermists with a biology background who want to maximize their impact should work on AI. Let me give some examples of AI-related career paths that are not obvious:

  • Community building (general EA community building or building the AI safety community specifically).

  • Communications about AI (to targeted audiences such as the ML community).

  • Increasing the productivity of people who do direct AI work by working with them as a project manager, coach, executive assistant, writer, or other key support roles.

  • Making a ton of money (I expect this to be very useful for AI governance as I will argue in a future post).

  • Building influence in politics (I expect this to be necessary for AI governance).

  • Studying psychology (e.g. what makes humans altruistic) or biology (e.g. evolution). These questions are relevant to AI because they sharpen our understanding of optimization dynamics, which is key to predicting what we should expect from gradient descent. PIBBSS is an example of this kind of approach to the AI problem.

  • Working as a UX designer for EA organizations such as 80k.

  • Writing fiction about AGI that depicts plausible scenarios (rather than, e.g., Terminator-style robots); the only example of this type of fiction I know of is Clippy.

There is something that will create more value in the long-term future than intelligence

This could be the case; but I give it a low probability, since intelligence seems to be highly multipurpose, and a superintelligent AI could help you find or increase this other thing more quickly.

It’s not possible to align AGI

In this case, you should focus on stopping the development of AGI, or on trying to develop AGI that is unaligned but still beneficial.

AGI will be aligned by default

If you don’t accept the orthogonality thesis, or otherwise aren’t worried about misaligned AGI, then you should work to ensure that the governance structure around AGI is favorable to what you care about, and that AGI arrives as soon as possible within that structure, because then we can start increasing the growth rate of whatever we care about sooner.

You’re really sure that developing AGI is impossible

This is hard to justify: the existence of humans proves that general intelligence is feasible.

Have I missed any important considerations or counter-arguments? Let me know in the comments. If you’re not convinced of my main point, I expect it’s because you disagree with the following crux: there isn’t any short-term X-risk that is nearly as important as AGI. If that’s the case, especially if you think that biorisks could be equally dangerous, tell me in the comments and I’ll consider writing about this topic in more depth.

Non-longtermists should also consider working on AI

In this post I’ve argued that longtermists should consider working on AI. I also believe the following stronger claim: whatever you care most about will likely be radically transformed by AI fairly soon, so you should care about AI and work on something related to it. I didn’t argue for this claim here because it would have required significantly more effort. However, if you care about causes such as poverty, health, or animals, and you think your community could update based on a post saying “Cause Y will be affected by AI”, leave a comment and I’ll think about writing it.

This post was written collaboratively by Siméon Campos and Amber Dawn Ace as part of Nonlinear’s experimental Writing Internship program. The ideas are Siméon’s; Siméon explained them to Amber, and Amber wrote them up. We would like to offer this service to other EAs who want to share their as-yet unwritten ideas or expertise.

If you would be interested in working with Amber to write up your ideas, fill out this form.