CEA Should Invest in Helping Altruists Navigate Advanced AI

Since CEA is looking for a new executive director and is open to that person pursuing a dramatically different strategy, now seems like a good time to suggest possible strategic directions for candidates to consider (I’d love to see posts suggesting a variety of directional shifts).

I suggest that CEA increase its focus on helping altruists determine how to have the greatest impact in a world where AI capabilities are progressing rapidly. I’m proposing a discussion much broader than AI safety: for instance, weighing short-term interventions like malaria nets against long-term interventions like economic growth, or developing scalable AI products for good. However, we should not assume that current progress will continue unabated; the discussion should also cover the possibility that we are in an ‘AI bubble’[1].

This would be an additional stream of activity; I’m definitely not suggesting a complete pivot. For example, virtual programs could offer a course on this alongside Intro and In-Depth. Local groups would mostly continue as before, but new content on these topics would be made available for them to use if they wished.

Other activities that could contribute to such an agenda include talks and discussions at EA conferences, online debates, essay contests, and/or training movement builders on how to assist others working through these questions.

Alternatively, CEA could pick a “yearly theme” and make advanced AI the theme for one year.

Topics to explore include:

  • The AI Landscape: The current and potential future state of AI, including capabilities, recent progress, timelines, and whether we might just be in a ‘bubble’.

  • AI X-risks: Could superintelligence pose an existential threat? If so, how likely is this, and how can we help (technical work, policy, community building, etc.)? What are the main arguments of skeptics?

  • Mindcrimes: Is AI likely to be sentient? If so, could running certain AI systems constitute a mindcrime?

  • AI and other X-risks: Might advanced AI help mitigate, or instead worsen, other existential risks (e.g. bio, nano)?

  • Responsible AI: Even if AI doesn’t pose an x-risk, how well we transition to advanced AI could be the main determinant of our future. How can we manage this transition responsibly?

  • AI and Democracy: How worried should we be about people using AI to undermine democracy or spread misinformation?

  • AI and Global Poverty: How does the advance of AI affect our global poverty efforts? Does it render them irrelevant? Should we be integrating AI into our efforts? Should we be focusing more on short-term interventions rather than long-term ones, since they are less likely to be made irrelevant?

  • Animal Rights: How do increasing AI capabilities affect the development of alternative proteins? Should the animal arm of EA focus more on ensuring that the transition to advanced AI goes well for animals (such as through moral circle expansion) rather than on the near term? What opportunities could advanced AI open up for relieving wild animal suffering?

  • Applications of AI: Can we develop AI to improve education, wisdom, and mental health? Might such applications have unintended consequences?

Why focus here?

  • There’s been a rapid advance in AI capabilities recently, with little sign of slowing. Meanwhile, AI is becoming an increasing focus among people in EA, with many considering an AI safety pivot.

  • My proposal provides a way for CEA to adapt to these changes while also running activities relevant to most of the community. This matters because while part of the community is sold on AI x-risk, a substantial part isn’t, and we should help those who are skeptical of the x-risk arguments figure out the broader implications of AI.

  • There’s a lot of interest in AI in wider society. This content might attract new members who would not want to take a general EA fellowship.

  • Facilitating discussions on new considerations is one way to prevent a community from intellectually stagnating[2].

  1. ^

    I strongly disagree with this, but it is a discussion worth having.

  2. ^

    This is another reason why I am suggesting exploring a theme that would include a broad range of discussions rather than just focusing on x-risk.